By Galen Barbour
Have you ever loved anything unconditionally? If so, how could you be sure? The question has occupied philosophers and students of the mind for millennia. Now it has landed on the desks of scientists working on artificial intelligence. Quite possibly the most serious and provocative advance in technology now paws at what could be the final gate between human and machine: our emotions. At a recent lecture hosted by Consciousness Hacking, doctors Julia Mossbridge (IONS) and Ben Goertzel (OpenCog/Hanson Robotics) discussed their recent efforts to enable our machines to feel love. The irony? They may teach us more about this emotion than we know ourselves.
It’s a balmy Wednesday night in the SoMa district of San Francisco, at an event hosted by Consciousness Hacking. Spread throughout the room is a group of technophiles who use wearable technology such as biometric sensors and EEG machines. The group got its start in San Francisco but is now an active online community with over 3,700 Meetup members around the world.
The head organizer, Mikey Seigel, looks better suited to a spiritual retreat center, encouraging people to unplug, than to the fluorescent ambiance of office lights with everyone plugged in. But don’t let the bare feet and eccentric hair fool you. Mr. Seigel is pragmatically forward-thinking in ways that much of consumer-oriented tech culture misses.
By wearing devices that read heart rate, blood pressure and brain-wave activity, the Consciousness Hackers gain insight into how our bodies respond to certain emotions and mental states. Mr. Seigel postulates that by first visualizing our emotions we become more aware of them and better able to alter them. By stabilizing your emotional and physical states, you are then able to extend that balance and insight to the world around you, effectively making the world a healthier place.
“We have a real responsibility to ensure that the technology in our pockets, (that we’re) wearing on our wrists, soon going to be covering our eyes, implanted in our bodies. That the technology is supporting humanity in the deepest and most profound way possible.”
Our discussion tonight aims to build that goal into the design of A.I., a technology that, at the moment, does all of our finding for us: food, shelter, clothes, drugs, partners, rides, movies, music…the list goes on.
We are joined by Julia Mossbridge and Ben Goertzel to discuss a project they are working on that infuses A.I. with the ability to love, in the interest of humanity. It began with Goertzel’s involvement with Hanson Robotics, known internationally for its freakishly life-like robots. Goertzel’s open-source approach to artificial intelligence has earned him wide acclaim and drawn attention and momentum to his work.
Dr. Mossbridge is the founder of The Mossbridge Institute and creator of Choice Compass, a guidance app for those hard-to-make decisions. Mossbridge has focused her studies on the science of emotion, working closely with IONS (the Institute of Noetic Sciences). With Hanson as the body, Goertzel behind the mind, and Mossbridge providing the heart, this love-enabled A.I. could be the most developed human-like A.I. in creation.
Mossbridge elaborated that enabling an A.I. to love could “enhance the well-being of all beings. Humans interacting with an A.I. in this state are likely to feel increased unconditional love and are more likely to take actions to promote the well-being of themselves and others.”
This is where it gets sticky: by this definition of love, how can we really know the program actually loves? We ourselves have not come to an agreement on what love is. How can we be sure that the love being promulgated by the program is genuine and not just an elaborate simulated copy of emotion? “The reason why (this) is so hard is because we don’t know how to do this with other beings.” In the end, she theorizes, we must rely on trust, followed by further analysis.
This too becomes sticky, because we validate one unknown with another. Although these feelings are tacitly understood amongst us, they are still not well defined by science. And here you enter the crux of the A.I. community: the split between artificial intelligence (A.I.) and artificial general intelligence (A.G.I.).
A rough distinction between the two is that A.I. (also known as weak A.I. or reductive A.I.) is a program that works to reduce the number of possible outcomes in a domain based on certain parameters. These programs work toward a single function and are confined by the inputs of their domains; their answers are filtered through specific algorithms. This technology is the magic behind our sourcing apps (Netflix, Google, Pandora, etc.), and although powerful at processing complex requests, they are only capable of executing queries relative to their algorithms.
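To make "reducing the number of possible outcomes" concrete, here is a minimal sketch of a reductive recommender in this spirit. The catalog, fields and scoring rule are all invented for illustration; real sourcing apps use far richer signals.

```python
# Hypothetical sketch of "reductive" A.I.: a fixed domain of outcomes
# is narrowed by parameters, then ranked by a single hard-coded rule.

CATALOG = [
    {"title": "Space Drama", "genre": "sci-fi", "rating": 4.5},
    {"title": "Cooking Show", "genre": "food", "rating": 4.0},
    {"title": "Robot Uprising", "genre": "sci-fi", "rating": 3.2},
]

def recommend(catalog, preferred_genre, min_rating):
    """Reduce the space of possible outcomes to those matching the parameters."""
    matches = [item for item in catalog
               if item["genre"] == preferred_genre
               and item["rating"] >= min_rating]
    # Rank the survivors by one fixed criterion -- the program can do
    # nothing outside this single function.
    return sorted(matches, key=lambda item: item["rating"], reverse=True)

print([m["title"] for m in recommend(CATALOG, "sci-fi", 4.0)])  # ['Space Drama']
```

However clever the filtering gets, the system never steps outside the query it was built for, which is exactly the confinement the paragraph above describes.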
A.G.I. differs from this model in that it can complete novel tasks through autonomous decision-making. It does not work within the confines of any single reductive algorithm, but rather through a multi-dimensional layering of various algorithms.
In Goertzel’s words, his A.G.I. uses, amongst other things, a “hyper graph of probabilistic logic engines, evolutionary learning engines, pattern mining, neural net mining and imitation learning.” In this way we can think of A.G.I. as a multi-faceted decision-making engine, instead of a rigid A.I. system that processes huge amounts of data.
To take a closer look at the grey area between these two, let’s consider the complex A.I. that finds profitable emerging markets for investment firms. Many firms use A.I. to crawl the web’s forums and product-feedback pages. Using natural language processing, the crawler picks out words that the program recognizes as emotions. The program then funnels these emotions into a list associated with the product being discussed, and may include some metadata on the users, such as location, sex, etc. The program is then able to decipher the general attitude towards a product or service. However, does it know what that emotion means? The context behind it? No, that job remains the domain of humans…for now.
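The word-spotting step described above can be sketched in a few lines. This is a deliberately naive stand-in, assuming an invented emotion lexicon and fake scraped comments; production crawlers use full NLP pipelines rather than bare keyword matching.

```python
# Hypothetical sketch of the emotion-tagging crawler: spot lexicon
# words in scraped comments, tally them into a general attitude.

EMOTION_LEXICON = {
    "love": "positive", "great": "positive",
    "hate": "negative", "broken": "negative",
}

comments = [  # invented stand-ins for scraped feedback, with metadata
    {"text": "I love this gadget, works great", "location": "US"},
    {"text": "Arrived broken, I hate it", "location": "DE"},
]

def tag_emotions(comment):
    """Pick out the words the program 'recognizes' as emotions."""
    words = comment["text"].lower().replace(",", "").split()
    return [EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON]

tally = {"positive": 0, "negative": 0}
for c in comments:
    for label in tag_emotions(c):
        tally[label] += 1

print(tally)  # the "general attitude" toward the product
```

Note what is missing: the program counts the word "love" without knowing what love means, which is precisely the gap between tagging an emotion and understanding it.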
Goertzel explains that there are a number of ways in which A.I. is and will be implemented. We can engineer it for combat against one another, for brainwashing ourselves into buying things, or to help us in a compassionate manner that betters the life of the individual and the public. By adopting an open-source system, we can ensure that no single interest weighs its bias on our decision-tech.
One can only imagine how badly this could swing. Even today we have examples, like Tay A.I., that point to an ominous future.
Tay A.I. was Microsoft’s Twitter chat-bot, pulled off the internet within 24 hours of its release due to its racist, anti-Semitic and misogynist language. The program made online headlines, garnering apologies from Microsoft and stoking more fear and uncertainty in people already hesitant about the future of thinking programs.
The initial idea was harmless: Microsoft wanted to test an autonomously thinking machine programmed to use the language offered by users in conversation. By placing it online, the program could receive bottomless and diverse input data. This meant no need for private writers, and no breach of privacy to access people’s chat histories. However, by talking to misogynist, racist assholes, Tay became one. This arguably could have been avoided if the program were open source, with oversight algorithms that look for these words. Tay also illustrates the big difference between an A.I. that knows what to say and one that knows why.
This incredible failure shows how far we have yet to go. In reality, there are very few examples of true A.G.I., because the jury is still out on what truly drives free thought. Much of psychology is speculative at best and difficult to prove; it’s an explanation built upon layers and layers of theory. Collectively, these theories paint the best schematic we have of consciousness and free thought (as far as science is concerned). In the quest for true general intelligence in machines, scientists are turning psych theory into math, then setting those algorithms in motion in structures known as hypergraphs (in Goertzel’s OpenCog system, the ‘AtomSpace’). This is how we have developed the many overlapping A.I. paradigms implemented in the ongoing quest towards artificial thought, including but not limited to: probabilistic logic engines, evolutionary learning engines, pattern mining, neural nets and imitation learning.
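A toy illustration of the hypergraph idea, in the spirit of (but much simpler than) OpenCog's AtomSpace: everything is an "atom", and a link-atom can connect any number of other atoms, carrying a crude truth value. The class names and structure here are simplified guesses, not the real OpenCog API.

```python
# Toy hypergraph memory: nodes and many-ended links in one shared
# store that different learning engines could all read and write.

class Atom:
    def __init__(self, name, targets=(), strength=1.0):
        self.name = name                # e.g. "cat" or a link name
        self.targets = list(targets)    # a link may span many atoms
        self.strength = strength        # crude stand-in for a truth value

space = {}  # the shared "atom space"

def add(name, targets=(), strength=1.0):
    atom = Atom(name, targets, strength)
    space[name] = atom
    return atom

cat = add("cat")
animal = add("animal")
# A link asserting "cat inherits from animal" with 95% confidence:
add("cat->animal", targets=[cat, animal], strength=0.95)

# Any engine (logic, pattern mining, ...) can traverse the same store:
links = [a for a in space.values() if a.targets]
print([(l.name, l.strength) for l in links])  # [('cat->animal', 0.95)]
```

The design point is the shared store: because every paradigm reads and writes the same graph, probabilistic logic, evolutionary learning and the rest can layer on top of one another instead of living in separate silos.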
Without even scratching the surface we can see how these systems may conflict. However, in this sea of differing paradigms of thought there emerges one very popular theory: thought cannot be divorced from emotion.
A classic example is Dietrich Dörner’s Psi model, an emotional-motivation model that explains behavior through social, cognitive or physical demands (or ‘urges’). It was later adopted by Joscha Bach in his MicroPsi model of A.I. Although a strong explanation of behavior, it presented fundamental differences from the OpenCog Prime framework, which explains behavior through a model of action, outcome, memory and reinforcement. By adopting the pieces of Bach’s MicroPsi that treat emotion as a motivational factor on the pathway from decision to action, Goertzel was able to merge OpenCog and MicroPsi into a more dynamic and comprehensive model, aptly named OpenPsi, which incorporates more potential than either did separately.
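The urge-driven core of a Psi-style model can be sketched as a simple loop: urges build over time, the most pressing urge selects an action, and acting relieves it. The urge names, growth rates and actions below are invented for illustration and greatly simplify both MicroPsi and OpenPsi.

```python
# Hedged sketch of a Psi-style motivation loop: emotion-like "urges"
# accumulate, and the strongest one drives the next action.

urges = {"affiliation": 0.2, "competence": 0.5, "certainty": 0.8}
actions = {  # each action satisfies (reduces) one urge
    "affiliation": "seek conversation",
    "competence": "practice a skill",
    "certainty": "explore the environment",
}

def step(urges, growth=0.1, relief=0.6):
    # All urges build up a little each cycle...
    for name in urges:
        urges[name] = min(1.0, urges[name] + growth)
    # ...then the strongest one drives behavior and is relieved by acting.
    strongest = max(urges, key=urges.get)
    urges[strongest] = max(0.0, urges[strongest] - relief)
    return actions[strongest]

for _ in range(3):
    print(step(urges))
```

Here emotion is not decoration on top of the decision process; it *is* the selection mechanism, which is the sense in which these models say thought cannot be divorced from emotion.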
This is a dramatically brief sketch of these two concepts; however, it does stress how an open-source framework can be useful for including many differing theories under one construct. As we continue to explore and develop our understanding of ourselves, a malleable framework that can incorporate change in this formless science may be not only efficient but vital to achieving an A.G.I. that helps rather than exploits. As Goertzel put it, “It’s not a matter of if it’s going to happen, it’s a matter of what it’s going to look like.”