Research Paper
Self-Sentience in Robots
The Self-Sentient AI
Introduction
Artificial intelligence, or AI, is one of the most prominent and fastest-growing technologies in our world today. In the previous century, most people believed that AI in the future would be self-sentient, that is, able to tell the difference between themselves and others. However, they also believed that such AI would not appear until thousands of years in the future. Today, there are AI that are a part of our lives, such as Siri. Unlike the predictions of the past, these modern AI do not have a sense of self. Some scientists today still believe that self-sentient robots cannot appear until the much more distant future. However, in 2015, researchers at Rensselaer Polytechnic Institute were able to create an AI that passed a variation of the wise men test, a classic self-awareness test. The creation of this AI has not only opened up the possibility of a self-sentient AI in the near future, but it also raises questions such as “can artificial intelligence actually gain consciousness?” and “what will their roles be in the future?”
Context
Cultural History of AI
The concept of artificial robotic intelligence has been around since the time of Greek civilization, when mechanical golden men appeared in mythology. Created by Hephaestus and Daedalus, these men, called Automatones, were living metal statues. Some of these Automatones could feel and think like man; they “had the power to construct statues endowed with motion and to compel gold to feel human sensations” (Theoi Project). Since then, the theme of artificial intelligence has filtered into areas of both pop culture and philosophy. In 1863, mechanics and evolution, two rising fields, were combined to create a new theory regarding the future of machines.
“We have used the words “mechanical life,” “the mechanical kingdom,” “the mechanical world” and so forth, and we have done so advisedly, for […] in these last few ages an entirely new kingdom has sprung up, of which we as yet have only seen what will one day be considered the antediluvian prototypes of the race.” (Cellarius, 1863)
In the past, the rise of machinery was seen as the possible start of a new kind of life on earth, foreseeing the “evolution” of technology.
In 1920, a play, Rossum’s Universal Robots, was written about the founding of a fictional artificial robotic intelligence company. The play’s storyline follows the choices made by the inventor as to which elements of man should be reproduced in these robots. Initially, the inventor wished to replicate man exactly. But when his nephew takes over the design, he simplifies the robots that house the AI to the bare necessities, simplifying the organs and stripping away the emotions and opinions. The play also discusses how difficult it was to tell these AIs from people, and how this difficulty in differentiation created issues regarding their treatment. One question that is asked is, “Is it right to treat an AI as an object if it is nearly impossible to tell whether it is human or not?” A group called the “Humanity League” is formed to free the robots from the ownership of man (Capek, 1920). The ideas in this play were the start of thinking about many of the ethical issues that artificial intelligence as a concept faces.
The Turing Test, proposed by Alan Turing in 1950, was one of the first tests of artificial intelligence. In the test, a human judge converses with both a computer and another human; if the judge is consistently unable to determine which is which, the computer passes. The reasoning is that a computer able to pass the test behaves so much like a human that it can be considered intelligent. The first attempts at passing the Turing test came in the 1960s and 1970s, mostly through the use of natural language processing. For example, the 1966 ELIZA program used keyword matching to select suitable questions from a database; if none were found, it would ask a generic question (University of Toronto, 2017). A major criticism of the Turing test is that it encourages the use of tricks to convince the judge that the computer is human, instead of creating a true AI. For example, in 2014, the Turing test was supposedly passed by a program called Eugene. “Eugene fooled people 33 per cent of the time, which does pass Turing’s test, even if the vast majority of judges were able to see the truth” (Sparkes, 2014). There was conflict over the pass, as Eugene ‘is’ presented as a 13-year-old whose first language is not English, which gives it an advantage.
"Transcript of Eugene talking to a human judge:
Judge: Hi, I'm Cathy.
Entity: Cathy! Nice to meet you! You're a woman, if I'm not mistaken.
Judge: Right. What is your name?
Entity: People call me Eugene. Or "Zhenya". It is a short name for Eugene in Russia and Ukraine.
Judge: NIce name. Are you from either area?
Entity: Agree :-) Oh, what a fruitful conversation;-)
Judge: Do you really think so?
Entity: No, your guess was wrong. I don't think so. By the way, I still don't know your specialty - or, possibly, I've missed it?
Judge: What speciality? What's yours?
Entity: Huh? And I forgot to ask you where you are from…” (Sparkes, 2014)
The use of non-fluency in this dialog makes the test far easier to pass, as age and the language barrier excuse many of the errors made (Sparkes, 2014). Eugene does not necessarily pass the Turing test, because many people felt that the program relied on tricks rather than being true artificial intelligence.
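To make the keyword-matching approach used by programs like ELIZA concrete, here is a minimal sketch in Python. The keywords and canned responses below are invented for illustration; they are not ELIZA’s actual 1966 script.

```python
# Minimal sketch of ELIZA-style keyword matching.
# The rules here are placeholders, not the real ELIZA script.

RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you think you feel sad?",
    "computer": "Do computers worry you?",
}
FALLBACK = "Please, go on."  # generic question when no keyword matches

def respond(user_input: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    lowered = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    return FALLBACK

if __name__ == "__main__":
    print(respond("My mother always criticized me"))  # matched keyword
    print(respond("Nice weather today"))              # falls back to a generic question
```

A program like this can sustain a surface-level conversation without modeling meaning at all, which is exactly the criticism leveled at trick-based Turing test entries.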
One theory that brings up issues with the concept of AI is the Chinese Room thought experiment, developed in 1980. The experiment concludes that if you give a computer enough information, it can appear to be an intelligent AI without actually being one. In the Chinese Room scenario, a person who has no understanding of Chinese is put into a room with instructions on what responses to give when they receive certain combinations of Chinese characters. When the responses are passed out of the room, it appears that one is holding a conversation in Chinese with the person inside. However, a counterargument is that even if the person inside does not understand the input or the output, the ‘machine’ as a whole does understand (Hauser, 2017).
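The Chinese Room can be pictured as a pure lookup procedure: the occupant matches incoming symbols against a rulebook and returns the prescribed output without understanding either. A minimal sketch, with made-up rulebook entries standing in for the instructions:

```python
# The "rulebook": maps incoming character strings to prescribed replies.
# The entries are placeholders; the point is that the occupant only
# matches symbol shapes and never understands their meaning.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",
    "你叫什么名字?": "我叫小明.",
}

def chinese_room(incoming: str) -> str:
    """Follow the rulebook mechanically; no understanding is involved."""
    return RULEBOOK.get(incoming, "请再说一遍.")  # default: "please say that again"

print(chinese_room("你好吗?"))  # from outside the room, this looks like a fluent reply
```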
These major theories and experiments, developed before true AI existed, are how scientists can determine whether an AI has truly been created rather than a mimicry of intelligence. In recent decades, scientists have built on these experiments while developing major theories of how an AI could be self-sentient. To develop theories regarding self-sentient robots, scientists first had to base their work on a definition of self-sentience.
What is Self-Sentience
The dictionary definition of sentience is “feeling or sensation as distinguished from perception and thought” (Merriam-Webster, 2017). The definition of self-sentience expands on this: it means that one has the ability to recognize and think of one’s self, especially the ability to differentiate one’s self from another being.
Self-Sentience Tests
In 1970, Gordon Gallup designed a test of animal cognition and sentience, called the Mirror Test, based on an animal’s ability to recognize its reflection as itself. In this test, an animal with a mark placed on it, such as dye or a sticker, is put in front of a mirror. The researchers then observe the animal to see whether it interacts with its reflection as if it were another animal or recognizes that it is seeing itself (Muth, 2011). In theory, an animal that can pass this test is considered self-aware and can understand the difference between ‘self’ and ‘other’. Animals that have passed the mirror test include Asian elephants, some great apes, bottlenose dolphins, orca whales, Eurasian magpies, and ants (Pachniewska, 2016).
However, there are some major flaws with the Mirror Test. New research suggests that an understanding of mirrors is a taught skill rather than an innate behavior. For example, some children cannot pass the mirror test before the age of two, and in some cases not until six years of age (Koerth-Baker, 2010). This difference may not be due to a lack of self-awareness, but to a difference in culture. “The difference is not about when the children develop self-awareness or empathy, Mitchell [the foundation professor of psychology at Eastern Kentucky University] says. Rather, it has to do with their social conditioning” (Koerth-Baker, 2010). It has been found that children recognize themselves in the mirror at different points in their lives depending on the area of the world and the culture they were raised in.
“It’s not that the mark test doesn’t tell us anything useful, it’s that it’s just one piece of the puzzle. It’s a starting point. Self-awareness doesn’t boil down to a yes or no question, it’s more of a continuum. We can’t know unequivocally whether another being is self-aware or not.” (Koerth-Baker, 2010)
These issues are why there needs to be a new test to determine whether an artificial intelligence can understand the difference between ‘self’ and ‘other’. For decades, scientists have been using the concept of self-sentience to help them develop their theories regarding self-sentient artificial intelligence.
Background on the self-sentient AI
In the past, there have been several theories regarding self-sentient artificial intelligence. Theories regarding the social interactions between self-sentient AI and humans have been based on the Uncanny Valley theory. In 1970, Masahiro Mori, a robotics professor at the Tokyo Institute of Technology, proposed the Uncanny Valley theory, which states that the more human-like robots appear, the more likely they are to unsettle real humans, as there is a shift from empathy to revulsion (Mori, 1970). While Mori’s theory applied only to the appearance of robots at the time, it could potentially apply to self-sentient artificial intelligence as well, since most artificial intelligence was expected to take the form of robots. As artificial intelligence advances and starts to develop human-like features, how will humans react to AI that think like humans?
In 1993, David J. Chalmers, a philosophy and cognitive science professor at Australian National University and New York University, made a proposal regarding how physical systems and computations such as AI could be considered conscious. Chalmers proposed that in order to have a conscious mind, computers need the ability to perform causal organization. Causal organization is used to differentiate between psychological and phenomenal experiences: psychological experiences are connected to causal roles, such as belief and learning, while phenomenal experiences are the felt experiences that result from those causal roles. One example of causal organization is when a computer is able to determine the difference between its voice and another computer’s voice. Therefore, if a computer can perform causal organization, then it is capable of consciousness (Chalmers, 2011).
Since Chalmers’s work was published, other researchers have created theories on whether self-sentient artificial intelligence is a possibility. Giorgio Buttazzo, a computer engineering professor at the Scuola Superiore Sant'Anna of Pisa, proposed that it is possible for a sequential machine, such as an artificial intelligence, to develop consciousness. Consciousness, to Buttazzo, is a property of a highly organized system that can process information, a description that applies to computers as well. Therefore, it is possible for modern computers to develop consciousness, as today’s networks are more flexible than the old hardwired networks (Buttazzo, 2001). Another scientist, Igor Aleksander, from the department of electrical and electronic engineering at Imperial College London, developed his own theory of what is needed for an artificial intelligence to be considered conscious. In his theory, a computer needs to satisfy 12 principles, one of which is self-awareness. According to Aleksander, “Awareness of self is the ability to distinguish between changes in world states that are caused by the organism's own actions and those that occur in a way that is not controlled by the organism” (Aleksander, 1994). In his view, the consciousness of an artificial intelligence needs to be put in intentionally by scientists; the robot cannot develop consciousness by itself (Aleksander, 1994).
In 2005, three computer science researchers from Meiji University, Junichi Takeno, Keita Inaba, and Tohru Suzuki, made one of the first attempts to create a self-sentient artificial intelligence. They created a set of robots and put them through a series of four tests: one where the robot would imitate its mirror image, one where it would imitate the behavior of another robot, one where it would imitate the commands of another robot, and one where it would imitate the behavior of different automatic robots. The goal of their experiments was to test whether the robot could demonstrate mirror image cognition. In their study, they found that the robot was able to move along with its own judgement when it was imitating. They were able to determine this because the coincidence rate when the robot imitated its own reflection was about 10% higher than when it imitated another robot. Because the robot could follow along based on its own judgement, they determined that it was capable of mirror image cognition, as this suggested the robot had a sense of self (Takeno, 2005). Even though this robot was able to demonstrate mirror image cognition, it was not truly considered self-sentient, as mirror image cognition could be considered a taught skill rather than a sign of sentience.
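A rough sketch of the kind of comparison behind such a result: compute how often the robot’s own motion coincides with the motion it observes, then compare the rate for its mirror image against the rate for another robot. The data layout and numbers below are invented for illustration and are not the study’s actual measurements.

```python
def coincidence_rate(own_moves, observed_moves):
    """Fraction of time steps where the robot's motion matches what it observes."""
    matches = sum(1 for a, b in zip(own_moves, observed_moves) if a == b)
    return matches / len(own_moves)

# Invented example data: 1 = move forward, 0 = stop, sampled once per time step.
own    = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
mirror = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # the reflection tracks the robot closely
other  = [1, 1, 0, 1, 0, 0, 0, 0, 1, 1]  # another robot matches less often

print(f"mirror: {coincidence_rate(own, mirror):.0%}, "
      f"other robot: {coincidence_rate(own, other):.0%}")
# A consistently higher rate in the mirror condition is the kind of gap
# Takeno's team interpreted as evidence of mirror image cognition.
```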
The Self-Sentient AI
Current standing of self-sentient AI
The first self-sentient AI was created in July 2015 by scientists at the Rensselaer AI and Reasoning Lab in New York (Pandey, 2015). The lab is part of the Department of Cognitive Science at Rensselaer Polytechnic Institute in Troy, New York (Bringsjord, 2015). Selmer Bringsjord, the chair of the Department of Cognitive Science, was the main scientist in charge of the development and testing of the AI (MacDonald, 2015). The other scientists involved in developing the AI were John Licato, Naveen Sundar Govindarajulu, Rikhiya Ghosh, and Atriya Sen (Bringsjord, 2015). The goal of Bringsjord and his team of researchers in creating the self-sentient AI was to determine whether they could turn the theories developed by past scientists into reality.
Before this experiment, Bringsjord and Govindarajulu had managed to develop an AI that was able to pass the mirror test. In this experiment, Bringsjord and his team wanted to see whether an AI was capable of further self-awareness. To prove this, the robot needed to pass a harder version of the three wise men puzzle. Bringsjord and his team based the test used in their experiments on Luciano Floridi’s tests in PAGI World, a type of virtual simulation. In Floridi’s tests, which were based on the three wise men puzzle, there are two “dumbing pills”, represented by red pills, and three placebo pills, represented by blue pills. The human controller distributes the pills randomly. If given the placebo, Floridi’s AI had to be able to say “I don’t know”, “hear” that it said “I don’t know”, and then come up with a response based on the fact that it “heard” itself. By being able to do so, Floridi’s AI would be considered to exhibit traits of self-consciousness (Bringsjord, 2015).
In the AI and Reasoning Lab’s experiments, three AIs, in the form of robots, were given “pills” in the form of taps on sensors on their heads. Two of the robots were given “dumbing pills”, which would prevent them from speaking. The other robot was given a placebo (MacDonald, 2015). When the robot that was given the placebo was asked which pill it had received, it would at first say “I don’t know!” Because those that had actually been given the dumbing pill would be unable to speak, and the robot could hear its own voice, it would then say, “Sorry, I know now! I was able to prove that I was not given the dumbing pill!” (Bringsjord, 2015). The robot’s recognition that it was not given the dumbing pill is significant, as the three wise men test is one of the hardest tests for robots to pass. It requires the robot to be aware of the question, to distinguish its own voice from the other robots’ voices, and to link all of these facts together, that is, to be self-aware. The experiments were done on three Nao robots, a model created by the French robotics company Aldebaran (Pandey, 2015).
This creation of a self-aware AI is leading to further research on the concept of a self-conscious artificial intelligence. Right now, Bringsjord does not believe that AI can actually be conscious, as true consciousness requires phenomenal consciousness (Bringsjord, 2015). Phenomenal consciousness is defined as experiencing the world, which is different from self-consciousness, which is being aware of the world (Block, 1995). The robot in Bringsjord’s experiments does show signs of consciousness, but they are still very rudimentary. However, Bringsjord is still interested in further developing artificial consciousness within robots (Bringsjord, 2015). Bringsjord’s work in creating these robots has opened up the potential for further robots to be programmed with self-consciousness, and has raised the question, “Can AI develop self-consciousness by themselves?”
How the self-sentient AI works
The logic model used by the AI developed by Bringsjord is provided through a framework called the Deontic Cognitive Event Calculus (DCEC). DCEC is a collection of operators that allow the AI to have knowledge, intentions, and beliefs (Bringsjord, 2015). It is based on natural deduction. As Bringsjord mentions in the report, DCEC “is the only family of logics in which desiderata regarding the personal pronoun ‘I’ laid down by deep theories of self-consciousness…are provable theorems” (2015). An AI using DCEC can have beliefs about itself without having a physical body.
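As a rough illustration of what “a collection of operators” means in practice, the sketch below represents DCEC-style knowledge statements as data: an agent, a time, and a proposition wrapped in a K (knows) operator. This is a simplified stand-in for exposition only, not the actual DCEC implementation used by Bringsjord’s team.

```python
from dataclasses import dataclass

# Simplified stand-in for DCEC-style modal operators; the real calculus
# also includes operators such as belief (B), intention (I), and obligation (O).

@dataclass(frozen=True)
class Prop:
    """An atomic proposition, e.g. 'R3 ingested the dumbing pill at t2'."""
    text: str

@dataclass(frozen=True)
class Knows:
    """K(agent, time, proposition): the agent knows the proposition at that time."""
    agent: str
    time: str
    prop: Prop

# Example: R3 knows at t4 that it said "I don't know" at t4.
fact = Knows(agent="R3", time="t4", prop=Prop("R3 said 'I don't know' at t4"))
print(fact)
```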
The wise-men test conducted by Floridi was first performed in a simulator known as PAGI World. The AI’s controller can send information into the system through TCP/IP and receive back data. For example, the controller can add a downward force to the AI’s hands and receive back the temperature of an object the AI touches (Bringsjord, 2015). It’s also possible for a human to type text that will be spoken out loud to the AI in PAGI World, and the AI can ‘speak’ back through a text display. In this test, the controller will use PAGI World to administer the dumbing and placebo pills and ask the AI a question.
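A bare-bones sketch of what a TCP/IP controller loop for a simulator like PAGI World could look like is shown below. The host, port, and command strings are placeholders; PAGI World’s actual protocol and command vocabulary are not reproduced here.

```python
import socket

# Placeholder connection details; this only sketches the send/receive loop,
# not PAGI World's real host, port, or message format.
HOST, PORT = "127.0.0.1", 42209

def send_command(sock: socket.socket, command: str) -> str:
    """Send one text command to the simulator and return its text reply."""
    sock.sendall((command + "\n").encode("utf-8"))
    return sock.recv(4096).decode("utf-8").strip()

with socket.create_connection((HOST, PORT)) as sock:
    # Hypothetical commands: apply a downward force to a hand, then read
    # the temperature of whatever the hand is touching.
    print(send_command(sock, "addForce,LHand,0,-500"))
    print(send_command(sock, "sensorRequest,LHandTemp"))
```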
The experiment occurs between three AI and is divided into different moments in time. At t1, the first step in the test (as denoted by the ‘1’), the AI are given information about their task in DCEC. At time t2, each AI receives a random pill. Although they are aware that they will be given a pill, none of the AI yet know which pill they received. At t3, a question is posed through DCEC: “K(R3,t4, not(happens (action(R3,ingestDumbPill),t2)))?” (Bringsjord, 2015). This question, denoted as φ, is the controller’s way of asking the AI to prove which pill they have taken. At t4, the AI are instructed to give their answers. R3, the AI that ingested the placebo, is the only one that speaks, answering “I don’t know”. At this point R3 gains another piece of information: “K(I, t4, happens(action(I∗, S(I∗, t4, “I don’t know”)), t4))”. This can be translated as R3’s knowledge that it spoke up at t4 and said “I don’t know” (Bringsjord, 2015). Finally, at t5, when the AI are asked to respond again, R3 has enough information to prove that it was the one that took the placebo.
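The chain of reasoning that lets R3 identify its own pill can be mimicked with ordinary propositional logic. The sketch below is a plain-Python paraphrase of that inference, not the DCEC proof itself: it encodes the premise that the dumbing pill prevents speech, adds the fact that R3 heard itself speak at t4, and concludes that R3 took the placebo.

```python
# Plain-Python paraphrase of R3's reasoning; the actual proof is carried
# out in DCEC, not in code like this.

def deduce_own_pill(heard_self_speak: bool) -> str:
    # Premise 1: taking the dumbing pill makes a robot unable to speak.
    # Premise 2: each robot took either the dumbing pill or the placebo.
    if heard_self_speak:
        # Modus tollens: I spoke, so I cannot have taken the dumbing pill;
        # since there are only two options, I must have taken the placebo.
        return "placebo"
    # Without hearing itself speak, the robot cannot settle the question.
    return "unknown"

# At t4, R3 says "I don't know" and hears its own voice saying it.
print(deduce_own_pill(heard_self_speak=True))   # -> "placebo"
print(deduce_own_pill(heard_self_speak=False))  # -> "unknown"
```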
R3, otherwise known as PAGI Guy, offers a proof in DCEC that it took the placebo. The first line indicates that if the robot took the dumbing pill, it cannot speak. The second indicates that PAGI Guy was given either the dumbing pill or the placebo at t2. The third simply notes that the times progress in numbered order: t1, then t2, and so on. The next two can be simplified to basic logic statements based on the rule “if p, then q”: PAGI Guy notes that if p is true, then q must be true (modus ponens), and if q is not true, then p must not be true (modus tollens). The same style of thinking exhibited in PAGI Guy is also shown in the physical robots that completed this same puzzle. The creation of a self-sentient robot by Bringsjord and his team could potentially lead to ethical issues or controversies in the future of AI.
The Ethics of Artificial Intelligence
As more machines and robots become capable of passing tests for self-sentience, controversies are beginning to arise regarding the legal status of self-sentient AI. If robots are able to think and make choices of their own will, should they be considered entities with personhood? Should self-sentient AI be given rights if they can think on their own? Though robots and intelligent machines do not yet possess consciousness the way humans do, their designers and scientists are still capable of building them through “mathematical structures of logic and decision-making” that can closely resemble the self-awareness found in humans (Pearson, 2015). With more complex algorithms that mimic human consciousness, there have also been controversies regarding robots that may “learn and adapt to their environment in unpredictable ways,” which can make it difficult to put the blame on developers and scientists (Bowyer, 2017).
Recently, the European Parliament has been drafting a set of regulations regarding the development and use of autonomous AI. Due to growing complexities and advancements in AI, the parliament has introduced a new concept called “electronic personhood” in order to give certain rights to the “most capable AI” (Hern, 2017). While this concept attempts to regulate the ethical development and use of AI by granting machines “electronic personhood” status, it also raises concern about how corporations could take advantage of it to further their commercial interests (Griseri, 2017). The status gives self-sentient AI working in corporations the rights of a human being, such as freedom of speech, while maintaining their state as a non-human entity. This grey area may incentivize creators to breach contracts with a non-human entity (Griseri, 2017).
As of now, if harm is caused by self-sentient or autonomous AI, we are still unsure about who may be liable for the negative effects. Currently, the European Parliament holds manufacturers responsible when there are “foreseeable damages” triggered by defects. In such cases it is clear that manufacturers are liable, but when self-sentient AI is involved, it becomes hard to determine who holds accountability. There has yet to be a situation where a self-sentient AI unexpectedly causes serious damage; for now, self-sentient AIs are still being thoroughly researched and developed. The European Parliament will continue to update regulations as the technology advances and becomes more complex (Hern, 2017). As self-sentient AI continues to advance, and as regulations regarding it start to develop, many people are left to wonder, “what will self-sentient AI look like in the future?”
The Future
Implications of sentient AI
Because developers of artificial intelligence today still have power over what a robot can learn, they are ultimately responsible for making sure that the robot carries out its tasks in an ethical manner. One example is Microsoft’s experimental Twitter bot, Tay, which developed a way of talking based on its interactions with Twitter users. Within 24 hours, Tay had become a racist and sexist bot, tweeting jokes about the Holocaust and creating other offensive tweets (Vincent, 2016). From this example, we must really evaluate who poses the real threat in our society: AI or humans themselves. Tay reflects how humans behave towards one another on Twitter. Even though Tay is not technically self-aware, it has the ability to pick up the way humans talk to one another online and can contextually respond to a message sent to it. This example implies that artificial intelligence in general must be taught how to use data without incorporating “the worst traits of humanity” (Vincent, 2016). On the other side, artificial intelligence with the ability to mimic human-like emotions can benefit humans by giving them unbiased and objective insight into our own actions and behaviors. Perhaps, instead of fearing technology that can surpass our intelligence, people should be aware that developers still have ample time to design artificial intelligence that can learn the difference between good and bad.
With huge factors of uncertainty comes the responsibility of keeping track of AI legislation. As AI becomes more advanced and complex, legislation must be constantly updated to keep pace with AI’s growing “behavioral sophistication” (Bowyer, 2017). As mentioned before, some AI can behave unpredictably when exposed to certain environments. That said, there is still a lack of evidence of a self-sentient AI causing harm or good when placed in a public setting. As public experimentation and social interactions with AI become more common, there will be more experimental evidence with which to keep revising the current legal standing of these machines.
Conclusion
Artificial intelligence is a fast-growing interest in the world, and the field combines computer science and psychology. For years, scientists from across the world have been creating theories and trying to create actual intelligence, not an imitation of intelligence. One of the aspects of intelligence that scientists have tried to reproduce is self-consciousness. Even though scientists have been working on the idea of a self-conscious robot for years, the first robot to show signs of self-consciousness was not created until 2015. The development of this self-sentient robot is not only an example of how fast artificial intelligence is growing, but it has also opened up different discussions regarding the future of artificial intelligence. Self-sentient artificial intelligence is starting to become a reality. As it grows, people are left to wonder what role it will play in our society. The future of the relationship between artificial intelligence and humans is uncertain; only time can tell what the role of artificial intelligence will be as it continues to grow.