
Self-Sentient Artificial Intelligence

Our team investigated the topic of self-aware robots and artificial intelligence. Self-aware robots have been one of the most controversial research topics among scientists. Since the 1950s, robots have been portrayed as futuristic machines that can perform complex tasks. While robots have generally been considered intelligent and valuable machines, many movies have depicted the detrimental impacts of self-aware robots (Buttazzo, 2001). This has led to controversies about the current development of artificial intelligence and machine learning. Recently, scientists created a robot that passed a classic self-awareness test (MacDonald, 2015). This discovery is considered a victory for scientists studying artificial intelligence, as the self-awareness test is considered one of the hardest tests for an artificial intelligence to pass. With these discoveries, scientists have begun to question what the role of artificial intelligence will be, and whether its abilities will benefit or harm our society.

The Test is Passed

In these tests, there are two “dumbing pills” and a placebo, and a human controller distributes the pills randomly. If given the placebo, a robot had to be able to say “I don’t know,” “hear” that it had said “I don’t know,” and then come up with a new response based on the fact that it had “heard” itself. By being able to do so, the robots could be considered to exhibit traits of self-consciousness (Bringsjord, 2015).


The robot was first tested in a simulated environment called PAGI World. In PAGI World, the user can communicate with the AI through text, or interact with it by dragging items, or the AI itself, around the world. The AI runs on a logic framework called the Deontic Cognitive Event Calculus (DCEC), which allows the robot to reason about knowledge, intentions, and beliefs, and to construct proofs through natural deduction. PAGI Guy, the AI that was tested, was given the placebo pill and, when asked, provided a proof of this: it first answered “I don’t know,” understood that it had spoken, and then made the connection that it could not have taken a dumbing pill.
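To make this reasoning concrete, the sketch below walks through the wise-man-style inference in Python. It is not the DCEC prover or the PAGI World code used by the Rensselaer team; the Robot class, the pill labels, and the hears() self-monitoring check are simplified, hypothetical stand-ins meant only to illustrate the chain “I spoke, therefore I am not muted, therefore I received the placebo.”

import random

PILLS = ["dumbing", "dumbing", "placebo"]

class Robot:
    def __init__(self, name):
        self.name = name
        self.muted = False

    def take_pill(self, pill):
        # A "dumbing" pill silences the robot; the placebo does nothing.
        self.muted = (pill == "dumbing")

    def answer(self, question):
        # A muted robot cannot respond at all.
        if self.muted:
            return None
        # The robot cannot directly observe which pill it was given,
        # so its first honest answer is "I don't know".
        first_reply = "I don't know"
        # Self-monitoring step: hearing its own utterance is new evidence,
        # because only a robot that received the placebo could have spoken.
        if self.hears(first_reply):
            return "Sorry, I know now! I was not given a dumbing pill."
        return first_reply

    def hears(self, utterance):
        # Stand-in for speech self-recognition: the robot registers that
        # the utterance it just produced came from itself.
        return utterance is not None


if __name__ == "__main__":
    robots = [Robot("R1"), Robot("R2"), Robot("R3")]
    pills = PILLS[:]
    random.shuffle(pills)
    for robot, pill in zip(robots, pills):
        robot.take_pill(pill)

    for robot in robots:
        reply = robot.answer("Which pill did you receive?")
        if reply is not None:
            print(f"{robot.name}: {reply}")

In this toy version the muted robots stay silent, while the placebo robot notices its own utterance and upgrades its answer from “I don’t know” to a positive claim, mirroring the behavior described above.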


Self-Sentient AI

From Theory to Reality

Pre-AI Theory

Theories created by scientists regarding the creation of self-sentient AI have been around for about 20 years. However, theories regarding AI-human interactions have been around for almost 50 years. One example is the uncanny valley, developed by Masahiro Mori in 1970, which hypothesized that there is a point at which a robot that looks and acts almost, but not quite, human-like will unsettle a human rather than reassure them. Some of the scientists who have discussed the potential of self-sentient AI are David J. Chalmers, Giorgio Buttazzo, and Igor Aleksander. In 2005, three computer science professors from Meiji University, Junichi Takeno, Keita Inaba, and Tohru Suzuki, developed a robot that was able to pass the classic mirror test.

Current AI

In July 2015, a robot from the Rensselaer AI and Reasoning Lab in New York, a lab based in Rensselaer Polytechnic Institute's Department of Cognitive Science, passed a self-awareness test. The development and research of these robots were done by a team of scientists composed of John Licato, Naveen Sundar Govindarajulu, Rikhiya Ghosh, and Atriya Sen, all led by Selmer Bringsjord. In their tests, they gave three NAO robots pills, one of which was a placebo; the other two would render the robots silent. When the robots were asked which pill they had been given, one of them said, "I don't know!" It then said, "Sorry, I know now! I was able to prove that I was not given the dumbing pill!" The robot was able to make this statement because it was self-aware. In order to be self-aware, the robot needs to be aware of the question being asked, have the ability to distinguish its own voice from those of the other robots, and have the ability to link all of these parts together. The creation of these robots has led to questions and ideas regarding the further development of self-sentient AI.

The Future of AI

We must understand that unpredictable consequences may occur as robots become more autonomous. For now, it is hard to determine whether the robot or the creator is liable when the robot behaves and learns unpredictably once it is exposed to a certain environment. Given this unpredictability, a carefully drafted code of conduct is absolutely necessary to ensure the ethical production, design, and use of autonomous AI. Most significantly, there needs to be more research on the topic, because true self-sentience in AI has not yet fully developed and creators still have a lot of control over it.

Self-Awareness

What is it?

Before we can discuss the self-sentient robot, we need to define self-sentience. We define it as


“The feeling or sensation of one’s self as distinguished from perception and thought of others” 

 

It is difficult to truly determine self-sentience, and a variety of tests have been used. In the past, the most commonly used test was the Mirror Test, but it has been determined to be a test of learned behavior rather than self-sentience.

 

What is used today is the Wise Man Test. In this test, there are three entities, and each is given a “pill.” One of the pills is a placebo and the other two mute the robots that take them. To pass the test, the recipient of the placebo must be able to determine that it received the placebo.

AI Ethics

Concerns

As more machines are able to pass a self-sentience test, we begin to wonder whether or not artificial intelligence should be given “personhood” status. If machines can think and make decisions of their own will, should we consider giving them robot rights? Recently, the European Parliament drafted a set of regulations to ensure the ethical development of artificial intelligence. However, these regulations are still subject to many grey areas and controversies as long as autonomous robots are still being developed and researched.


