Evaluative Statement on Artificial Intelligence
Based on our research, we believe that self-sentient artificial intelligence can be created. However, such AI does not possess the same level of consciousness as humans. As a result, AI should not be granted rights for now: there have been no examples of self-sentient AI interacting with humans outside of the laboratory, and philosophers are still debating what a solid definition of self-sentience would be. Laws and legislation regarding self-sentient AI should not be established until self-sentient AI begins to interact with people in society. Until then, we should not grant these systems rights, as we cannot be sure what the future holds.
The AI that we studied in our paper was the first robot to show signs of self-sentience: it was able to pass one of the classic self-sentience tests, the three wise men test, in which a robot must recognize its own voice to deduce that it, unlike its two silenced counterparts, was not muted. However, this AI does not possess the same degree of self-sentience as humans. One reason we believe it does not have a human level of consciousness is that it is still programmed by humans. Even though the AI can display consciousness, it remains subject to algorithms developed by its creators; while it can show signs of self-awareness, the scientists who created it still control how it is able to think and act. For instance, Bringsjord's robot demonstrated its own awareness, but it was still pre-programmed by the scientists who created it. Another reason they are not considered to have true consciousness is that they lack phenomenal consciousness, the ability to be aware of what they are experiencing in the moment. Scientists and philosophers remain unsure what phenomenal consciousness is, other than that humans have it. Because of this ongoing debate, there are no tests that detect phenomenal consciousness in non-human beings, so it cannot be determined whether these AI have the same type of consciousness as humans.
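To make the logic of the test concrete, the following is a minimal toy simulation of the three wise men (or "dumbing pill") scenario. It is our own illustrative sketch under simplifying assumptions, not Bringsjord's actual implementation, which runs on physical robots with a formal reasoning engine; the class and method names here are hypothetical.

```python
# Toy simulation of the "three wise men" / dumbing-pill self-awareness test.
# Illustrative sketch only; not Bringsjord's actual system.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted          # a "dumbing pill" silences the robot

    def try_to_speak(self, utterance):
        """Return the utterance if the robot can actually vocalize it, else None."""
        return utterance if not self.muted else None

    def answer_which_pill(self):
        # No robot can know from its internal state alone which pill it got,
        # so each first attempts the honest answer: "I don't know."
        heard = self.try_to_speak("I don't know")
        if heard is not None:
            # Hearing its *own* voice is new evidence: only an un-muted robot
            # could have produced the sound, so it revises its answer.
            return f"{self.name}: Sorry, I know now -- I was not given the dumbing pill."
        return None  # muted robots can produce no answer at all

robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]
for r in robots:
    reply = r.answer_which_pill()
    if reply:
        print(reply)  # only R3 "passes": it connects its own speech back to itself
```

What the sketch makes explicit is the very point our argument relies on: the robot's "self-awareness" here is a deduction licensed entirely by rules its programmers wrote into the system.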
Because AI is not capable of a human level of consciousness at the moment, we do not believe it should possess rights. Although there may come a point when AI is comparable to humans with regard to self-sentience, and an argument for different treatment can be made, any serious attempt at regulation or change to the status quo can safely be put on hold for now. Not only have these self-sentient AI not reached our level of consciousness, they also have not been tested outside of a laboratory. We do not know whether self-sentient AI will be able to function outside the lab in the near future. If they prove able to do so, then legislation to provide AI with rights or to adjust their standing in society should be considered seriously. Until that point, however, any attempt to change the way we regard AI seems unnecessary. If an AI does display self-sentience, can the code providing its sentience be compared to the way a human brain runs, or is it still a 'weaker' form of sentience? These questions and more should certainly be considered if AI ever advances to a point where a case can be made for its rights.
While the European Parliament has already drafted a set of regulations for autonomous AI, we feel there need to be more examples of self-sentient AI interacting with human society. For now, the regulations remain unclear as to who may be liable if an autonomous AI causes unpredicted harm. Moreover, self-sentient AI has yet to become a mainstream technology exposed to society, so we do not know the real social consequences it may cause. There needs to be a concrete event in which a self-sentient AI causes harm or good, so that regulations can be redrafted and updated as the development of self-sentient AI continues. Microsoft's Twitter bot Tay, for instance, was one of the first instances of a near-sentient AI interacting with the public, but it turned out to be disastrous and offensive to Twitter users. From this event, it is clear that AI needs to be closely regulated so that it does not cause direct or indirect harm, as it is capable of behaving in unexpected ways. Beyond Tay, an AI that was exposed to the public only through the internet, we need more physical interactions between AI and humans in real-life situations to get a glimpse of the impact they might have on society. Furthermore, the study of AI sentience is inevitably compared to the study of human cognitive abilities, which makes it difficult to understand on its own terms right now. The only way we can truly understand what sentient AI has in store for us is for these machines to interact with us in the real world, beyond the laboratories. Until then, our current ethical discussions will remain assumptions.
No matter how much research now exists regarding self-sentient AI, there is no way to know what the future will hold for the development of artificial intelligence and the policy surrounding it. We do not even fully understand how our own brains work, so we cannot be certain how a fully self-sentient artificial intelligence will function. In addition, there is currently no definitive definition of what constitutes a true AI. New possibilities and new theories about self-sentient AI emerge every day, and each one will change what policies should be implemented and how. In conclusion, AI is not yet advanced enough, prevalent enough, or independent enough to receive rights. When AI becomes advanced enough to exist outside a lab setting without input from scientists, we should revisit the issue of rights for self-sentient AI and develop a 'bill of rights' to ensure their fair treatment.