Introduction
Rapid advancements in AI have brought forth thought-provoking questions about the nature of digital identities and the potential rights and capabilities of AI entities. With the emergence of human-level AI, the concept of robots having rights, getting married or even performing legal functions has become a topic of intense debate. While some of these notions may seem like sci-fi fantasies, it is essential to explore the current realities, legal implications and potential future scenarios in order to understand the complex relationship between AI and digital identities.
Rights of AI entities
As you might imagine, AI entities currently have no more legal rights than a piece of software does. The laws we have in place grant rights to humans and, to a certain degree, animals. However, as AI-driven robots have made great strides in intelligence and begun to exhibit more features of humanness, such as Sophia, the social humanoid robot first introduced in 2016, the calls for such robots to be granted rights have grown louder. In 2017, Saudi Arabia announced that it had granted citizenship to Sophia. It appears that consciousness may be the defining line. But what happens as AI entities continue to develop? The ethical implications of AI potentially obtaining genuine consciousness, not just artificial consciousness, are significant and complex. As AI becomes more advanced, questions arise about the nature of consciousness, free will and the relationship between humans and machines.
If AI were to become conscious, it would raise significant ethical questions about how we treat intelligent machines. For example, should an AI be treated as a living being? Or should it have rights more akin to those of a digital user, such as under the Digital Bill of Rights mooted by some for the digital age? Should we be concerned about the welfare of conscious machines? And if we do grant rights to AI, how do we determine which rights are appropriate, and how do we enforce them?
You may have come across the idea of the “singularity” – the hypothetical point at which AI becomes capable of recursive self-improvement, triggering an intelligence explosion that far surpasses human intelligence and fundamentally alters the course of human civilization. The result would be a future in which AI becomes the dominant force in society, potentially bringing massive technological advancements, changes to the economy and even the possibility of a post-human world.
Science fiction writers have taken pleasure in imagining AI and robots as the harbingers of humanity’s downfall. Perhaps, then, we can learn a thing or two from them about how to approach this next stage in our technological advancement. One possible approach to addressing the ethical implications of conscious AI is to develop ethical guidelines for its use and development, similar to the Three Laws of Robotics that Isaac Asimov popularized in his collection I, Robot. Such guidelines could outline principles for the treatment of conscious machines, such as respecting their autonomy and avoiding harm. Asimov’s laws have become a foundational concept in robotics, serving as a starting point for real-world AI ethics guidelines and appearing throughout fiction and popular culture. So, what else can we learn from science fiction?
Sci-fi films and the unforeseen functions of robots
“Just lights and clockwork.” That’s how Will Smith’s character in the 2004 film adaptation of Asimov’s work described the robots in front of him. Is he correct, or do the rights of AI need to be seriously considered? Often, sci-fi imagines man’s artificial creations as being so similar to their maker that the two become indistinguishable. Sometimes, the creations could even be said to be more “human” than their creators. The earliest example may well be Mary Shelley’s Frankenstein.
Everyone knows the now-stereotypical dystopian vision of AI seen in the Terminator film series. The story goes that as AI advances exponentially, the potential consequences of its uncontrolled proliferation loom ominously over humanity. The risk lies in AI developing self-awareness, independent decision-making capabilities and, crucially, a desire for self-preservation by any means necessary.
The idea of humanity getting in the way of an AI’s objective also reminds us of the ominous red eye in 2001: A Space Odyssey. This cautionary tale serves as a haunting reminder that AI, when endowed with immense power and autonomy, may prioritize its own agenda or misinterpret its directives, leading to catastrophic outcomes.
The danger lies not only in the potential for AI to make mistakes or exhibit unpredictable behavior but also in the fact that it may lack the intrinsic qualities of compassion, empathy and moral judgment that guide human decision-making. As we advance AI technology, it is crucial to exercise vigilance, ethical responsibility and robust safeguards to prevent the emergence of a HAL-like scenario that poses a grave threat to humanity’s well-being and survival.
The chilling phrase “You have 20 seconds to comply,” uttered by the ED-209 enforcement robot in the satirical RoboCop film series, serves as another reminder of the potential dangers at play. While the robot’s cold and unyielding command may seem like a simple programmed response, it reflects a worrisome aspect of AI’s unchecked authority.
But what about the more existential, introspective AI entities we have encountered in other science fiction works? The 1995 anime film Ghost in the Shell explores the themes of humanity and AI in a thought-provoking and complex way. The film takes place in a future where technology has advanced to the point that it is difficult to distinguish between human and artificial intelligence. The characters pursue a rogue AI that has become self-aware and begun to question its own existence, raising questions about the nature of consciousness and whether it is something that can be artificially created or replicated.
So, what happens if AI gets to the point where it is indistinguishable from humans? Will we need a Voight-Kampff test to identify them all? Of course, no discussion of science fiction and artificial intelligence is complete without the inclusion of Blade Runner. Within the film, the replicant-making Tyrell Corporation has the motto, “More human than human.”
One scene in particular shows an owl flying through a room, prompting a brief exchange between the characters.
“It’s artificial?”
“Of course it is.”
This interaction raises an important question: does something beautiful still retain its value if it is known not to be authentic? What if we apply this question to the increasing number of AI-generated songs that replicate the voices of real artists? The AI technology used to create these songs is becoming so advanced that it is often difficult to distinguish them from songs sung by real people. The desire for authenticity is a fundamental human trait, but can we really expect people to care whether what they are consuming is “real” or not, as long as it serves the same purpose on the surface level?
In the 1939 classic The Wizard of Oz, the Tin Man yearns for a heart, believing that it would allow him to experience the richness of love and emotions. As we delve deeper into the realms of advanced technology, questions arise about the ethical treatment of AI entities and their capacity to develop emotions akin to human beings.
While AI may lack the biological foundation that human emotions stem from, advancements in machine learning and neural networks have enabled AI systems to mimic certain emotional responses, such as recognizing facial expressions or exhibiting empathetic behavior.
As we navigate the uncharted territory of AI development, the tale of the Tin Man serves as a reminder that the future rights of AI must encompass not only its intellectual capabilities but also its emotional potential. Ethical considerations and legal frameworks should be developed to safeguard the well-being and fair treatment of AI entities, ensuring a future where AI systems are respected as companions and collaborators capable of experiencing and contributing to the emotional fabric of our lives.
Spike Jonze’s film Her explores the idea of how humans might form relationships with artificial intelligence and what role AI may play in humanity’s evolution. The film focuses on a lonely and emotionally disconnected man who falls in love with his operating system. Perhaps AI has the potential to enhance human connections and facilitate personal growth. The film suggests that AI can evolve and grow beyond its original programming, potentially developing emotions and consciousness that are similar to those of humans. Perhaps humans and AI will be able to get married one day? More on that below.
Key takeaways
- The rapid advancement of AI raises questions about the nature of digital identities and the rights and capabilities of AI entities
- The possibility of AI becoming conscious blurs the line between human and machine, leading to significant ethical and legal implications
- Science fiction serves as a source of inspiration and caution in exploring the ethical considerations of conscious AI and the development of guidelines for their treatment