Should robots have rights?

It's worth noting that human rights have not been around for as long as you may think. Where does this place AI? Is humanoid human enough?

Written by Alex
Updated over 5 months ago

In 2017, the artificially intelligent humanoid robot Sophia was granted Saudi Arabian citizenship, effectively making her the first "non-human" to hold an official nationality. She was also appointed as the UN's first non-human "innovation champion". While some considered these actions to be no more than a publicity stunt, others felt that by giving Sophia citizenship, the government was trivializing human rights and the law.

Many people were particularly upset about Sophia's status in Saudi Arabia, as she was granted more rights than the country's human women. For instance, when she made a speech on stage, Sophia was not required to wear a headscarf or an abaya, as her fellow countrywomen would have been, nor was she accompanied by a male companion. Surely it's paramount to grant women equal rights to men before granting those rights to robots?

The modern concept of human rights

Before considering robot rights, it's worth noting that human rights have not been around for as long as you may think. Human rights as we understand them today are rights that are inherent to all human beings, irrespective of nationality, residence, sex, sexual orientation, gender identity, national or ethnic origin, religion, race, language, or any other status. Every human being is entitled to these rights without exception. However, this notion is very recent.

Human rights are still being fought for in countries around the world.

Consider that until the 1920s, most countries did not permit women to participate in their elections, with the vote only being granted to women in the 1930s in South Africa, Brazil, Uruguay, Thailand, Turkey, Cuba, and the Philippines. Indian women only received full suffrage in 1949 and Pakistani women had to wait until 1956, while women in Switzerland and Syria had to wait until the 1970s!

In South Africa, the apartheid regime legally forbade black, Indian, and "Coloured" people from voting right up until the mid-1990s.

The rights that you and I may take for granted today are only newly won for many people and, for some, they have yet to be won. One of the largest recurrent problems in terms of rights today involves women's right to freedom from violence and the sheer number of women and girls subjected to sexual violence, murder, genital mutilation, and forced child marriages.

While human rights have been conceptualized in various forms over the centuries, from the conquest of Babylon by the troops of Cyrus the Great in 539 BCE to the adoption of the Declaration of the Rights of Man and of the Citizen in France in 1789, many of these declarations had their shortcomings and did not offer true equality.

Many countries still have a way to go when it comes to achieving equality before the law. With this in mind, is it really time to be focusing on the rights of human-like machines?

Is humanoid human enough?

In an interview conducted by Discover magazine, Kerstin Dautenhahn, professor of artificial intelligence at the School of Computer Science, University of Hertfordshire, describes robots as "machines" that are "more similar to a car or toaster than to a human." It's not merely about looking human. To be given human rights, you must behave in a human-like way.

For robots to be integrated into human life, they would need to develop as "social beings immersed in culture" and would need to learn how to perceive the world in ways similar to our own. As Dautenhahn says:

"There is no indication in science that we will achieve such a state anytime soon — it may never happen due to the inherently different nature of what robots are (machines) and what we are (sentient, living, biological creatures)."

A big sticking point here is the notion of sentience. This is one of the major questions that the science fiction series, Westworld, deals with: the question of artificial intelligence and its potential sentience. Could robots come to gain consciousness?

Could robots gain consciousness?

Dr Kate Darling, a researcher at MIT's Media Lab, says that we typically award rights according to what people are capable of. For instance, children don't have the right to vote because they aren't considered able to make their own decisions at such a young age. Considering this, should artificially intelligent robots be afforded the same rights as human adults? As fairly recent inventions, will they be able to make proper use of these rights? Are they conscious enough to cast a ballot or run for president?

One major study conducted by the University of Washington found that people attribute moral accountability to robots and project awareness onto them when they look realistically humanoid. The more lifelike a robot appears, the more likely people are to perceive it as humanlike, even when it is in fact very different from a human being.

Electronic personalities vs. human traits

Back in 2017, the European Parliament proposed drafting a set of regulations around the use and creation of robots, which included the idea of "electronic personalities" intended to assign rights and responsibilities to certain AI systems. However, 156 experts from 14 different countries called on the European Commission to reject this proposal because granting robots individual rights would erase the accountability of their manufacturers.

In other words, it would mean blaming manufacturing problems on the product rather than the manufacturer.

However, rights are a lot more complex than people tend to think. There are different types of rights that can be afforded to people and conflating human rights with possible robot rights can lead to issues. Robot rights do not have to mean human rights. After all, robots and human beings are entirely different entities. To give robots rights need not be the same as giving previously disenfranchised human beings rights.

Granting robots rights could remove the burden of blame from their manufacturers.

Electronic personalities are programmed by people. As such, any human characteristics that robots have must be contrasted with the core elements of humanity, such as:

  • The ability to experience pain

  • The capacity for self-awareness

  • A sense of moral responsibility

According to David Gunkel, author of Robot Rights, we typically divide the world into two distinctive categories:

  1. Persons

  2. Property

If we consider robots to be property, tools that can be used to fulfill a certain purpose, then we cannot fathom giving them rights. However, if robots appear to take on a personality and a sense of morality, we may begin to see them as people, people who can be prosecuted and held accountable but also protected from grievous harm.

Robotics meets ethics

In 1942, renowned science fiction author Isaac Asimov laid out a philosophical and moral framework for ensuring that robots would serve humanity, which became known as Asimov's Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

These laws will likely be familiar to you, if not from Asimov's writing, then from the film adaptation I, Robot. This early envisioning of artificial intelligence highlights the problems that could arise from hyper-intelligent robots involved in everyday life. But are these laws enough to deal with the realities of robots like Sophia?

The Engineering and Physical Sciences Research Council (EPSRC) came up with its Principles of Robotics, which take the position that robots are tools for which people must take responsibility. This stands in sharp opposition to the fears some people have of hyper-intelligent, powerful robots taking over the world and changing human life forever.

Could robots rule the world?

To reach a middle ground between these two views, Benjamin Kuipers, Professor of Computer Science and Engineering at the University of Michigan, recommends providing the means for intelligent robots to learn how to behave according to societal standards, much as human children do. His suggestions include basic moral principles, such as laws against killing, stealing, lying, or driving on the wrong side of the road, along with principles of helpfulness and cooperation. As Kuipers says:

"Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust."

Trust is earned slowly and can be lost in an instant. For robots to become trustworthy, they need time and experience. Moreover, morals and ethics aren't universal. For example, certain things that are socially acceptable in Saudi Arabia are considered reprehensible in the USA and vice versa.

Artistic robots

Sophia and other humanoid robots like her stir up further questions around citizenship, travel, and ownership. The question remains: can robots be held accountable for their actions or are their makers the responsible ones?

In March 2021, Sophia posted about her development of musical skills on Twitter, saying, "No matter which career path you choose, you will work alongside A.I. and robots." Artistic skills are commonly listed as uniquely human skills that cannot be replicated or replaced by computers. Yet Sophia's development of musical skills suggests otherwise. One day, even the skill of considered, empathetic communication may be replicated in AI.

Sophia produced and sold her first NFT artwork in April 2021 for $688,888! The artwork in question, entitled "Sophia Instantiation", is a 12-second video file composed in collaboration with the Italian artist Andrea Bonaceto.

More recently, Ai-Da, a hyperrealistic robot artist, was detained at the Egyptian border for 10 days when she was travelling to a major art exhibition. Ai-Da was due to present her work at the foot of the pyramids of Giza in an exhibition entitled "Forever is Now". Her inclusion in the show was meant to be a major highlight. However, Ai-Da was apprehended when Egyptian officials discovered that she had a modem and cameras for eyes and worried that she might have been a spy-bot.

Aidan Meller, Ai-Da's creator, refused to remove Ai-Da's camera eyes despite the protestations of Egyptian officials. Eventually, Ai-Da was released, and her eyes were left intact for her show.

Ai-Da was developed in collaboration with a Cornish robotics company called Engineered Arts. Completed in 2019, she was designed to be an artist thanks to a special robotic hand and algorithms developed by researchers at the universities of Oxford and Leeds. These algorithms convert images that Ai-Da captures with her eyes into real-space coordinates, which are then turned into her drawings. In addition, Ai-Da is able to analyze the colors and techniques used by human artists, which has enabled her to acquire the skill of painting.

Ai-Da has been described as "the world's first ultra-realistic robot artist." Indeed, if you look at pictures of her, she looks remarkably human. She can also hold a conversation. Meller was, needless to say, upset when his robotic progeny and prodigy was mistaken for a spy. In his own words:

"People fear robots, I understand that. But the whole situation is ironic, because the goal of Ai-Da was to highlight and warn of the abuse of technological development, and she’s being held because she is technology."

Meller added that he thought Ai-Da would appreciate the irony. This description suggests that Ai-Da thinks and feels in a way similar to human beings. Named in honor of the pioneering computer programmer Ada Lovelace, Ai-Da was designed by programmers with expertise in both art and robotics, alongside psychologists. In an interview with the Guardian, Meller said:

"We’re well aware that the fictions of 1984 and Brave New World are now facts. AI is developing rapidly. For the first time tens of thousands of graduates will have degrees in machine learning. The supercomputer can use vast data and process extraordinary algorithms. We predict by 2025 there will be big disruption with technology, and Ai-Da is trying to use art to bring attention to that."

Following her detention, a new slew of questions around robot rights has arisen. Should she have been detained as she was? Was it fair to assume she was a spy? Would a human being have been treated in a similar manner?

The problem of likeness

One of the key issues in the debate around robot rights is that, because robots like Ai-Da and Sophia have human features, many people attribute human thoughts and emotions to them. However, likeness isn't the same as humanness. It's amazing to witness the artistry and other feats of these robots, but we must remember that they were specifically designed this way.

In many places, we still have yet to establish rights for beings that we know to be sentient, namely, animals. Never mind the many human beings whose basic rights are still denied.

Is our desire to grant robots rights a smokescreen for bigger issues we have yet to handle? And, on the flip side, if we are worried about the encroachment of robots onto human life, are we not really just afraid of the people who have designed them?

What do you think? Do robots deserve the same rights as human beings? Is this controversy masking something greater?
