Study Shows Perceived Presence of Key Capacities Could Make People More Likely to Hold Robots Morally Responsible

A pedestrian in Tempe, Arizona, was struck and killed by a self-driving car last year. Now, the victim’s family is suing the State of Arizona and the City of Tempe for negligence.

However, in an article published in the journal Trends in Cognitive Sciences on April 5, 2019, cognitive science and computer science researchers ask at what point the public will begin to hold self-driving vehicles and other robots responsible for their own actions, and whether it is justified to blame them for wrongdoing.

“We’re on the verge of a technological and social revolution in which autonomous machines will replace humans in the workplace, on the roads, and in our homes. When these robots inevitably do something to harm humans, how will people react? We need to figure this out now, while regulations and laws are still being formed.”

Yochanan Bigman, University of North Carolina at Chapel Hill

The new article investigates how the human moral mind is likely to assign responsibility to robots. The authors argue that the presence, or perceived presence, of certain key capacities can make people more likely to hold a machine morally responsible.

Such capacities include autonomy, the ability to act without human input. A robot’s appearance matters as well: the more humanlike a robot looks, the more likely people are to ascribe a human mind to it. Other factors that can lead people to view robots as having “minds of their own” include awareness of the situations they find themselves in and the ability to act freely and with intention.

These issues have significant implications for how people interact with robots, and they are vital considerations for the people and companies who develop and operate autonomous machines. The authors further contend that there may be situations in which a robot taking the blame for harm inflicted on humans could shield the people and companies ultimately responsible for programming and directing it.

As the technology continues to advance, other interesting questions will arise, such as whether robots should have rights. A 2017 European Union report and the American Society for the Prevention of Cruelty to Robots have already argued for extending certain moral protections to autonomous machines, the authors note. Such arguments, they clarify, usually center on the effect machine rights would have on humans, since extending the moral circle to include machines might serve to safeguard people in some cases.

Robot morality may still sound like science fiction, but that is exactly why it is vital to ask such questions now, the authors state.

“We suggest that now—while machines and our intuitions about them are still in flux—is the best time to systematically explore questions of robot morality,” they wrote. “By understanding how human minds make sense of morality, and how we perceive the mind of machines, we can help society think more clearly about the impending rise of robots and help roboticists understand how their creations are likely to be received.”

As the incident in Tempe underscores, humans already share roads, hospitals, and skies with autonomous machines. Inevitably, a growing number of people will be hurt. How robots’ capacity for moral responsibility is perceived will have significant implications for real-world public policy decisions, and those decisions will help shape a future in which people may increasingly coexist with ever more sophisticated decision-making machines.

The National Science Foundation and a grant from the Charles Koch Foundation supported the study.

Source: http://www.cellpress.com
