Study Reveals People Become Discouraged by Competent Robots in Competition

In a competition, it may not matter whether you win or lose, but how hard the robot is working.

(Image credit: Cornell Brand Communications)

A research team led by Cornell University has found that when robots beat humans in competitions for cash prizes, people rate themselves as less competent, expend slightly less effort, and tend to dislike the robots.

The research paper, “Monetary-Incentive Competition Between Humans and Robots: Experimental Results,” brought together roboticists and behavioral economists for the first time to investigate how a robot’s performance influences people’s behavior and reactions when humans and robots compete against each other.

Their findings confirmed behavioral economists’ theories about loss aversion, which predict that people will not try as hard when their competitors are performing better, and suggest how workplaces might better organize teams of people and robots working together.

“Humans and machines already share many workplaces, sometimes working on similar or even identical tasks,” said Guy Hoffman, assistant professor in the Sibley School of Mechanical and Aerospace Engineering. Hoffman and Ori Heffetz, associate professor of economics in the Samuel Curtis Johnson Graduate School of Management, are the senior authors of the study, which was presented on March 11 at the ACM/IEEE International Conference on Human-Robot Interaction in Daegu, South Korea.

“Think about a cashier working side-by-side with an automatic check-out machine, or someone operating a forklift in a warehouse, which also employs delivery robots driving right next to them,” Hoffman said. “While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves. Our research is the first that specifically sheds light on these effects.”

Alap Kshirsagar, a doctoral student in mechanical engineering, is the first author of the paper. Bnaya Dreyfuss and Guy Ishai, economics graduate students at the Hebrew University of Jerusalem, also contributed.

In the study, humans competed against a robot in a tedious task: counting the number of times the letter “G” appears in a string of characters, then placing a block in the bin matching the number of occurrences. The person’s chance of winning each round was determined by a lottery based on the difference between the robot’s and the human’s scores: if the scores were tied, the human had a 50 percent chance of winning the prize, and that probability rose or fell depending on which contestant was doing better.

To make sure contestants were aware of the stakes, the screen displayed their chance of winning throughout each round.
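The paper’s exact mapping from the score gap to the lottery odds is not reported here, but the general idea can be sketched as follows. The `win_probability` helper and its linear slope are illustrative assumptions for this sketch, not the study’s actual formula.

```python
import random

def win_probability(human_score, robot_score, slope=0.05):
    """Illustrative win probability: 50% when scores are tied, shifted up or
    down in proportion to the score difference. The linear mapping and the
    slope value are assumptions made for this sketch."""
    p = 0.5 + slope * (human_score - robot_score)
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability

def run_lottery(human_score, robot_score):
    """Decide the round by a lottery weighted toward the better performer."""
    return random.random() < win_probability(human_score, robot_score)

# Example: the human counted two fewer "G" occurrences than the robot this round.
p = win_probability(human_score=8, robot_score=10)
print(f"Chance of winning the prize: {p:.0%}")  # 40% under the assumed slope
```

Under this kind of scheme, falling behind the robot lowers the displayed odds round by round, which is what lets the researchers observe how a widening gap affects a participant’s effort.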

For the behavioral economists, the study offered an opportunity to test theories about loss aversion in a controlled setting: the effort of two humans in competition cannot be manipulated, but a robot’s effort can. It also showed how loss aversion might influence people’s effort during a real-time competition, which had not been studied before.

“The beauty of this project is that it is the birth of a true collaboration across engineering and economics—one of the things Cornell is good at,” said Heffetz, who is also an associate professor of economics at the Hebrew University. “We tried to find questions that interest both crowds, and then we tried to design an experiment that gets the economics right, and is feasible from a human-robot interaction point of view.”

After every round, participants filled out a questionnaire rating the robot’s competence, their own competence, and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower, and their own competence lower.

“We were surprised that people found themselves less competent against a fast, competitive robot, even though there’s no direct interaction. The robot is doing its own work, you’re doing your own work,” said Alap Kshirsagar, the study’s first author and a doctoral student in mechanical engineering in the Sibley School of Mechanical and Aerospace Engineering.

Most participants did not appear to anthropomorphize the robot, with comments including, “I sort of realized I am just competing with an idea of mechanization, and the arm is just a prop to signify it,” though one participant wrote, “It was obvious when the robot was going easy on me.” In fact, the robot’s effort differed from round to round but did not vary within a round.

The researchers were surprised that the value of the cash prize did not seem to strongly influence people’s effort, even though earlier experiments had indicated people would work harder as the prize value increased.

The team plans to investigate the reason in future work, but suggested participants may have been so focused on winning that they paid little attention to the prize’s actual value.

The study was partially supported by the Israel Science Foundation.
