A recent report on artificial intelligence highlights that researchers could soon enlist help from digital assistants in order to review huge volumes of literature.
Academics at the Universities of Strathclyde and Glasgow conducted successful experiments in which search agents went head-to-head with humans in a computer search challenge.
The agents proved more effective than the human participants and, although their behaviour differed markedly, they could be configured to provide a realistic and credible simulation of a human researcher.
There’s currently a great deal of discussion about artificial intelligence and the role it could play in the future. An autonomous search agent could be useful for researchers reviewing vast amounts of literature in subjects such as law and medicine. In this type of information-intensive review, it could read through and assess information while the researcher is working on other things, then suggest other sources of information that would be relevant.
Dr Leif Azzopardi, a Senior Lecturer in Strathclyde’s Department of Computer and Information Sciences and a partner in the research, said:
“Previously, the simulated users we created were unrealistic and lacked agency. Their decisions were made stochastically, by a ‘roll of the dice’, rather than being based on the information actually found and the underlying need for information.
“The model we have developed takes account of what the autonomous agent knows, has done and has seen, along with what it considers to be relevant. It is constantly evolving.
“We conducted a series of experiments in which 48 people were given two search tasks to complete. We then set the autonomous agents the same tasks, under the same conditions, and they significantly outperformed the human searchers.”
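As a rough illustration of the distinction Dr Azzopardi draws, a simulated user can base its decision to keep searching on the information it has actually found rather than on chance alone. The sketch below is hypothetical; the class, parameter names and stopping rule are assumptions for illustration, not the authors’ actual model:

```python
import random

def stochastic_continue(p_continue=0.7):
    """Baseline simulated user: continue searching by a 'roll of the dice'."""
    return random.random() < p_continue

class StatefulSearcher:
    """A state-aware simulated user: continues while recent results still
    look relevant, given what it has seen and judged so far."""

    def __init__(self, patience=3):
        self.seen = []            # documents examined so far
        self.relevant = []        # documents judged relevant
        self.patience = patience  # tolerated run of non-relevant results

    def examine(self, doc_id, is_relevant):
        self.seen.append(doc_id)
        if is_relevant:
            self.relevant.append(doc_id)

    def should_continue(self):
        # Stop once the last `patience` examined documents were all non-relevant.
        if len(self.seen) < self.patience:
            return True
        recent = self.seen[-self.patience:]
        return any(d in self.relevant for d in recent)

searcher = StatefulSearcher(patience=3)
for doc, rel in [("d1", True), ("d2", False), ("d3", False), ("d4", False)]:
    searcher.examine(doc, rel)
print(searcher.should_continue())  # three non-relevant results in a row -> False
```

Unlike the stochastic baseline, the stateful searcher’s behaviour depends on the documents it has examined, which is the kind of grounding the researchers describe.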
David Maxwell, a PhD student in the University of Glasgow’s School of Computing Science, said:
“Our findings are very promising and show that it is possible to create realistic simulations of how humans search. Now we can look to apply this technology to augment the search capabilities of humans to help them process more information and find more relevant material.”
The early findings provide models and infrastructure that could enable the automated evaluation of search engines and the development of collaborative search agents to help information workers and researchers process huge volumes of text.
The work will be presented at the International Conference on Information and Knowledge Management in Indianapolis in October 2016. The theme of the conference is “Frontiers and Applications of Big Data”.