Engineers Develop AI-Based Technology to Limit the Spread of Pathogens

Engineers from the University of Cambridge have developed a patented technology called “predictive touch” as part of a research collaboration with Jaguar Land Rover.

Image Credit: Shutterstock.com/TippaPatt

The new technology combines sensor hardware with artificial intelligence to predict a user’s intended target on touchscreens and other interactive displays or control panels, selecting the correct item before the user’s hand reaches the display.

Touchscreen technology has been integrated into a growing number of passenger cars to control navigation, entertainment, and climate systems. But users often miss the item they are aiming for, for instance because of vibration or acceleration caused by road conditions, and have to select it again. This takes the driver’s attention off the road and can increase the risk of accidents.

In on-road trials, laboratory-based tests, and driving simulators, the predictive touch technology reduced interaction time and effort by up to 50%, because it can predict the user’s intended target with high accuracy early in the pointing task.

With lockdown restrictions easing around the world, the researchers say predictive touch could also prove useful in a post-COVID-19 world.

Touchscreens are used for many day-to-day consumer transactions: ticketing at cinemas and railway stations, airport check-in kiosks, ATMs, and self-service checkouts in supermarkets, as well as in many industrial and manufacturing applications.

Avoiding the need to physically touch interactive displays, including touchscreens, could reduce the risk of spreading pathogens, such as the common cold, influenza, or even the current coronavirus, from surfaces.

The predictive touch technology could also be integrated into smartphones, where it may be useful while walking or jogging, allowing users to select items accurately and easily without any physical contact.

The technology also works in situations such as a moving vehicle on a bumpy road, or where the user has a motor disability, such as Parkinson’s disease or cerebral palsy, that causes tremors or sudden hand jerks.

Touchscreens and other interactive displays are something most people use multiple times per day, but they can be difficult to use while in motion, whether that’s driving a car or changing the music on your phone while you’re running. We also know that certain pathogens can be transmitted via surfaces, so this technology could help reduce the risk for that type of transmission.

Simon Godsill, Study Lead and Professor, Department of Engineering, University of Cambridge

The predictive touch technology uses machine intelligence to determine the item the user intends to select on the screen early in the pointing task, speeding up the interaction.

It relies on a gesture tracker, based on vision-based or radio-frequency sensors that are increasingly common in consumer electronics; contextual information such as the user profile, interface design, and environmental conditions; and data from other sensors, such as an eye-gaze tracker, to infer the user’s intent in real time.
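For illustration only, the sketch below shows one simple way this kind of intent inference could work in principle: it maintains a probability distribution over on-screen targets from a tracked fingertip trajectory, with eye-gaze as optional extra evidence. The Target and IntentEstimator names, the heading-and-distance likelihood, and the 0.8 confidence threshold are assumptions made for this example; they are not the patented predictive touch algorithm.

```python
# Toy Bayesian-style intent estimator over on-screen targets (illustrative only).
import math
from dataclasses import dataclass


@dataclass
class Target:
    name: str
    x: float
    y: float


class IntentEstimator:
    def __init__(self, targets, gaze_weight=0.3):
        self.targets = targets
        # Start from a uniform prior over all selectable items.
        self.probs = [1.0 / len(targets)] * len(targets)
        self.gaze_weight = gaze_weight
        self.prev = None  # previous fingertip position

    def update(self, fingertip, gaze=None):
        """Update target probabilities from the latest fingertip sample.

        fingertip: (x, y) position of the tracked hand or finger.
        gaze: optional (x, y) eye-gaze point used as weak extra evidence.
        """
        if self.prev is not None:
            dx, dy = fingertip[0] - self.prev[0], fingertip[1] - self.prev[1]
            speed = math.hypot(dx, dy)
            for i, t in enumerate(self.targets):
                tx, ty = t.x - fingertip[0], t.y - fingertip[1]
                dist = math.hypot(tx, ty) + 1e-6
                # Targets roughly in the direction of motion, and nearby, are more plausible.
                cos_sim = (dx * tx + dy * ty) / (speed * dist) if speed > 1e-6 else 0.0
                motion_like = math.exp(2.0 * cos_sim) / (1.0 + dist)
                gaze_like = 1.0
                if gaze is not None:
                    gdist = math.hypot(t.x - gaze[0], t.y - gaze[1])
                    gaze_like = math.exp(-self.gaze_weight * gdist / 100.0)
                self.probs[i] *= motion_like * gaze_like
            total = sum(self.probs)
            self.probs = [p / total for p in self.probs]
        self.prev = fingertip

    def best(self, threshold=0.8):
        """Return the most likely target once its probability exceeds the threshold."""
        p, i = max((p, i) for i, p in enumerate(self.probs))
        return self.targets[i] if p >= threshold else None


# Usage: feed simulated fingertip samples moving toward the "Navigation" icon.
targets = [Target("Navigation", 100, 50), Target("Music", 300, 50), Target("Climate", 500, 50)]
est = IntentEstimator(targets)
for step in range(10):
    est.update(fingertip=(40 + step * 7, 400 - step * 38), gaze=(110, 60))
    picked = est.best()
    if picked:
        print(f"Predicted target after {step + 1} samples: {picked.name}")
        break
```

In this toy example, the estimator settles on the “Navigation” item after only a few samples because the simulated fingertip consistently moves toward it; the real system fuses richer sensor and contextual data than this sketch does.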

This technology also offers us the chance to make vehicles safer by reducing the cognitive load on drivers and increasing the amount of time they can spend focused on the road ahead. This is a key part of our Destination Zero journey.

Lee Skrypchuk, Human Machine Interface Technical Specialist, Jaguar Land Rover

The technology could also be integrated into displays that have no physical surface, such as 2D or 3D projections or holograms. It also offers additional design flexibility and supports inclusive design, since interface functions can be seamlessly personalized for particular users, and the size or location of the display is no longer constrained by the user’s ability to reach out and touch it.

Our technology has numerous advantages over more basic mid-air interaction techniques or conventional gesture recognition, because it supports intuitive interactions with legacy interface designs and doesn’t require any learning on the part of the user.

Dr Bashar Ahmad, Senior Research Associate, University of Cambridge

Dr Ahmad headed the development of the underlying algorithms and the technology with Professor Godsill.

“It fundamentally relies on the system to predict what the user intends and can be incorporated into both new and existing touchscreens and other interactive display technologies,” Dr Ahmad concluded.

The software-based solution for contactless interactions has reached high technology readiness levels and can be seamlessly integrated into existing touchscreens and interactive displays, provided the correct sensory data is available to feed the machine learning algorithm.
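As a purely illustrative sketch of how such a predictor might plug into an existing selection path, the hypothetical adapter below accepts either a confident early prediction or a conventional physical touch and forwards whichever arrives first to the application’s existing selection handler. The PredictiveTouchAdapter name, its callbacks, and the confidence threshold are assumptions for this example, not an interface from the Cambridge or Jaguar Land Rover system.

```python
# Illustrative wiring of a confidence-based predictor into a legacy touch UI.
from typing import Callable, Optional


class PredictiveTouchAdapter:
    """Routes selections from either a physical touch or an early, confident prediction."""

    def __init__(self, on_select: Callable[[str], None], threshold: float = 0.85):
        self.on_select = on_select
        self.threshold = threshold
        self._done = False

    def on_prediction(self, item: Optional[str], confidence: float) -> None:
        # Called by whatever model consumes the sensor stream (hand tracker, gaze, etc.).
        if not self._done and item is not None and confidence >= self.threshold:
            self._done = True
            self.on_select(item)  # select before the finger reaches the screen

    def on_touch(self, item: str) -> None:
        # Legacy path: a physical touch still works and wins if no prediction fired first.
        if not self._done:
            self._done = True
            self.on_select(item)


# Usage with a stand-in selection handler:
adapter = PredictiveTouchAdapter(on_select=lambda item: print(f"Selected: {item}"))
adapter.on_prediction("Navigation", confidence=0.6)   # too uncertain, ignored
adapter.on_prediction("Navigation", confidence=0.9)   # confident, selects early
adapter.on_touch("Navigation")                        # later physical touch is ignored
```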

Can Touchless screens prevent future epidemics and car accidents?

Video Credit: University of Cambridge.

Source: https://www.cam.ac.uk/
