
New Software to Verify How Much Information an AI Has Gleaned

Amid growing worldwide interest in generative artificial intelligence (AI) systems, scientists at the University of Surrey have developed verification software.

Image Credit: University of Surrey

The newly developed software can verify how much information an AI has gleaned from an organization's digital database.

Surrey's verification software could be deployed as part of a company's online security protocol, helping an organization understand whether an AI has learned too much or has accessed sensitive data.

The software can also determine whether an AI has identified, and is capable of exploiting, flaws in software code. In an online gaming context, for example, it could detect whether an AI has learned to win consistently at online poker by exploiting a coding fault.

In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem that has taken us years to find a working solution for.

Dr. Solofomampionona Fortunat Rajaona, Study Lead Author and Research Fellow, Formal Verification of Privacy, University of Surrey

Rajaona added, “Our verification software can deduce how much AI systems can learn from their interactions, whether they have enough knowledge to enable successful cooperation, and whether they have too much knowledge that would break privacy. Through the ability to verify what AI has learned, we can give organizations the confidence to safely unleash the power of AI into secure settings.”
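The article does not describe the team's actual method, but the general idea behind verifying "what an agent knows" is often framed in terms of possible-worlds (epistemic) reasoning: an agent knows a fact only if every system state consistent with its observations agrees on that fact. The sketch below is a minimal toy illustration of that idea, not the Surrey tool; the variable names, the world model, and the notion of a "secret" are all illustrative assumptions.

```python
# Toy possible-worlds sketch of epistemic knowledge checking
# (illustrative only -- not the University of Surrey's tool).
# An agent "knows" a fact when every world consistent with its
# observations agrees on that fact.
from itertools import product

# Hypothetical system: each possible world assigns values to three
# variables. 'secret' is data the agent should NOT be able to deduce.
WORLDS = [
    {"query": q, "response": (q + s) % 4, "secret": s}
    for q, s in product(range(4), repeat=2)
]

def consistent_worlds(observation):
    """Return the worlds that match everything the agent observed."""
    return [w for w in WORLDS
            if all(w[k] == v for k, v in observation.items())]

def knows_secret(observation):
    """True if all consistent worlds agree on 'secret', i.e. the agent
    can deduce the secret from its observations alone (a privacy leak)."""
    values = {w["secret"] for w in consistent_worlds(observation)}
    return len(values) == 1

# The agent sees its own query and the system's response.
obs = {"query": 2, "response": 1}
print(knows_secret(obs))  # True: query + response pin down the secret
```

In this toy model the check reports a privacy breach, because observing both the query and the response uniquely determines the secret; a verifier built on this principle would flag such an interaction as revealing too much.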

The study describing Surrey's software won the best paper award at the 25th International Symposium on Formal Methods.

Over the past few months, there has been a huge surge of public and industry interest in generative AI models fuelled by advances in large language models such as ChatGPT.

Adrian Hilton, Professor and Director, Institute for People-Centred AI, University of Surrey

Hilton continued, “Creation of tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of datasets used in training.”

Source: https://www.surrey.ac.uk/
