CultureAI Tackles GenAI Data Leaks With New Solution

CultureAI, a leading Human Risk Management Platform, has today announced the launch of its innovative Generative AI solution. The solution enables security teams to monitor and flag employees’ use of generative AI in the workplace, promptly identifying instances of sensitive data being shared with these tools and minimizing the risk of sensitive data loss. 

The risk of accidental data exposure via generative AI (GenAI) is a growing concern as the number of employees using these tools continues to increase, with limited visibility into the information being shared on these platforms. At the same time, many businesses are trying to strike a balance between the need for innovation and the need to minimize security risks.  

“GenAI tools like ChatGPT and Bard are extremely popular and offer significant growth opportunities for companies; however, unchecked usage poses significant risks for organizations,” says James Moore, Founder and CEO at CultureAI. “Without visibility of how employees are using AI tools, organizations cannot implement the real-time coaching required to help employees harness the power of these tools safely and effectively.” 

CultureAI is the first human risk management provider to offer real-time visibility into the accidental disclosure of sensitive data or misuse of GenAI tools, along with offering tailored coaching in response. To minimize friction for employees, the solution only flags a risk when sensitive data such as personally identifiable information (PII) is copied into GenAI applications. The solution can also track if employees are logging into GenAI apps with corporate credentials. 

The solution uses pattern detection (matching sequences of characters), in addition to specific words or phrases, to identify confidential information posted to GenAI platforms. Organizations can define and monitor their own patterns and terms, or use out-of-the-box patterns created by CultureAI, such as tax codes or national insurance numbers. These are regularly reviewed for accuracy, and organizations can also assign each one a concern level: high, medium, or low.   
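The combination of character-sequence patterns and severity levels described above can be illustrated with a short sketch. This is a hypothetical example, not CultureAI's actual implementation: the pattern names, regular expressions, and function below are assumptions chosen to mirror the out-of-the-box examples mentioned (UK national insurance numbers and tax codes).

```python
import re

# Hypothetical severity-weighted patterns, in the spirit of the article's
# examples. Real detection rules would be more rigorous than these sketches.
PATTERNS = {
    # Rough UK National Insurance number shape, e.g. "AB123456C"
    "ni_number": (re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"), "high"),
    # Rough UK PAYE tax code shape, e.g. "1257L"
    "tax_code": (re.compile(r"\b\d{3,4}[LMNTK]\b"), "medium"),
}

def scan_text(text):
    """Return (pattern_name, severity, matched_text) tuples for any hits."""
    hits = []
    for name, (pattern, severity) in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, severity, match.group()))
    return hits

hits = scan_text("My NI number is AB123456C and my tax code is 1257L")
```

In a deployment like the one described, a scan of this kind would run on text copied into a GenAI application, and a match would raise an open risk at the configured concern level rather than block the employee outright.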

When it comes to reporting, the Generative AI solution gives security teams immediate visibility of when and where an employee submits PII or other confidential data to an AI tool. At that point, an open risk appears on the CultureAI Human Risk Dashboard, where it can be triaged.  

CultureAI’s Generative AI solution will help organizations gain visibility of any risks and enable them to orchestrate appropriate coaching and interventions, providing several key benefits: 

  • Real-time employee education: Just-in-time education delivers targeted guidance or training to employees precisely when they need it, enhancing their ability to safely utilize AI tools. 
  • Risk reduction: Targeted coaching significantly reduces the likelihood of accidental disclosure of sensitive information over time, lowering the risk of security breaches. 
  • Comprehensive reporting: Track and analyze behavior changes over time through digestible, shareable analytics and reporting tools. 
  • Compliance: The solution aids in upholding compliance with data protection regulations and standards in the workplace.  

Organizations can monitor GenAI applications such as ChatGPT, Bard, and Bing through Microsoft Edge and/or Google Chrome extensions. Customers already utilizing these extensions with CultureAI can enable the Generative AI solution at the click of a button. 

