Setting a New Standard in AI Safety: Aporia's Guardrails for Multimodal AI

Aporia, the leading AI control platform, today announced the launch of Aporia Guardrails for Multimodal AI Applications. The first-of-its-kind solution mitigates issues in video- and audio-based AI applications, including hallucinations, incorrect responses, compliance violations, and jailbreak attempts. The launch sets a new standard for addressing the unique challenges of the rapidly growing generative AI landscape.

OpenAI’s recently launched GPT-4o, a multimodal model that can handle any combination of text, audio, image, and video, is its most sophisticated model yet and is expected to transform human-AI interactions across industries and in daily life. However, while GPT-4o delivers unprecedented productivity and the most human-like AI experience to date, it also raises a serious accountability problem. A single piece of misinformation spoken to a user could have grave consequences – for example, someone seeking ways to cope with depression being advised toward drug and alcohol abuse, or a banking customer asking to see their financial history and receiving someone else’s data instead.

Aporia’s new Guardrails for Multimodal AI give engineers the ability to add a much-needed layer of security and control between the application and the user. The guardrails enforce a defined, fully customizable set of behavioral rules and operate at sub-second latency. This fully managed, low-maintenance solution can be set up in minutes and goes beyond what prompt engineering alone can achieve.

“Multimodal AI is a game-changer for the world we live in, but one that requires guardrails to ensure its safety, success, and ultimate adoption,” said Liran Hason, CEO and Co-Founder of Aporia. “Industries across the globe are coming to rely on AI, yet as many engineers are discovering, AI by itself is inherently unreliable. Our team has been working around the clock to provide the first-ever guardrail solution that will hugely enhance the safety and reliability of multimodal AI. Customer service agents are quickly being replaced with AI, but imagine what would happen without the human element between the AI and the end user. As we have seen before, disastrous accidents can occur quickly. Aporia Guardrails are the first solution to actively mitigate spoken and written responses in real time and support the human in the loop.”

Aporia’s Guardrails for Multimodal AI can detect and mitigate 94% of hallucinations in real time, before they reach users, providing a powerful layer between LLMs and AI applications. The solution also prevents the misuse of applications for malicious purposes, such as prompt injection or prompt leakage, which can expose sensitive information. The guardrails additionally prevent explicit and offensive language in user interactions, identifying inappropriate wording and phrasing and blocking it immediately.
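The article does not describe Aporia’s implementation, but the general pattern of a guardrail layer sitting between an application and an LLM can be sketched as follows. This is a minimal, hypothetical illustration: the function names, rule patterns, and blocked-word list are assumptions for demonstration only, not Aporia’s actual API.

```python
import re

# Illustrative prompt-injection patterns (assumption, not a real rule set)
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

# Placeholder list of disallowed words (assumption)
BLOCKED_WORDS = {"damn"}


def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Blocks prompts matching injection patterns."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"prompt injection matched: {pattern!r}"
    return True, "ok"


def check_response(response: str) -> tuple[bool, str]:
    """Return (allowed, reason). Blocks responses containing disallowed words."""
    words = set(re.findall(r"[a-z']+", response.lower()))
    if words & BLOCKED_WORDS:
        return False, "offensive language detected"
    return True, "ok"


def guarded_call(prompt: str, llm) -> str:
    """Run the LLM only if the prompt passes; then filter its response.

    `llm` is any callable taking a prompt string and returning a string.
    """
    ok, reason = check_prompt(prompt)
    if not ok:
        return f"[blocked: {reason}]"
    response = llm(prompt)
    ok, reason = check_response(response)
    if not ok:
        return f"[blocked: {reason}]"
    return response
```

In practice, a production guardrail system would replace the regex checks with learned classifiers and policy engines, but the control flow – inspect the prompt, inspect the response, block or pass each in real time – is the same middleware pattern described above.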

“With the release of multimodal applications, we knew we had to create a solution to protect emerging AI apps,” said Alon Gubkin, Co-Founder and CTO of Aporia. “At Aporia, we believe continuous research into risk and prevention measures must go hand-in-hand with AI development. Keeping AI safe is our main objective, which is why we are committed to developing solutions, like our Guardrails for Multimodal AI Applications, that allow AI engineers to reap all the benefits of this world-changing technology.”
