MIT Alumni Launch Coactive, an AI Platform Tackling Unstructured Visual Data for Businesses

An AI platform developed by MIT alumni is helping businesses unlock the value of unstructured visual data by automating tagging, search, and analytics across images, video, and audio.


Image Credit: NicoElNino/Shutterstock.com

Coactive is an AI-powered platform founded by MIT alumni Cody Coleman and William Gaviria Rojas. It is built to help businesses make sense of unstructured visual data—images, video, and audio—through automated tagging, search, and analytics.

By addressing the limitations of manual processing, the platform is already proving useful to media and retail companies looking to improve content delivery, moderate explicit material, and extract behavioral insights. The founders envision a future where AI enhances human-machine collaboration, reshaping how enterprises interact with multimodal data.

The Challenge: Making Sense of Visual Data

Although today’s businesses thrive on data, most still struggle to make use of visual content. Unstructured data—including images, videos, and audio—makes up an estimated 80–90% of global data, yet it remains largely inaccessible because manual processing cannot keep pace. Traditional tagging is slow, labor-intensive, and prone to inconsistency.

Coactive addresses this gap by automating the analysis of visual inputs. The idea grew from the founders' backgrounds in electrical engineering and computer science at MIT, combined with experiences in projects like OpenCourseWare and advanced AI research at Stanford. Their goal was to help machines "see" and understand images the way people do.

What sets Coactive apart is its model-agnostic framework. Rather than being locked into a single AI model, the platform evolves alongside new technologies, applying the best available tools to extract metadata, organize content, and surface insights. Early users like Reuters and Fandom are already seeing the benefits, from faster newsroom image searches to real-time moderation of community content.

How Coactive Works

Think of Coactive as an operating system for unstructured data. It brings together search, organization, and analytics into one cohesive toolset. Instead of requiring manual metadata entry, Coactive’s AI analyzes visual content directly, recognizing objects, scenes, and contextual cues.

For Reuters, this means journalists can now find the right image by simply typing a natural-language query like “climate protest in Europe.” Fandom, on the other hand, uses Coactive to instantly detect and flag content that violates community guidelines—a task that previously took days.

Because the system is model-agnostic, it can easily incorporate the latest advances in AI. Its semantic search interprets complex queries like “happy customers outdoors” and returns images that reflect the intended meaning, not just literal keywords. Meanwhile, the platform auto-generates metadata tags—emotions, objects, settings—making media archives more discoverable. On top of that, built-in analytics help teams track how visuals influence user engagement, uncovering patterns that inform future content strategies.
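Semantic search of this kind is typically built on a shared embedding space, where a vision-language model maps both text queries and images to vectors, and results are ranked by similarity. Coactive has not published its internals, so the sketch below is only an illustration of the general technique: the file names and three-dimensional vectors are hypothetical stand-ins for real model embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings, as if produced by a shared text/image model.
image_index = {
    "beach_party.jpg":  [0.9, 0.9, 0.1],  # smiling people, outdoors
    "park_picnic.jpg":  [0.7, 0.8, 0.3],  # smiling people, outdoors
    "office_queue.jpg": [0.2, 0.1, 0.9],  # indoor, neutral scene
}
query_embedding = [0.85, 0.85, 0.15]      # "happy customers outdoors"

# Rank images by how close they sit to the query in the embedding space,
# rather than by matching literal keywords in their file names or tags.
results = sorted(
    image_index,
    key=lambda name: cosine_similarity(image_index[name], query_embedding),
    reverse=True,
)
print(results)  # outdoor "happy" images outrank the office shot
```

In a production system the toy dictionary would be replaced by a vector index over millions of embeddings, but the ranking principle is the same: meaning, not keywords, determines the match.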

A Shift in How Humans and Machines Interact

Beyond its technical features, Coactive reflects a larger shift in human-computer interaction. For decades, users had to conform to the limitations of machines—typing commands, writing rules, and navigating rigid interfaces. With AI-powered platforms like Coactive, that is changing. Now, people can engage with systems through natural language and visuals, making technology more intuitive and accessible.

This evolution is already visible in the field: journalists describe what they’re looking for instead of sorting through folders, and content moderators rely on AI to handle high-volume uploads. But the potential stretches further, from analyzing customer-uploaded images in retail to curating dynamic video lessons in education.

What ties it all together is the founders' mission to make cutting-edge AI usable for organizations without in-house expertise. Coleman compares the shift to the early days of the “big data” movement—only now, it’s about making sense of images, video, and sound. While challenges like AI transparency remain, Coactive’s success suggests a promising path forward: one where human insight and machine scalability go hand in hand.

Conclusion

Coactive is helping redefine how businesses interact with visual data. By automating the heavy lifting, it turns unstructured content into a valuable, actionable asset. Whether used to streamline media workflows or reveal customer behavior patterns, the platform shows how AI can amplify human decision-making while making complex data more accessible.

As more industries look for ways to work smarter with multimodal content, tools like Coactive may become essential bridges between digital systems and real-world understanding.


Citations

Nandi, Soham. (2025, June 20). MIT Alumni Launch Coactive, an AI Platform Tackling Unstructured Visual Data for Businesses. AZoRobotics. Retrieved on June 20, 2025 from https://www.azorobotics.com/News.aspx?newsID=16054.
