How AI Simulated the Evolution of Eyes and Brains

What if the way animals see the world had developed differently? Researchers at MIT have used artificial intelligence to explore this idea - not through imagination, but through simulation.

Image: extreme macro of an insect head. Study: What if eye...? Computationally recreating vision evolution. Image Credit: vasekk/Shutterstock.com

Researchers at the Massachusetts Institute of Technology (MIT) have used artificial intelligence to simulate how eyes might have evolved differently under alternate evolutionary pressures.

By allowing virtual agents to evolve both their eyes and behaviors in response to specific tasks, the team explored how visual systems adapt to solve different problems, offering rare, causal evidence for long-standing theories in vision science.

The results help explain how optical systems balance trade-offs like sharpness versus brightness, how visual acuity scales with brain power, and how different environments can lead to strikingly different eye designs.

In short, the study presents a new way to explore evolutionary "what-ifs" that traditional biology alone can’t answer.

A New Way to Study Eye Evolution

For decades, scientists have studied vision by comparing the wide variety of eyes found in nature, from the compound eyes of insects to the high-acuity eyes of birds and mammals.

These comparisons offer clues about the pressures that shaped them, but they can't test what would have happened under different conditions. Evolution only runs once, and we can’t replay it.

To address this limitation, the MIT team turned to AI.

By simulating evolution in virtual environments, they were able to watch entirely new eye designs emerge from scratch. The AI didn't just model how eyes look; it evolved them in context, alongside the behaviors that depend on them.

This approach allowed the research team to ask some key questions: What kinds of eyes would evolve if the environment demanded something entirely different? And what happens when you place constraints on the neural resources available to process visual information?

Inside the Simulation: Evolving Eyes and Brains Together

The team developed a two-part computational framework, comprising one loop for learning and another for evolution.

In the learning loop, each AI agent was trained using reinforcement learning to solve a specific visual task, such as navigating a maze or detecting objects. The agent’s brain (a neural network) learned how to interpret the visual input from its eyes and respond accordingly.

In the evolution loop, agents were scored based on how well they performed. Those with better performance were used to generate the next generation of agents. Their digital genomes were divided into three categories:

  • Morphological genes: Controlling eye number, position, and field of view
  • Optical genes: Controlling lens characteristics and aperture size
  • Neural genes: Defining processing capacity and memory
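The article doesn't give the paper's exact genome encoding, but the three gene groups above can be illustrated with a small sketch. The field names, value ranges, and mutation scheme below are hypothetical, chosen only to show how morphological, optical, and neural traits could be stored and mutated independently between generations.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Genome:
    # Morphological genes: eye number, placement, and field of view
    num_eyes: int
    field_of_view_deg: float
    # Optical genes: lens characteristics and aperture size
    aperture_mm: float
    lens_power: float          # 0.0 ~ pinhole, larger ~ stronger focusing lens
    # Neural genes: processing capacity and memory
    hidden_units: int
    memory_steps: int

def mutate(g: Genome, rate: float = 0.2) -> Genome:
    """Perturb each gene group independently with probability `rate`."""
    changes = {}
    if random.random() < rate:
        changes["num_eyes"] = max(1, g.num_eyes + random.choice([-1, 1]))
    if random.random() < rate:
        changes["field_of_view_deg"] = min(360.0, max(10.0, g.field_of_view_deg * random.uniform(0.8, 1.2)))
    if random.random() < rate:
        changes["aperture_mm"] = max(0.01, g.aperture_mm * random.uniform(0.7, 1.3))
    if random.random() < rate:
        changes["lens_power"] = max(0.0, g.lens_power + random.gauss(0.0, 0.1))
    if random.random() < rate:
        changes["hidden_units"] = max(4, int(g.hidden_units * random.uniform(0.75, 1.25)))
    if random.random() < rate:
        changes["memory_steps"] = max(0, g.memory_steps + random.choice([-1, 1]))
    return replace(g, **changes)
```

Selection then simply keeps the genomes whose trained agents scored best and fills the next generation with their mutated copies.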

These components evolved independently, allowing the system to explore a vast space of possible eye-and-brain combinations, over 10²⁰ in total.

To keep the simulation realistic, the agents perceived their surroundings through a physically based imaging model. Light passed through their evolved eye structures, forming images on a simulated retina. This meant agents had to contend with real-world optical trade-offs, such as sharper images requiring smaller apertures, which in turn let in less light.
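The sharpness-versus-brightness trade-off can be made concrete with a toy pinhole model. The relationships below are standard first-order optics (geometric blur grows with aperture, diffraction blur shrinks with it, light gathered scales with aperture area); the specific numbers are illustrative and are not taken from the paper.

```python
import numpy as np

wavelength_mm = 550e-6        # green light, expressed in millimetres
depth_mm = 20.0               # pinhole-to-retina distance
apertures_mm = np.linspace(0.05, 2.0, 200)

# Geometric blur: a larger hole projects each scene point as a larger spot.
geometric_blur = apertures_mm

# Diffraction blur: a smaller hole spreads light more (~ wavelength * depth / aperture).
diffraction_blur = wavelength_mm * depth_mm / apertures_mm

# Total blur (rough quadrature sum) and relative brightness (~ aperture area).
total_blur = np.sqrt(geometric_blur**2 + diffraction_blur**2)
brightness = apertures_mm**2

best = np.argmin(total_blur)
print(f"Sharpest pinhole is ~{apertures_mm[best]:.2f} mm wide, "
      f"yet it gathers only {brightness[best] / brightness[-1]:.1%} "
      f"of the light admitted by the largest aperture tested.")
```

A lens escapes this bind by focusing a wide, bright aperture back down to a sharp spot, which is exactly the transition explored in the two-phase experiment described below.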

The entire setup ran on the MuJoCo physics engine, with learning driven by the PPO (Proximal Policy Optimization) algorithm. All experiments were run multiple times with controlled randomness to ensure robustness and repeatability.
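The article names MuJoCo and PPO but not the full software stack. As a rough stand-in, the sketch below trains a small PPO policy on an off-the-shelf Gymnasium MuJoCo task across several seeds. The use of stable-baselines3 and the "Ant-v4" task are assumptions for illustration, not the authors' code, and the hyperparameters are placeholders.

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

def train_and_score(seed: int, timesteps: int = 50_000) -> float:
    """Inner 'learning loop': train one agent with PPO and return its mean reward."""
    env = gym.make("Ant-v4")      # placeholder MuJoCo task, not the paper's environment
    model = PPO("MlpPolicy", env, seed=seed, verbose=0)
    model.learn(total_timesteps=timesteps)
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=10)
    env.close()
    return mean_reward

# Repeat with controlled randomness, as the study did, to check robustness.
scores = [train_and_score(seed) for seed in (0, 1, 2)]
print(f"mean reward over seeds: {np.mean(scores):.1f} +/- {np.std(scores):.1f}")
```

In the full framework, this inner loop would run once per genome per generation, and the resulting score would drive selection in the outer evolution loop.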

What the Agents Revealed About Eye Evolution

The researchers set out to understand how different visual tasks might shape the evolution of eye design and found that even simple agents, when placed under task-specific pressures, developed surprisingly distinct visual systems.

Agents trained for navigation evolved wide-field, low-acuity vision, much like the distributed eyelets seen in many insects. Meanwhile, those focused on object detection developed forward-facing, high-acuity eyes, resembling simple camera-like systems.

Despite starting from the same basic eye structure, each agent evolved a unique solution tailored to its goal, driven purely by the demands of the task.

Some of these solutions even echoed rare biological designs. One, for instance, mirrors the eyes of strepsipteran insects, where each eye unit contains multiple photoreceptors (an unusual feature compared to standard compound eyes).

To take the study a step further, the team ran a two-phase experiment to explore how optical structures evolve under different constraints:

  1. In Phase One, agents were only allowed to adjust pupil size. Most ended up with pinhole-style eyes: sharp image formation, but poor light collection.
  2. In Phase Two, the system permitted the evolution of lenses and refractive elements. Almost immediately, agents developed focused lenses that improved brightness while preserving clarity - a key step also seen in biological evolution.

What stood out was that this transition didn’t require any manual guidance. The shift emerged naturally from the agents’ need to perform better in dim conditions, showing how complex optical features can arise from functional necessity alone.

The researchers also examined the relationship between vision and neural resources.

One of the most revealing insights was a power-law scaling effect: as visual acuity improved, the neural capacity needed to interpret that information had to grow with it, following a power law. High-resolution eyes were only as useful as the brainpower available to process their input.
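A power-law relationship like this is easiest to see on log-log axes, where it becomes a straight line. The short sketch below fits such a line to made-up (acuity, capacity) pairs; the numbers are purely illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical pairs: visual acuity vs. the neural capacity needed to exploit it.
acuity = np.array([4, 8, 16, 32, 64], dtype=float)
capacity = np.array([12, 30, 70, 180, 420], dtype=float)

# Fit capacity ≈ c * acuity**k via a linear fit in log-log space.
k, log_c = np.polyfit(np.log(acuity), np.log(capacity), deg=1)
print(f"capacity ≈ {np.exp(log_c):.1f} * acuity^{k:.2f}")
```

The exponent k is the interesting quantity: it describes how steeply the processing bill rises as eyes get sharper.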

Interestingly, in fast-changing environments like object tracking, agents with limited neural resources still performed well, but only if they had strong temporal memory. By retaining recent visual information, they could compensate for reduced real-time processing power, a strategy that mirrors how some small-brained animals manage perception.
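The article doesn't specify how that memory was implemented. One simple, generic way to give an agent short-term visual memory is to stack its last few frames into a single observation; the sketch below shows that technique, not the paper's mechanism.

```python
from collections import deque
import numpy as np

class FrameMemory:
    """Keeps the last `n` observations and returns them stacked as one input."""
    def __init__(self, n: int, frame_shape: tuple):
        self.n = n
        self.frames = deque(maxlen=n)
        self.blank = np.zeros(frame_shape, dtype=np.float32)

    def reset(self) -> None:
        self.frames.clear()

    def observe(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(frame.astype(np.float32))
        # Pad with blanks until the buffer fills, so the output shape stays constant.
        padded = [self.blank] * (self.n - len(self.frames)) + list(self.frames)
        return np.concatenate(padded, axis=-1)

# Example: a 3-frame memory over tiny 8x8 grayscale retinal images.
memory = FrameMemory(n=3, frame_shape=(8, 8, 1))
stacked = memory.observe(np.random.rand(8, 8, 1))
print(stacked.shape)   # (8, 8, 3): current frame plus two (initially blank) past frames
```

A recurrent network achieves the same end with learned internal state; either way, the agent trades a little extra memory for the real-time processing it lacks.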

Why This Matters

This study highlights the value of using AI agents as experimental models for evolution. By allowing visual systems to develop in response to real tasks and environmental constraints, the researchers were able to observe how structure and function adapt together.

Crucially, the agents weren’t programmed to seek specific outcomes; they instead uncovered core evolutionary trade-offs on their own. Patterns like the balance between brightness and resolution, or visual acuity and processing power, emerged naturally through task performance, not pre-defined goals.

While the focus was on vision, the framework could be extended to other senses, more complex behaviors, or multitasking systems. For scientists, it offers a novel way to test evolutionary hypotheses. For engineers, it suggests new paths for building adaptive, biologically inspired vision systems.

Conclusion

MIT’s study offers an interesting look at how evolution might have played out differently under alternate conditions, and how artificial intelligence can help us explore those paths.

By simulating entire evolutionary paths in virtual environments, the researchers showed how environmental demands shape both the design of eyes and the brainpower needed to use them.

From basic pinhole vision to high-resolution lenses, from wide-angle scanning to narrow, focused attention, these AI agents uncovered a range of solutions driven entirely by function. The work not only supports existing evolutionary theories but also introduces a flexible platform for testing new ones across biology, neuroscience, and artificial intelligence.

Journal Reference

Tiwary et al. (2025). What if eye...? Computationally recreating vision evolution. Science Advances, 11(51). https://doi.org/10.1126/sciadv.ady2888

