AI’s Greatest Threat to Nature May Be Societal, Not Technological, Say Cambridge Researchers

Artificial intelligence may offer powerful tools for conservation, but researchers at the University of Cambridge warn that its biggest danger lies in how it reshapes society, not in the technology itself. They are calling for urgent, inclusive governance to ensure AI genuinely supports the planet's most vulnerable ecosystems.

In a recent article from the University of Cambridge, scientists cautioned that while AI holds enormous promise for biodiversity protection, its unchecked use could end up harming the very ecosystems it's meant to help.

AI can process huge datasets and generate valuable insights, but it’s not a self-sufficient solution. Key concerns include its significant energy demands, the risk of deepening global inequalities, and broader societal shifts that could indirectly accelerate environmental degradation.

The Need for Human Intelligence and Ethics

The rapid progress of AI has sparked excitement across fields like conservation, where it’s often seen as a way to massively extend human capacity. But researchers at Cambridge’s Conservation Research Institute stress that this excitement must be tempered with critical thinking.

Blindly trusting AI, they argue, is like following a SatNav into a ditch: technology must be guided by human judgment and ethics.

AI is trained on human knowledge, and in conservation that knowledge often comes from working closely with local communities and governments. Those perspectives need to be embedded in any AI-based system. Yet many conservation professionals still don't fully understand what AI is or how it could help their work, highlighting a major gap in adoption.

The core challenge is to use AI wisely and inclusively, making sure it benefits all stakeholders, not just those with access to advanced tech.

The Dual-Edged Sword of AI in Conservation

An international "Horizon Scan," led by Professor Bill Sutherland, brought together conservationists and AI experts to identify 21 high-impact ways AI could advance conservation, selected from an original list of 104. These range from existing tools like species-identifying apps to more advanced proposals, such as using undersea fibre-optic cables as enormous marine microphones. By detecting signal disturbances caused by marine life, this system could help monitor hard-to-track species like whales and seals in real time.

But alongside the promise comes significant risk. AI infrastructure is resource-intensive, demanding supercomputers, specialized skills, and large amounts of electricity, and it is largely concentrated in wealthier countries. This creates a serious equity gap.

As Dr. Sam Reynolds points out, many of the Earth’s most critical ecosystems are in the Global South, where local communities often lack access to these technologies. This can result in data being extracted from the South, processed in the North, and then used to dictate policies or land use without meaningful input from those on the ground.

This imbalance can also skew funding. Flashy, AI-heavy projects led by elite institutions often get prioritized over grassroots, community-led conservation work. If left unregulated, AI could reinforce the very power structures that have historically undermined effective conservation efforts.

Societal Ripples and a Cautious Path Forward

Professor Chris Sandbrook argues that focusing only on AI’s direct conservation uses misses the bigger picture. AI-driven societal shifts could have wide-ranging, indirect effects on biodiversity. For example, AI-assisted precision agriculture could reduce land use, a major win for ecosystems. But on the flip side, AI could contribute to medical advances that extend life expectancy, mainly in wealthy countries, leading to greater resource consumption and, ultimately, more pressure on the natural world.

There's also the risk of power consolidating in the hands of a few large tech companies, which could then influence policy and resist environmental regulation.

To address this, Cambridge researchers are working with policymakers, CEOs, and conservation groups to raise awareness of how AI’s broader societal impacts could ripple through the natural world.

Dr. Reynolds, for instance, has proposed frameworks for fair and effective AI deployment, with an emphasis on real-world testing and learning. His team developed an AI “Conservation Co-Pilot” that draws from two decades of expert conservation knowledge. The project highlighted both the value and the limitations of AI, particularly how much depends on the quality and source of the training data.

Reynolds has shared these insights with organizations including the Wildlife Trusts, the Chartered Institute of Ecology and Environmental Management (CIEEM), and BP, showing growing cross-sector interest in responsible AI use. But the current market remains a “Wild West,” where tools are often sold to conservation groups that don’t fully understand their capabilities or risks.

The path forward is not rejection but thoughtful integration. AI can unlock possibilities once out of reach, but it comes with challenges that must be addressed through informed, inclusive collaboration.

Conclusion

AI won’t save nature on its own. It needs our guidance. As a tool, it holds enormous potential, but it also brings real risks. The future of conservation depends on taking a clear-eyed, ethical, and equitable approach. That means asking hard questions about who benefits, who’s left out, and how the technology affects ecosystems both directly and indirectly.
