Researchers from Harvard have been exploring the profound impact artificial intelligence (AI) is having on humanity.
While some panelists warned of intellectual disempowerment, others proposed redesigning technology to foster community and shared meaning. The discussion also highlighted the need to cultivate "spiritual intelligence" - characterized by reverence and receptivity - within technological development, moving beyond a framework of domination and control.
Background
The rise of AI and its consequences for the human soul were the central theme of a recent interdisciplinary panel. The event, hosted by the new Public Culture Project, aimed to place humanist thinking at the center of contemporary technological debates.
Experts from computer science, sociology, comparative literature, and philosophy offered contrasting perspectives. The discussion moved beyond purely commercial or utilitarian concerns to address fundamental questions about human fulfillment, cognitive engagement, and the spiritual dimensions of our relationship with technology. Panelists investigated whether AI tools enhance or inhibit human agency, creativity, and our connection to community, revisiting enduring philosophical questions through the urgent lens of modern digital innovation.
The Cognitive and Personal Cost of Convenience
Research scientist Nataliya Kos’myna presented a compelling empirical study that highlighted the potential dangers of over-reliance on generative AI. In her experiment, 54 students were divided into three groups to write an essay: one group used ChatGPT, another used Google and the broader internet, and a third relied solely on their own knowledge.
The results were striking. The ChatGPT group exhibited "much less brain activity" during the task, suggesting a lower level of cognitive engagement. Furthermore, their essays were homogenized, primarily focusing on career choices as the source of happiness, unlike the other groups, who explored more diverse themes like giving and the nature of true happiness itself.
This cognitive disengagement had a profound personal consequence: a lack of ownership over the work. When asked to quote a line from their own essays just one minute after submission, a staggering 83% of the ChatGPT group could not recall anything, compared to only 11% in the other two groups. Kos’myna interpreted this to mean that the users "didn’t feel much ownership" and "didn’t remember, didn’t feel it was theirs."
She emphasized that "your brain needs struggle" to bloom - tasks that are too easy hinder true learning and engagement. This finding suggests that when AI removes that necessary struggle, it risks creating a passive relationship with knowledge, where information is consumed and regurgitated without being truly processed, understood, or made one’s own.
Moira Weigel, assistant professor of comparative literature at Harvard, provided historical context, observing that debates about automation and creativity date back to the 19th century. She posed enduring questions such as: What is the purpose of work? Should a “good” society seek to automate all labor? Do technologies expand our agency - or begin to control us? And when does technical craft become true art?
Reflecting on large language models in education, Weigel suggested they may help us rethink what is essential about learning and creativity, challenging assumptions about how different we truly are from machines.
Reimagining Technology for Community and Reverence
E. Glen Weyl of Microsoft Research advocated for moving beyond a problem-centric view, which he believes "disempowers us," and instead actively "redesigning systems" to foster community.
For instance, he critiqued the current, commercially driven model of social media, which often prioritizes individual engagement over collective experience. Weyl suggested engineering digital feeds to create a shared "theory of mind," akin to the unified experience of attendees at a concert. By making users aware that their community is encountering the same information simultaneously, technology could cultivate a sense of common understanding and shared meaning, much like iconic Super Bowl ads that sell a communal identity rather than just a product.
This vision of a more humane technology was expanded upon by sociologist Brandon Vaidyanathan, who introduced the concept of "spiritual intelligence." He contrasted the prevailing technological mindset - rooted in domination, extraction, and fragmentation - with values he sees reflected in many scientists: reverence, receptivity, and reconnection.
According to Vaidyanathan, scientific inquiry is often driven not just by curiosity, but by a profound sense of wonder, something deeply spiritual in nature. He illustrated this idea with the story of a researcher who examined the needle-like structure of Salmonella bacteria with the same reverence one might reserve for a sacred object. From this perspective, the core issue isn’t just how we use technology, but what values it reinforces.
To develop tools that foster reverence instead of control, Vaidyanathan argued, we may need to embrace intentional periods of disconnection, creating the mental space for deeper reflection and meaningful connection. Such a shift could fundamentally reorient how we relate to the digital tools we design and use.
Conclusion
The panel ultimately emphasized that AI's impact on humanity is not predetermined but is a matter of conscious choice and design.
The technology presents a duality. It can be a tool that fosters intellectual passivity and homogenization, or it can be harnessed to build community, shared meaning, and even spiritual reverence for the world. The path forward requires a deliberate effort to reshape our digital systems, moving beyond purely commercial and utilitarian models.