Leading Harvard scholars have shared their perspectives on the relationship between artificial intelligence (AI) and human cognition, discussing AI's potential to both aid and hamper critical thinking, the risks of cognitive decline from overreliance, and the conditions under which AI can serve as a productive tool.
A small, non-peer-reviewed MIT Media Lab study has sounded a note of caution, suggesting that excessive reliance on AI-driven solutions may contribute to cognitive decline and diminished critical thinking.
Even artificial intelligence assistants themselves seem to echo this concern. When asked directly, ChatGPT stated that the outcome “depends on how we engage with it: as a crutch or a tool for growth.”
To explore this complex issue, experts from a range of disciplines, including education, philosophy, and teaching and learning, were consulted.
Their collective insights examine how AI can either augment or undermine the intricate processes of the human mind, framing the current moment as a pivotal one for defining our relationship with intelligent technology.
The Comparative Strengths of Humans and AI
Experts argue that human cognition, while computational, possesses distinct advantages over current AI. Tina Grotzer, a Principal Research Scientist in Education, contends that human minds are “better than Bayesian.” She cites neuroscience work, such as that of Antonio Damasio, which shows how somatic markers enable rapid, intuitive leaps.
Research from her lab found that kindergarteners used strategic information to make informed moves in a game more quickly than a purely Bayesian model would predict. Human minds can also detect the critical distinctions and exceptions to patterns that drive conceptual change, a capacity that goes beyond AI's aggregation of data.
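To make that baseline concrete, the sketch below shows sequential Bayesian belief updating in Python. It is purely illustrative: the two-rule game, the 0.7/0.3 likelihoods, and the number of rounds are hypothetical assumptions, not details from Grotzer's study. The point is that a purely Bayesian learner commits to a conclusion only gradually, as evidence accumulates round by round.

    # Illustrative sketch: a purely Bayesian learner inferring which of two
    # hypothetical rules governs a game from noisy observations.

    def bayesian_update(prior: float, likelihood_a: float, likelihood_b: float) -> float:
        """Return P(rule A | evidence) from the prior P(rule A) and the
        likelihood of the observation under each rule (Bayes' theorem)."""
        numerator = prior * likelihood_a
        return numerator / (numerator + (1 - prior) * likelihood_b)

    # Start undecided between the two rules.
    p_rule_a = 0.5

    # Each observation only mildly favors rule A (hypothetical 0.7 vs. 0.3),
    # so confidence builds gradually rather than in a single intuitive leap.
    for round_number in range(1, 6):
        p_rule_a = bayesian_update(p_rule_a, likelihood_a=0.7, likelihood_b=0.3)
        print(f"Round {round_number}: P(rule A) = {p_rule_a:.3f}")

Running this prints a belief that climbs from 0.700 to about 0.986 over five rounds; Grotzer's claim is that children exploited strategic cues to act decisively faster than this kind of incremental updating.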
She also notes that, while AI can offer analogies, it cannot reason analogically, an ability that remains uniquely human.
Fawwaz Habbal, a Senior Lecturer on Applied Physics, reinforces this view, stating that while AI excels at data processing and statistics, it lacks the ability to create truly innovative solutions. He emphasizes that AI systems rely on data created by humans, which leads different platforms to produce similar answers because their underlying databases are substantially the same.
Critical thinking, he asserts, requires human experience, insight, and ethical and moral reasoning, capacities that machines currently lack because their processes are merely recursive. The development of leadership and the ability to add new value to society remain human enterprises.
Navigating the Dual Potential of AI as a Tool or a Crutch
The consensus among experts is that the effect of AI on learning is not intrinsic but depends entirely on its application.
Dan Levy, a Senior Lecturer in Public Policy, clarifies that there is no such thing as “AI is good for learning” or “AI is bad for learning.” The critical factor is active mental engagement. If a student uses AI to do the work for them rather than with them, no meaningful learning occurs.
Levy draws a crucial distinction between the output of an assignment and the ultimate goal of learning, warning that conflating the two leads to counterproductive AI use. AI may help students save time on repetitive tasks, but without active engagement, he notes, that time-saving advantage can backfire.
Christopher Dede, a Senior Research Fellow, offers the metaphor of AI as the owl on the shoulder of Athena, the goddess of wisdom. The key is not to let the owl do your thinking for you. He warns that using AI just to do the same old stuff better and quicker results in a faster way of doing the wrong thing. He cautions that practices like using AI to write a first draft can undercut critical thinking and creativity, potentially leading to homogenized outputs.
Karen Thornber, a Professor of Literature and Faculty Director, compares the effect to that of turn-by-turn navigation systems, which have eroded our detailed mental maps of cities. She warns that the ease of using large language models (LLMs) may let us avoid engaging in challenging mental skills, making it difficult to persuade students to develop those skills in the first place.
Jeff Behrends, a Senior Research Scholar in Philosophy, expresses deep worry, pointing to cautionary tales from other cognitive tools: taking notes longhand leads to greater recall than typing, and predictive text changes our word choices. Given these trends, he states he would be stunned if frequent use of LLMs did not alter how users approach reasoning tasks. Behrends cautions against hype that presents LLMs as limitless general reasoners, emphasizing that it is in the interest of technology producers to promote this view.
Conclusion
In conclusion, the experts converge, each in their own way, on one clear, albeit complex, path forward.
The evidence suggests that overreliance on AI poses a demonstrable threat to critical thinking, a concern supported by early-stage studies and parallels with earlier cognitive tools.
However, AI's capacity to process vast datasets also presents a unique opportunity to augment human reasoning if applied judiciously.
The ultimate impact rests on a critical choice: engage with AI as a subordinate tool that handles computational tasks, thereby freeing the human mind for its unique strengths of judgment, creativity, and ethical reasoning, or allow it to become a crutch that gradually dulls the very cognitive abilities it was designed to complement.
The responsibility for that choice lies squarely with the human user.