
A new scientific study warns that using artificial intelligence can erode our capacity for critical thinking. The research, carried out by a team of scientists from Microsoft and Carnegie Mellon University, found that depending on AI tools without questioning their validity reduces the cognitive effort applied to the work. In other words: AI can make us dumber if we use it wrong.

“AI can synthesize ideas, enhance reasoning, and encourage critical engagement, pushing us to see beyond the obvious and challenge our assumptions,” Lev Tankelevitch, a senior researcher at Microsoft Research and coauthor of the study, tells me in an email interview.

But to reap those benefits, Tankelevitch says, users need to treat AI as a thought partner, not just a tool for finding information faster. Much of this comes down to designing a user experience that encourages critical thinking rather than passive reliance. By making AI’s reasoning processes more transparent and prompting users to verify and refine AI-generated content, a well-designed interface can support human judgment rather than substitute for it.

From ‘task execution’ to ‘task stewardship’

The research—which surveyed 319 professionals—found that high confidence in AI tools often reduces the cognitive effort people apply to their work. “Higher confidence in AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking,” the study states. This over-reliance stems from a mental model that assumes AI is competent in simple tasks. As one participant admitted in the study, “it’s a simple task and I knew ChatGPT could do it without difficulty, so I just never thought about it.” Critical thinking didn’t feel relevant because, well, who cares. 

This mindset has major implications for the future of work. Tankelevitch tells me that AI is shifting knowledge workers from “task execution” to “task stewardship.” Instead of manually performing tasks, professionals now oversee AI-generated content, making decisions about its accuracy and integration. “They must actively oversee, guide, and refine AI-generated work rather than simply accepting the first output,” Tankelevitch says.

The study highlights that when knowledge workers actively evaluate AI-generated outputs rather than passively accepting them, they can improve their decision-making processes. “Research also shows that experts who effectively apply their knowledge when working with AI see a boost in output,” Tankelevitch points out. “AI works best when it complements human expertise—driving better decisions and stronger outcomes.”

The study found that many knowledge workers struggle to critically engage with AI-generated outputs because they lack the necessary domain knowledge to assess their accuracy. “Even if users recognize that AI might be wrong, they don’t always have the expertise to correct it,” Tankelevitch explains. This problem is particularly acute in technical fields where AI-generated code, data analysis, or financial reports require deep subject matter knowledge to verify.

The cognitive offloading paradox

Confidence in AI can lead to what researchers call cognitive offloading. The phenomenon isn’t new: humans have long outsourced mental tasks to tools, from calculators to GPS devices. Nor is it inherently negative. When done correctly, it allows users to focus on higher-order thinking rather than mundane, repetitive tasks, Tankelevitch points out.

But the very nature of generative AI, which produces complex text, code, and analysis, introduces new kinds of potential mistakes and problems. Many people might blindly accept AI outputs without questioning them (and quite often those outputs are flawed or just plain wrong). This is especially the case when people feel the task is not important. “Our study suggests that when people view a task as low-stakes, they may not review outputs as critically,” Tankelevitch points out.

The role of UX

AI developers should keep that idea in mind when designing AI user experiences. Chat interfaces should be organized in a way that encourages verification, prompting users to think through the reasoning behind AI-generated content.

Redesigning AI interfaces to aid in this new “task stewardship” process and encourage critical engagement is key to mitigating the risks of cognitive offloading. “Deep reasoning models are already supporting this by making AI’s processes more transparent—making it easier for users to review, question, and learn from the insights they generate,” he says. “Transparency matters. Users need to understand not just what the AI says, but why it says it.”

You have probably seen this in an AI platform like Perplexity. Its interface lays out a clear logical path that outlines the thoughts and actions the AI takes to arrive at a result. By redesigning AI interfaces to also include contextual explanations, confidence ratings, or alternative perspectives when needed, AI tools can shift users away from blind trust and toward active evaluation of the results. Another UX intervention may involve actively questioning the user about key aspects of the AI-generated output, prompting them to question and refine those outputs rather than passively accepting them. The final product of this open collaboration between AI and human is better, just as creative work often improves when two people collaborate as a team, especially when the strengths of one complement the strengths of the other.
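
To make that kind of intervention concrete, here is a minimal sketch, in TypeScript and with entirely hypothetical names, of how a chat interface might attach reasoning steps, sources, and a confidence label to each answer and decide when to nudge the user to verify it. The article describes the idea only conceptually, so this reflects no actual product’s design; it is just one way the pattern could look.

```typescript
// Hypothetical sketch: data a chat UI could attach to each AI answer
// to encourage "task stewardship" instead of passive acceptance.

type Confidence = "low" | "medium" | "high";

interface AiAnswer {
  text: string;             // the generated answer shown to the user
  reasoningSteps: string[]; // transparent trace of the steps the model took
  confidence: Confidence;   // model- or heuristic-derived confidence label
  sources: string[];        // citations the user can check
}

// Decide whether the UI should actively prompt the user to verify the output.
// Assumption: low confidence, missing sources, or a high-stakes task all warrant a nudge.
function shouldPromptVerification(answer: AiAnswer, highStakesTask: boolean): boolean {
  return highStakesTask || answer.confidence !== "high" || answer.sources.length === 0;
}

// Render the answer together with its reasoning trace and, when needed, a verification nudge.
function renderAnswer(answer: AiAnswer, highStakesTask: boolean): string {
  const lines = [answer.text, "", "How this was derived:"];
  answer.reasoningSteps.forEach((step, i) => lines.push(`  ${i + 1}. ${step}`));
  if (shouldPromptVerification(answer, highStakesTask)) {
    lines.push("", "Check this before using it: review the steps above and the cited sources.");
  }
  return lines.join("\n");
}

// Example usage with made-up content
const example: AiAnswer = {
  text: "Quarterly revenue grew 12% year over year.",
  reasoningSteps: ["Parsed the uploaded spreadsheet", "Compared Q3 2023 vs. Q3 2024 totals"],
  confidence: "medium",
  sources: [],
};
console.log(renderAnswer(example, true));
```

The design choice worth noting is that the interface, not the user’s goodwill, decides when verification is explicitly requested, which is exactly the shift from blind trust to active evaluation the study calls for.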

Some will get dumber

The study raises crucial questions about the long-term impact of AI on human cognition. If knowledge workers become passive consumers of AI-generated content, their critical thinking skills could atrophy. However, if AI is designed and used as an interactive, thought-provoking tool, it could enhance human intelligence rather than degrade it.

Tankelevitch points out that this is not just theory; it has been demonstrated in the field. Studies show that AI can boost learning when used in the right way, he says. “In Nigeria, an early study suggests that AI tutors could help students achieve two years of learning progress in just six weeks,” he notes. “Another study showed that students working with tutors supported by AI were more likely to master key topics.” The key, Tankelevitch tells me, is that this was all teacher-led: “Educators guided the prompts and provided context,” thus encouraging that vital critical thinking.

AI has also demonstrated that it can enhance problem-solving in scientific research, where experts use it to explore complex hypotheses. “Researchers using AI to assist in discovery still rely on human intuition and critical judgment to validate results,” Tankelevitch notes. “The most successful AI applications are those where human oversight remains central.”

Given the current state of generative AI, the technology’s effect on human intelligence will not depend on the AI itself, but on how we choose to use it. UX designers can certainly help promote good behavior, but it’s up to us to do the right thing. AI can either amplify or erode critical thinking, depending on whether we critically engage with its outputs or blindly trust them. The future of AI-assisted work will be determined not by the sophistication of the technology but by humans. My bet is that, as with every other technological revolution in the history of civilization, some people will get a lot dumber and others will get a lot smarter.

