The Cognitive Debt of AI at Work: What the Research Actually Says About Working Adults

Three landmark studies. One uncomfortable pattern. A practical path forward for working adults who want to stay sharp while using AI every day.


In the MIT Media Lab's landmark EEG study, the majority of ChatGPT users could not correctly quote a single sentence from an essay they had written minutes earlier.

The brain-only group, working with no tools at all, had no such trouble. They remembered their own words. They could defend their own arguments. They felt ownership over their own work.

Same prompt. Same time limit. Same scoring rubric. The only variable was whether a human being had done their own thinking or handed it off to an AI assistant.

That gap is not an anti-AI talking point. It's a warning label for working adults, and it points to the single most important skill of the next decade: knowing when to think with AI, when to think alongside AI, and when to close the tab and think for yourself.

A note before we start

I run an AI education company. I teach teams across Michigan how to use these tools every single week. I have built production AI systems, shipped real applications, and work as a fractional AI leader for businesses across the Great Lakes Bay Region. I am not an AI skeptic.

I am also not pretending the research doesn't say what the research says.

The cognitive question isn't going away, and the more honest we are about it now, the better the next five years will go for everyone. This article is for working adults who want the actual data, what it means for your career, and what to do about it starting tomorrow morning.

Let's get into it.

Part 1: The Three Studies That Changed the Conversation

Three major studies published between January 2025 and June 2025 fundamentally reframed the debate about AI and cognition. They are not fringe. They come from MIT, Microsoft Research, Carnegie Mellon, and a peer-reviewed journal out of Switzerland. Every working adult should know what they found.

Study 1: MIT Media Lab, "Your Brain on ChatGPT"

Dr. Nataliya Kosmyna and her team at MIT's Media Lab ran an experiment that is uncomfortable to read if you use ChatGPT every day.

54 participants ages 18 to 39. Three groups. Brain-only, search engine, and LLM. Same SAT-style essay prompts across three sessions. EEG headsets capturing brain activity across 32 channels. A fourth session where they swapped conditions to see what happens when you remove the AI after someone has gotten used to leaning on it.

The findings were consistent and measurable.

Brain connectivity scaled down with the amount of external support. The brain-only group showed the strongest, most distributed neural networks. Search users showed moderate engagement. The LLM group showed the weakest overall coupling. Even the search engine group showed between 34% and 48% less dDTF connectivity than the brain-only baseline; LLM users dropped further still.

Memory ownership collapsed. LLM users could not correctly quote from the essays they had just finished writing. The brain-only group had no such difficulty. Self-reported ownership of the work followed the same pattern. The LLM group felt least connected to the writing they had produced, while the brain-only group reported the highest satisfaction and ownership.

The output got worse over time. By session three, many LLM users had essentially stopped participating in the writing. They pasted the prompt, requested edits, and called it done. Two independent English teachers described the essays as "soulless."

The part that matters most for working adults

In session four, the researchers swapped the groups. LLM users who had spent three sessions leaning on the tool were now asked to write unassisted. Their neural connectivity was weaker than the brain-only group's had ever been. They had lost something. The researchers called it "cognitive debt," and the phrase stuck because it is the most accurate description anyone has put on this phenomenon yet.

The brain-to-LLM swap showed the opposite result. Participants who had first learned to write without AI, then got access to it, showed enhanced brain connectivity across every frequency band. They used the tool better because they had built the underlying skill first.

The study has not yet been peer-reviewed. The sample is small. The task is specific. The authors themselves are careful to note these limits. But the effect sizes are large enough, and the pattern clean enough, that it should not be dismissed.

Study 2: Microsoft Research and Carnegie Mellon, 319 Knowledge Workers in the Wild

If the MIT study tells you what happens inside the brain, the Microsoft and Carnegie Mellon study (Lee et al., 2025, CHI Conference) tells you what happens inside the workflow.

Microsoft Research and CMU surveyed 319 knowledge workers who use AI tools like ChatGPT and Copilot at least once a week on the job. The participants logged 936 real-world examples of AI-assisted tasks across legal work, marketing, product management, software development, and more. The research team then mapped those tasks against a framework for critical thinking.

Two findings stand out.

Higher confidence in the AI was associated with less critical thinking. Workers who trusted the tool's output skipped verification steps. They moved faster. They also engaged less of their own judgment, and when they made errors, those errors often came from AI content they never bothered to scrutinize.

Higher self-confidence in the worker was associated with more critical thinking. The people who believed in their own expertise interrogated AI outputs. They pushed back on suggestions. They verified against external sources. They integrated AI as one input among several, not as an oracle.

This is the most important finding in the entire research literature for working adults, and I will say it plainly: your relationship with AI at work is mediated by your confidence in your own expertise. The less you trust yourself, the more you defer to the tool. The more you defer to the tool, the less you build the expertise that would have let you trust yourself. It is a loop. And it goes one of two directions.

"A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."

Lee et al., Microsoft Research & Carnegie Mellon, 2025

Atrophied and unprepared. Those are the words Microsoft's own researchers used to describe what heavy AI reliance does to professional workers. Take that seriously.

Study 3: Gerlich (Societies, January 2025), The 666-Participant Deep Dive

Dr. Michael Gerlich at SBS Swiss Business School ran the largest and most methodologically robust study of the three. 666 participants across three age groups (17 to 25, 26 to 45, and 46-plus) with varying education levels. A mixed-methods design pairing standardized critical thinking assessments (including the Halpern Critical Thinking Assessment) with 50 in-depth interviews.

Three findings matter for working adults.

Frequent AI use correlated negatively with critical thinking scores. The more someone used AI in daily life, the lower their performance on standardized critical thinking assessments. This was not a small effect. The correlation was strong and statistically significant.

Cognitive offloading was the mechanism. It wasn't AI exposure itself that drove the decline. It was the habit of outsourcing mental tasks to the tool rather than engaging with them. Two people can use AI equally often and end up in very different cognitive places depending on how they use it.

Higher education provided a buffer. Participants with more formal education showed smaller declines even at high levels of AI use. The researchers attributed this to learned habits of reflection, source evaluation, and skepticism. In other words, if you already know how to think carefully, AI doesn't take that from you as easily. If you don't, it can erode what little you had.

Younger participants, who were both the heaviest AI users and had the least cognitive scaffolding built up, showed the largest declines in critical thinking. That has implications for the entry-level workers coming into your company right now. It also has implications for any worker of any age who has built a daily AI habit without a corresponding habit of critical engagement.

By the numbers: 54 MIT participants, EEG-monitored across four sessions. 319 knowledge workers in the Microsoft/CMU survey of real-world AI use. 666 participants in Gerlich's mixed-methods study across three age groups.

Part 2: What Cognitive Offloading Actually Means

Before going further, let's name the mechanism clearly, because it's the single most useful concept in this entire conversation.

Cognitive offloading is the act of handing a mental task to an external system rather than performing it internally. Humans have done this forever. Writing is cognitive offloading. Calculators are cognitive offloading. Your GPS is cognitive offloading. Even your phone contacts list is cognitive offloading. You used to memorize phone numbers. You almost certainly don't anymore.

The research on older forms of cognitive offloading tells us two things.

One, offloading isn't automatically bad. Freeing your brain from memorizing phone numbers gave you room to do more important things. Calculators freed students to work on higher-order math concepts rather than getting stuck on arithmetic. There is a real and legitimate upside to handing certain tasks to tools.

Two, offloading has costs, and the costs are biggest when the offloaded task was the thing that built the skill in the first place. You can't learn to write well by handing your writing to an AI any more than you could learn to play piano by handing the keyboard to a player piano. The practice is the skill.

Why AI is different from every cognitive offload that came before it

A calculator hands off arithmetic so you can focus on reasoning about math. ChatGPT hands off the reasoning itself. When a working adult offloads the drafting of a contract clause, the structuring of a sales email, the analysis of a data set, or the articulation of a strategy memo, they are handing off the exact cognitive work that used to build the professional expertise the job depends on.

That is the cognitive debt MIT was talking about. You use the tool today, get the output faster, and ship the work. Over months and years, the cognitive muscles you would have built doing that work yourself get weaker. One day the tool fails, the context shifts, the stakes rise, and you reach for a capability that isn't there.

The research consensus is clear: AI amplifies whatever you already are. If you are a skilled professional using AI to accelerate work you understand deeply, you get faster and often better. If you are an early-career worker using AI to bypass the work that would have built your expertise, you are building debt.

Part 3: The Four AI User Types at Work

Pull the research together and four distinct user patterns emerge in working adults. Almost every person using AI at work falls into one of these categories.

Type 1: The Anchored Expert

Senior professional with deep domain expertise built pre-AI. Uses AI daily for speed, drafting, summarization, and exploration. Still verifies every meaningful output. Still brings independent judgment. Gains productivity without losing capability. This is the profile that McKinsey, Randstad, and BCG consistently identify as the top performer. Millennials and Gen X in established roles dominate this category.

Type 2: The Skill-Builder

Newer to the role or career. Uses AI intentionally as a learning scaffold rather than a replacement. Drafts without AI first, then uses AI to critique or expand. Treats AI as a tutor, not a ghostwriter. The brain-to-LLM group in the MIT study maps to this type. They build expertise and AI fluency in parallel. Smaller group than it should be.

Type 3: The Performer

Uses AI to produce high-volume, polished-looking output. Looks productive to managers. Passes surface review. Struggles the moment something unexpected happens, because the underlying reasoning was never theirs. Loses credibility in meetings where the AI can't help. This is the cognitive debt group. Looks good on the dashboard, buckles under pressure. The largest and fastest-growing category.

Type 4: The Fully Dependent

Cannot complete core tasks without AI. Cannot explain AI output in their own words. Cannot catch AI errors because they have no independent framework for judgment. Starting to struggle in conversations and meetings where AI is not available. This is the worst outcome of the MIT study scenario, and it's already showing up in the workplace among workers who adopted AI before they had built any real craft.

The distribution of these types inside your own company is the single most useful thing to know about your AI strategy. You don't have an AI problem. You have a people-in-AI-categories problem, and it requires a different intervention for each category.

Part 4: What the Research Says Working Adults Should Actually Do

Let's translate research into practice. Every point below is anchored in at least one of the three major studies, plus adjacent work from Microsoft's follow-up research, McKinsey, and BCG.

1. Build the skill first. Use the tool second.

This is the single most important lesson from the MIT study. The brain-to-LLM group outperformed the LLM-to-brain group by a wide margin. If you are entering a new domain, writing a new kind of document, tackling a new kind of analysis, do the first few versions without AI. It will be slower. It will be frustrating. It will also build the internal model that lets you actually evaluate AI output when you do bring the tool in.

If you are a new hire in any knowledge role, treat your first 90 days as the cognitive foundation phase. Write without AI. Analyze without AI. Present without AI. Then layer AI on top. The people who will own their careers in 2030 are the ones building this foundation now.

2. Keep your self-confidence higher than your AI-confidence.

The Microsoft study is unambiguous on this point. Workers who trusted their own expertise more than they trusted the tool engaged in more critical thinking and produced better work. Workers who trusted the tool more than themselves skipped verification and missed errors.

This is a mindset discipline. Every time you accept an AI output, ask: "Do I understand why this is correct? Could I defend this in a meeting without mentioning the AI?" If the answer is no, you are not yet qualified to ship that work. Go build the understanding first.

3. Create AI-free zones in your week.

The researchers at MIT and the Microsoft team both recommend what they call "thinking spaces," meaning intentional time blocks where you work without AI assistance. Do your weekly planning without AI. Write your most important strategy memos without AI. Brainstorm product ideas without AI for the first pass, then bring the tool in for critique.

I recommend working adults block at least 30% of their cognitive work as AI-free. That is not a number from a specific study. It is a field-tested ratio from teaching hundreds of business professionals in the Great Lakes Bay Region.

4. Use AI as a critic, not a creator.

The highest-performing knowledge workers in the Microsoft sample used AI disproportionately for verification, integration, and stewardship rather than generation. They wrote first, then asked the AI to poke holes. They drafted first, then asked for three critiques. They reasoned first, then asked for counter-arguments.

Generate-then-critique is the cognitive debt workflow. Critique-then-integrate is the skill-building workflow. Same tool. Entirely different cognitive outcome.

5. Explain AI output in your own words before you ship it.

If you cannot explain an AI output to a colleague without reading the AI output, you do not understand it well enough to stand behind it. This single test would eliminate most of the shallow AI work flooding into meetings, memos, and client deliverables right now.

Write the summary version in your own words. Rephrase the analysis in your own voice. If you can't, rewrite the AI output until you can. That translation is where the learning happens.

6. Track the skills you are and are not building.

Once a month, audit yourself. What could you do six months ago without AI that you can still do without AI today? What have you stopped being able to do unassisted? If you are losing capability faster than you are gaining it, your AI habit is net negative even if your output has increased.

The MIT "cognitive debt" framing is useful precisely because it forces you to think about the liability side of your AI ledger, not just the asset side.

Part 5: What Companies Should Actually Do

The working adult does not carry this problem alone. Companies create the conditions, and the conditions inside most organizations right now are making the cognitive problem worse, not better.

Redesign performance reviews around human judgment, not just output. Right now, most knowledge work performance evaluation measures throughput and surface quality, both of which AI inflates without building underlying capability. If you measure only output, you reward cognitive debt. Build into your evaluation framework: Can this person explain their work without AI? Can they handle exceptions? Can they teach the work to someone junior?

Invest in real AI training, not webinars. The Adecco Global Workforce Report found that only 25% of workers get formal AI training from their employer. That needs to be closer to 100%, and the training needs to be role-specific, hands-on, and oriented toward the generate-versus-critique distinction. Teaching people to write better prompts is not AI training. Teaching them how to remain cognitively engaged while using AI is AI training.

Create explicit AI-free deliverables. At least some categories of work inside your company should be produced without AI assistance. Strategic planning, performance reviews, early-stage creative concepts, sensitive customer conversations, specific analytical exercises. Tell employees what those categories are. Protect them. Evaluate on them.

Audit for dependency, not just adoption. Most AI dashboards track how many employees are using AI. That is the wrong metric. Track whether your people are building or losing capability. Ask managers to identify which team members can no longer complete core tasks unassisted. Ask honest questions about early-career employees who look productive on paper but can't think through a problem in a real-time conversation.

Lead from the front. The single most consistent finding in organizational AI research is that companies where leaders actively and thoughtfully use AI themselves outperform companies where leadership outsources AI engagement to IT. But "use it themselves" has to include engaging with the cognitive question, not just celebrating the speed gains. When the CEO can articulate a view on what kinds of thinking should be handed to AI and what kinds should not, the rest of the company can follow.

The Bottom Line

The research is not saying AI is making us dumber. The research is saying AI is making many of us stop practicing the thinking that made us capable in the first place, and the consequences are starting to show up in memory, in judgment, in output quality, and in neural connectivity patterns you can measure with an EEG.

The answer is not to stop using AI. That ship has sailed, and honestly, it should have. These tools are the most powerful professional development leverage most workers have ever had access to.

The answer is to use them the way the top performers already do. Build the skill first. Trust yourself more than the tool. Create space to think without assistance. Use AI as a critic more than a creator. Translate every AI output into your own words before you ship it. Audit your capabilities monthly. Treat your brain the way a lifter treats a body. The work that feels hard is the work that builds you.

The workers who do this will pull away from the rest of the workforce over the next decade in a way that nobody currently predicting AI's impact on jobs has properly accounted for. Not because AI replaces them. Because they will be the ones running the companies, advising the clients, shipping the real products, and making the calls that matter, while everyone else waits for the loading indicator.

The threat was never artificial intelligence.

The threat was always cognitive laziness dressed up as productivity.

Build the skill. Use the tool. In that order. Every time.


Tim Bish

Tim cuts through AI hype to deliver research-backed insights for business leaders and technology professionals. He helps teams build practical, strategic AI capabilities through hands-on training and education in the Great Lakes Bay Region and beyond.

Cognitive Neuroscience and AI

  1. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X., Beresnitzky, A. V., Braunstein, I., Maes, P. (2025). "Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task." arXiv preprint arXiv:2506.08872. MIT Media Lab.
  2. Time Magazine (June 2025): Coverage of MIT ChatGPT brain study, including commentary from Dr. Zishan Khan on cognitive and psychological consequences of LLM overreliance.

Critical Thinking and Knowledge Work

  1. Lee, H., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., Wilson, N. (2025). "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers." Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. Microsoft Research and Carnegie Mellon University.
  2. 404 Media (February 2025): Coverage of Microsoft study, including "atrophied and unprepared" quote from the research team.

AI Tool Use and Cognitive Offloading

  1. Gerlich, M. (2025). "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking." Societies 15(1), 6. MDPI. (Corrected September 2025.)
  2. PsyPost (March 2025): Detailed coverage of the Gerlich methodology, including Halpern Critical Thinking Assessment and interview protocol.

Workforce AI Adoption and Training Data

  1. Adecco Group (2025): Global Workforce of the Future Report. Worker AI training rates and productivity savings.
  2. Microsoft, BCG, and McKinsey (2025 to 2026): Supporting data on leadership AI engagement and workflow redesign as predictors of AI ROI.

Context and Framework

  1. Akgun, M., Toker, E. (2024): AI and memory retention research cited in Harvard Gazette coverage of AI and cognition.
  2. Habib et al. (2024): AI and creative confidence research in the Alternative Uses Task context.
  3. Bai et al. (2023): ChatGPT and student engagement review.
  4. Cognitive Load Theory (Sweller, various): Framework for understanding germane vs. extraneous cognitive load in AI-assisted learning.