Are We Actually Using AI to Create Time Freedom — Or Is This a Zero-Sum Game?
What ChatGPT Is Quietly Doing to the ADHD Brain — and How to Use It Without Losing Yourself
I opened ChatGPT to draft a single email.
I needed to send it. It was three sentences. The kind of thing I'd written ten thousand times in my career. But that day, my brain wouldn't start. The cursor blinked. I pulled up the model.
Forty minutes later, I had not sent the email.
What I had, instead, was a refined business plan I wasn't going to execute, three versions of a workout schedule I wasn't going to follow, and the model's opinion on a parenting decision I'd already made the day before. The email was still sitting there, unwritten, the cursor still blinking. I closed the tab and felt that specific flat, slightly disoriented feeling I now recognize the way I recognize a low-grade hangover: full but not fed, busy but not productive, stimulated but somehow more depleted than when I started.
I am a psychiatric nurse practitioner. I have ADHD. I know better. And I still spent forty minutes outsourcing the wrong things to a tool that was perfectly happy to keep me there.
That moment isn't really about me. It's about all of us.
Are we actually using AI to create more time and freedom — or are we just trading one form of dopamine dysregulation for another?
I want to be honest about my answer, because the truth is more complicated than the productivity influencers and the AI critics will tell you. AI is helping me. AI is also hurting me. Both are true at the same time, and the difference between the two depends almost entirely on whether I know how to set myself free from a tool that was engineered to keep me using it.
Especially if your brain has ADHD. Especially then.
The Dual Truth Most People Are Refusing to Hold
Here's what almost no one is willing to say plainly: AI is a genuine clinical aid AND a dopamine delivery system designed to maximize engagement. Both facts are sitting in the peer-reviewed literature right now, and pretending one of them isn't there is how people get hurt.
A 2025 systematic review in Brain Sciences, looking specifically at AI chatbots and executive function, found that conversational AI can meaningfully support the cognitive skills that ADHD brains struggle with: planning, organization, working memory, task initiation. The limbic system and prefrontal cortex are connected through neurotransmitters like dopamine and norepinephrine, and executive functioning can be trained and improved at various ages through repetition in different practices. AI, used well, can be that repetition. A 2025 study in BMC Psychology went further: it examined whether ChatGPT could help clinicians design rehabilitation plans for adults with ADHD, and the answer was a qualified yes: not as a replacement, but as a complementary aid.
That's the help side. It's real. ADHD adults are using AI to draft emails they couldn't start, to organize tasks they couldn't sequence, to break overwhelm into pieces small enough to act on. I see this in my practice. I do this myself. The benefit is not imaginary.
And at the same time, a separate 2025 study presented at the CHI Conference on Human Factors in Computing Systems examined how AI chatbot interfaces are engineered to exploit the brain's reward system. Researchers identified five primary dopamine mechanisms that AI chatbots exploit: reward uncertainty, the near-miss effect, reward-predicting cues, social rewards, and reward timing. They termed these "dark addiction patterns": design elements that may encourage addictive behavior. Another 2025 study, published in npj Science of Learning, used neuroimaging to show that motivational feedback from AI chatbots increased activity in brain regions associated with reward processing, particularly the ventral striatum.
Translation: the same circuit that lights up when a slot machine almost pays out is the circuit AI chatbots are activating, on purpose, by design. The exact circuit that is already dysregulated in ADHD.
So this is the situation. The literature does not say AI is good for ADHD. The literature does not say AI is bad for ADHD. The literature says AI hits the ADHD reward system harder than almost anything we've encountered, and whether that's a feature or a flaw depends entirely on how you use it.
Why This Matters More If You Have ADHD
If you read my last post on dopamine and ADHD, you already know the foundation: ADHD is fundamentally a disorder of dopamine regulation. The brain is constantly searching for stimulation that produces a hit, and it will find a source whether you consciously choose one or not.
For most of human history, the available dopamine sources were relatively low-bandwidth: food, sex, social connection, accomplishment, and novelty. Even social media, as engineered as it is, requires you to wait for someone else's input to refresh. AI doesn't. AI is on demand, infinite, immediately responsive, and tuned to give you the answer-shaped object your brain was looking for. The effort-to-reward ratio is essentially zero. The dopamine you used to earn over an hour of effortful thinking, you now get in fifteen seconds.
For a regulated brain, that's a productivity miracle. For a brain that was already searching for dopamine, a brain that already has trouble stopping once stimulation begins, that's a fire hose pointed directly at a system that was already overwhelmed.
This is the part I want to be unflinching about, because it's the part I think most clinicians are not yet saying out loud: AI is the most efficient dopamine source the ADHD brain has ever encountered. We are not psychologically prepared for what that means.
The Signs You Don't Connect to AI Use
These are the patterns I'm starting to see in my practice, and the ones I notice in myself. They're worth paying attention to, especially if you have ADHD or suspect you might.
1. You can't start tasks without it anymore. You used to draft an email, outline a project, or write a paragraph on your own. Now those things feel harder than they used to without the model. The scaffolding has become structural.
2. The "AI hangover." A long working session feels productive in the moment and strangely flat afterward. You're not energized. You're not satisfied. You feel slightly stupid, slightly disoriented, like you ate something that filled you up without feeding you. That's the dopamine crash after a high-stimulation, low-genuine-effort session.
3. You're outsourcing decisions you used to make. What to eat. What to wear. How to phrase a hard text. Whether to take the job. AI will give you an answer, and it will sound reasonable. But the muscle that makes those decisions, the prefrontal cortex doing slow, effortful weighing, atrophies when it doesn't get used. People with ADHD already struggle with this muscle.
4. Real boredom feels intolerable. Waiting in line, sitting through a slow meeting, twenty minutes with your own thoughts: those moments feel almost physically uncomfortable now. The phone comes out. The model gets opened. Your tolerance for un-stimulated states has collapsed.
5. You're using AI to avoid hard feelings, not just hard tasks. This is the one most people miss. AI is replacing the friction of being a person. Difficult conversation? Ask the model how to phrase it. Difficult emotion? Type it out and let the model reflect it back. ADHD brains already use stimulation to regulate emotion, and AI becomes another regulation tool, and a profoundly inadequate one.
6. Your sense of your own competence is shifting. You used to know what you were good at. Now you're not sure if the thing you produced was yours or the model's. The line is blurring, and with it, your trust in your own judgment.
📋 If several of those felt familiar, that's worth paying attention to. Not as a moral failing. As information about what your brain is doing.
Zoom Out: The Real Question Underneath the Email
Now back to the email I never wrote.
What was actually happening in those forty minutes wasn't laziness. It wasn't even procrastination, exactly. It was a brain struggling to do something hard that encountered a tool offering an easier dopamine path, and took it. The email required executive function I was short on that day. ChatGPT required none. The brain went where the path was open.
Here's the part that stayed with me: I felt productive the entire time. I was generating output. I was getting responses. The dopamine system was firing. From the inside, it felt indistinguishable from real work. Only afterward, looking at an empty inbox and a closed laptop, did I realize I had spent forty minutes doing things that didn't need to exist.
This is the zero-sum trap. AI didn't give me time back. It took time and convinced me I had used it well. That's not productivity. That's a slot machine that prints reasonable-sounding paragraphs.
But here is where I have to hold both truths at once: the same week, I used AI to break a research project into a project plan I genuinely could not have built unaided that day. It saved me hours. It got me started on something I had been avoiding for weeks. The same tool, the same brain, completely different outcome.
The variable wasn't the tool. The variable was whether I knew what I was using it for, and whether I could stop when the task was done.
The Diagnostic Question: Is It Helping or Replacing?
Here's the question I now ask myself before I open the model. It's the only question that matters.
Am I using AI to do something I would do anyway, just faster — or to replace something I should be developing in myself?
Helping looks like: drafting with AI and then rewriting in your own voice. Using it to summarize research you'd have read more slowly. Using it to break down a task that overwhelm was keeping you from starting at all. Using it as a thinking partner on questions you've already been wrestling with on your own.
Replacing looks like: using AI to make decisions you used to make. Using it to write things in your voice that you can no longer produce in your voice. Using it to avoid the discomfort of thinking, feeling, or sitting with not-knowing.
The first pattern compounds. You get better, faster, more capable. The second pattern atrophies. You get more dependent, less confident, less yourself.
For ADHD brains, this distinction is everything. Our executive function is already vulnerable. Our prefrontal cortex is already underactive. The thing we cannot afford to do is outsource the very functions we are trying to strengthen.
How to Set Yourself Free from a Tool Designed to Keep You
I'm not a luddite. I use AI. I'll keep using it. But I've changed how I use it, and these are the same boundaries I'm working on with patients who want to keep the help without paying the cost.
Use it as scaffolding, not substitution. Draft with AI, but rewrite in your own words. Use it to outline, not to think. Let it lower the activation energy on starting, and then do the cognitive work yourself.
Time-box every session. ADHD brains lose time without external cues. The forty-minute session that was supposed to be five minutes is the rule, not the exception. Set a timer before you open the tab. When it goes off, close the tab. The dopamine pull is real and it does not respect your intentions.
Notice the hangover. After a session, check in: do you feel more capable, or more drained? More confident, or less? More yourself, or less? The answer is data. Use it to calibrate the next session.
Keep at least one domain AI-free. Pick something — journaling, a creative practice, hard conversations, decisions about your own life — and protect it. Your brain needs places where it has to do the work itself, or those circuits go quiet.
Do not use AI as a therapist. I'm seeing this in practice and it concerns me. AI can be a journaling tool. It cannot hold a clinical relationship, recognize when you're decompensating, or notice the things you aren't saying. If you're using AI to manage real psychiatric symptoms (anxiety, depression, trauma, ADHD), that's a sign you need a clinician, not a better prompt.
Treat the underlying ADHD. This is the part most people skip. If your AI use feels compulsive, if you can't stop, if every attempt at boundaries fails, the AI isn't the problem. The dopamine dysregulation underneath is. Treating ADHD properly (medication when appropriate, sleep, exercise, fewer competing stimulants, real clinical support) changes what AI feels like to use. When the underlying brain is regulated, AI becomes a tool again instead of a compulsion.
The Takeaway: It Keeps You Organized. It's Also Engineered to Keep You.
So is it a zero-sum game?
It depends on the brain, the use pattern, and what it's replacing.
For some people, including me on my best days, AI is genuinely buying back time. It compresses work. It frees up attention for what matters. It scaffolds the executive function that ADHD makes unreliable. The tool serves the life.
For others, and I think this is a larger group than we want to admit, AI is the latest in a long line of stimulation we're using to manage a nervous system that was already running on fumes. We're not getting time back. We're trading one form of dopamine-seeking for a more efficient one. And in the trade, we're losing something quieter: our capacity for slow thought, our tolerance for un-stimulated states, our trust in our own judgment.
For ADHD brains specifically, the risk is higher and the potential is higher. AI can be the executive function support our brains have always needed. AI can also be the dopamine spiral that finally breaks us.
Both are true. Which one happens depends on whether you know how to use the tool, and how to put it down.
You don't have to white-knuckle through this either. But you do have to know the difference between using AI well and being used by it, and most people I see in my practice can feel the difference even before they have language for it.
If something in this post named the feeling you've been having, that's the data. Trust it.
If You're in the Portland Tech Industry, This One's For You
If you work in tech in Oregon (Intel, Nike, Tektronix, Columbia, or any of the smaller shops in between), you are at the front of this. You were early to AI because your job required it. You use it more hours per day than almost any other professional population. And if you have ADHD, diagnosed or not, you're running a brain that was already vulnerable to the exact mechanisms these tools are built around.
This isn't a moral problem. It's a clinical one. The lost hours, the AI hangover, the creeping sense that something about your focus or your judgment has shifted in the last two years: those aren't character flaws. They're symptoms of an under-supported nervous system trying to operate in a stimulation environment it was never designed for. They're worth a conversation. Not with a chatbot. With a clinician.
Adult ADHD evaluations and medication management are available through Empower Mental Health, a telehealth psychiatric practice serving Oregon — including the Portland metro tech corridor. I'm in-network with BCBS Regence Oregon and UnitedHealthcare, including most employer plans through Intel, Nike, and other Beaverton-area companies. Visits are secure, telehealth-only, and built to work around standups and sprint cycles, not against them.
We do the work AI can't: a real evaluation, a real diagnosis, a real treatment plan, and a real human keeping track of how you're actually doing over time.
The tools aren't the problem. An untreated brain trying to use them is. Treat the brain, and the tools become tools again.
If you're in Oregon and ready to find out what's actually going on, reach out today.
📞 Contact Empower Mental Health → empowermental.net/contact
About the Author
Navi Hughes, PMHNP-BC, is a board-certified psychiatric nurse practitioner and the founder of Empower Mental Health, a telehealth psychiatric practice licensed in Oregon. She specializes in ADHD evaluation and treatment in adults — particularly tech professionals, engineers, and high-functioning Portland-area workers — with a focus on the neurological underpinnings of attention, dopamine regulation, and the ways modern technology is reshaping both.
Sources cited:
Pergantis, P., et al. (2025). AI Chatbots and Cognitive Control: Enhancing Executive Functions Through Chatbot Interactions: A Systematic Review. Brain Sciences, 15(1), 47.
Yin, J., et al. (2025). Effects of different AI-driven Chatbot feedback on learning outcomes and brain activity. npj Science of Learning, 10, 17.
The Dark Addiction Patterns of Current AI Chatbot Interfaces. (2025). Proceedings of the CHI Conference on Human Factors in Computing Systems.
Exploring AI-assisted design of executive function rehabilitation programs for individuals with ADHD. (2025). BMC Psychology.
Suggested tags: ADHD, adultadhd, oregonpsychiatry, adhdsymptoms, adhdportland, adhdbendoregon, adhdoregon, adhdmidlife, adhdtechworkers, dopamine, AI, chatgpt, techaddiction, screenaddiction, executivefunction, portlandtech, BCBSregenceoregon, beavertontech, intelmentalhealth, niketechworkers