Your AI Tool Didn't Steal Your Flow State — You Traded It

Two developers. Same feature. Same codebase.

Developer A uses Claude Code for four hours. Prompts fly. Code streams in. Tests pass on the third try. She feels amazing. The feature ships.

Developer B opens a blank file. Two hours of manual coding. No suggestions. Same feature ships.

Developer B was more productive. Half the time, same result.

Developer A felt more productive. By a mile.

That gap — between feeling productive and being productive — is where the problem lives.

What Flow Actually Is

Mihaly Csikszentmihalyi spent decades studying “optimal experience.” Flow state. Work stops feeling like work. It becomes absorbing, rewarding, effortless-seeming effort.

He identified specific criteria. Clear goals. Immediate feedback. Challenge-skill balance. Loss of self-consciousness. Time distortion. And one more: a sense of control. Not control over outcomes. Control over actions. You decide what happens next.

AI coding tools satisfy most of these criteria well.

Clear goals? You have a prompt. Immediate feedback? Code streams in within seconds. Challenge-skill balance? The AI adjusts to your level. Time distortion? Ask anyone who’s done a four-hour session that felt like forty minutes.

But that last criterion — sense of control — breaks the illusion.

Dark Flow

Gambling researchers coined “dark flow” for a specific state. Slot machine players enter something that looks like flow. They’re absorbed. They lose time. They feel engaged. But they aren’t making meaningful decisions. The machine drives. They ride along.

The defining difference: in real flow, you choose your next action. In dark flow, you react to what’s presented.

Watch yourself during an AI coding session. The tool suggests a function. You read it. Accept, modify, or reject. The tool suggests the next piece. Accept, modify, reject. Again.

You’re engaged. Time disappears. But rewind the tape. Count the moments where you initiated an action versus responded to one. Count the moments where you decided what to build versus evaluated what was built for you.

That ratio tells you who was driving.

Our quiz measures this as “Dark Flow / Immersion” — one of six dimensions. Immersion isn’t bad. Immersion without agency is consumption dressed as creation.

The Agency Test

A concrete experiment. Try it during your next session.

Before the AI generates its next suggestion, pause. Cover the output. Write down — on paper — what you’d do next. What function you’d write. What approach you’d take. The next three lines.

Uncover the AI’s output. Compare.

If the suggestion surprises you — an approach you hadn’t considered, a problem you hadn’t identified — the AI was driving. You were a passenger who felt like a pilot.

If the suggestion matches your plan — same approach, same structure — you were driving. The AI was a power tool in your hands.

Do this ten times. Count the mismatches.

Most developers find the results uncomfortable. Not because the suggestions are bad. Because the surprise-to-match ratio reveals how much of the work was their own thinking versus watching someone else think.

A high surprise ratio isn’t an AI failure. It’s data about who does the cognitive work.

Deep Work vs. AI Work

Cal Newport defines deep work as “professional activity performed in distraction-free concentration that pushes cognitive capabilities to their limit.” The output creates new value. The activity improves your skill.

That last part matters. Deep work makes you better at what you’re doing. It builds expertise through deliberate practice. It compounds.

AI-assisted work fragments concentration into micro-decisions. Accept this suggestion. Reject that one. Modify this signature. Each decision takes seconds. Each is too small for deep concentration. The rhythm: stimulus, evaluation, response.

A rhythm analogous to email triage. Social media scrolling. Slot machines.

You can ship code in this mode. Features work. Tests pass. But the cognitive muscle you exercise is evaluation, not generation. You get better at judging code, not writing it. You train taste, not craft.

Newport would call this shallow work in deep work’s clothes. Time blocks look the same on your calendar. The fatigue feels similar — sometimes worse, because constant micro-decisions drain in their own way. But the skill development curve is flat.

Six months of AI-assisted coding makes you better at prompting. Six months of manual coding makes you better at coding. Both are skills. Not the same skill.

The Dependency Gradient

This isn’t binary. The question is where you draw the line. And whether you draw it consciously.

Architecture decisions require sustained, original thought. Your brain needs to hold multiple abstractions at once, test them against each other, synthesize something new. AI tools fragment this process. Every suggestion interrupts. Every interruption collapses the mental model you were building.

Implementation — translating clear design into working code — benefits from AI assistance. Design decisions are made. Architecture is set. Now you need syntax, API calls, boilerplate, edge cases. AI shines here without exacting the cost of lost agency.

A ratio that works: design 20% of your time without AI. Implement 80% with it.

Not because AI can’t help with design. Because your brain can’t do deep design work when something keeps handing you answers before you finish the question.

Reclaiming Real Flow

Four practices. All simple. None require quitting anything.

AI-free architecture blocks. When designing a system — choosing patterns, defining interfaces, mapping data flow — close the AI tool. Open a blank document or whiteboard. Think. The discomfort in the first five minutes is withdrawal from constant stimulation. It passes. What comes after is actual flow.

Design before implementation. Write your approach before you prompt. A paragraph is enough. “This component takes X, transforms through Y, outputs Z. The tricky part is the edge case where…” Now open the AI tool. Now you’re the driver.

The daily agency journal. End of each workday, answer one question: “What did I decide today?” Not “what did I ship.” What decisions did you make? If the answer is mostly “accepted or rejected AI suggestions” — that was evaluation, not creation.

Deliberate alternation. Some tasks with AI. Some without. Not punishment. Training. A musician who only plays with accompaniment never holds tempo alone. A developer who only codes with AI never holds architecture alone. You need to play solo.

The Uncomfortable Truth

Nobody stole your flow state. No tool does this to you. You’re making a trade. Examine the terms.

You trade agency for speed. Skill development for immediate output. The discomfort of not knowing for the comfort of suggestions. Each trade is rational in the moment. Accumulated over months, they reshape what kind of developer you are.

The question isn’t whether AI tools are good or bad. The question is whether you make these trades consciously. Or whether you’ve stopped noticing them.

Dark flow feels like productivity. It looks like productivity. It produces artifacts that resemble productivity. But when the tool goes down — and tools always go down — you discover what skills you kept and what you traded away.

Check Your Pattern

Dark Flow is one of six dimensions in our self-check quiz. The others: loss of control, session escalation, operational dependency, negative consequences, anticipation shift. Together they map a behavioral profile — not a diagnosis, a mirror.

Take the self-check. 14 questions. 3 minutes. The results might confirm what you suspect. Or they might surprise you.

Either way — data beats a feeling.


OnTilt is a research project studying behavioral patterns in AI coding tools that may parallel mechanisms described in addiction research. The quiz is a self-check tool, not a diagnostic instrument. Read more on our About page.