When the Builder Breaks: AI Fatigue From the Inside


Siddhant Khare builds AI agent infrastructure for a living. He maintains OpenFGA at the CNCF, ships MCP servers, created tools for agent authorization and context deduplication. He is not a casual user. He is the person who builds the plumbing.

He shipped more code than ever before. He felt more exhausted than ever before.

In February 2026, he wrote about it. The title: “AI Fatigue Is Real and Nobody Talks About It.” Sixteen minutes of an infrastructure engineer describing, in precise detail, how the tools he builds broke him.

A month later, a developer named aziz_sunderji posted on Hacker News: “I think I’m addicted to Claude Code. All I want to do all day is explore ideas using data. I worry I’ll look back ten years from now and question my time use.”

Thirty people replied. Most of them recognized the pattern.


The Productivity Paradox

Khare’s core observation is counterintuitive. Tasks got faster. Individual problems that took three hours now took 45 minutes. But he didn’t get more rest. He got more problems.

“AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.”

Before AI, he spent full days on single design problems with deep focus. After AI, he handled six problems daily through rapid context-switching. The bottleneck moved. It didn’t shrink.

Worse: the type of work shifted. Creation became evaluation. And those two activities feel completely different inside your head. Generation produces flow. Evaluation produces decision fatigue. You can generate for hours and feel energized. You can evaluate for hours and feel hollow.

Khare puts it plainly: AI-generated code requires more careful review than human-written code because the patterns are unpredictable. You can’t skim it the way you skim a colleague’s code whose habits you know. Every review demands full attention.

Six problems a day. Each demanding full attention. That’s not productivity. That’s a recipe for what BCG researchers called “AI brain fry.”

“I’m Not Sure I Want to Go Back”

On Hacker News, the responses split into two camps. Neither is reassuring.

Camp one: normalization.

“Maybe the addiction is not to Claude but to productivity, and Claude is just an enabler. (And maybe the productivity is an illusion, time will tell.)”

— saulpw

“I wish my other addictions were only $200/mo.”

— schmookeeg

Camp two: recognition.

“I am a 15+ software developer, and I am not sure if I want to keep working in the same way I used to if claude code suddenly disappears one day.”

— erdemo

Read that again. A developer with fifteen years of experience is uncertain whether they could return to working without an AI tool. That’s not enthusiasm. That’s dependency speaking — the quiet kind that sounds like preference.

Another commenter described the scope creep:

“I have Claude Code maintaining my Obsidian Vault, managing my Home Assistant setup via SSH, helping me buy life insurance and file my taxes…”

— dimitri-vs

The tool crossed from code into life management. A third commenter reported watching a colleague on a company retreat — on a bus ride to dinner, phone out, running a Claude Code session.

And then the comment that cut through everything:

“This community is obsessively pro-AI. Asking here is the equivalent of asking the guy who has sat at the slot machine next to you for the past three hours if he thinks you have a gambling problem.”

— lowsong

The Six Dimensions, Two Sources

Khare’s article and the HN thread describe the same patterns through different lenses. One is introspective analysis. The other is real-time peer confession. Together, they map cleanly onto the OnTilt framework.

Loss of Control. Khare’s “just one more prompt” trap — iterative refinement with diminishing returns. The HN thread’s OP spending all day exploring ideas without output. The gap between intended and actual usage.

Session Escalation. Khare describes the nondeterminism problem: same prompts, different results, creating a “constant, low-grade source of stress” that fuels the urge to keep trying. Near-misses that feel close enough to justify one more attempt.

Dark Flow. The context-switching marathon. Six problems a day, each absorbing but none completing. sshine on HN describes it from the inside: “Claude Code gives me the courage to imagine that I’ll have actual progress on big things because it helps me maintain an overview and not get stuck on details or gas out.” The absorption is real. Whether it produces proportional output is the question nobody asks until later.

Operational Dependency. erdemo’s confession: unable to imagine working the old way. Khare’s thinking atrophy — months of outsourcing first-draft reasoning to AI degraded his ability to reason from scratch. At a whiteboard design review without AI, he struggled with concurrency problems he understood conceptually.

His analogy is precise:

“It’s like GPS and navigation. Before GPS, you built mental maps… After years of GPS, you can’t navigate without it.”

Anticipation Shift. The FOMO treadmill. Khare spent weekends evaluating new tools — Claude Code, Codex CLI, GPT-5.3, Gemini CLI, CrewAI, AutoGen, LangGraph — chasing 5% improvements while losing deeper expertise. The excitement of the new tool became the reward. The output became secondary.

Negative Consequences. Khare burned out in late 2025. Became indifferent to quality. The HN thread’s OP worries about looking back in ten years and finding nothing but charts. A colleague misses a team dinner for a Claude Code session.

What They Recommend

Khare’s interventions are specific and tested on himself. The HN thread adds a few more from the crowd. None require quitting anything.

The three-prompt rule. If AI doesn’t reach 70% usability within three prompts, write the solution yourself. Khare calls this the single rule that saved him the most time. It breaks the near-miss-to-retry spiral by imposing a hard cutoff.
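The rule is a habit, not a tool, but its logic is simple enough to sketch: a bounded retry loop with a usability threshold and a hard fallback. This is a hypothetical illustration (the function and parameter names are mine, not Khare’s), assuming you could score a draft’s usability as a fraction:

```python
def three_prompt_rule(generate, usable_fraction, threshold=0.70, max_prompts=3):
    """Sketch of Khare's three-prompt rule (hypothetical helper, not his code).

    generate(attempt) -> a candidate draft; usable_fraction(candidate) -> 0.0..1.0.
    Returns (candidate, source) where source is "ai" or "manual".
    """
    for attempt in range(1, max_prompts + 1):
        candidate = generate(attempt)
        if usable_fraction(candidate) >= threshold:
            return candidate, "ai"  # good enough: take it and move on
    # Hard cutoff after three attempts: stop iterating, write it yourself.
    return None, "manual"


# Toy run: each attempt gets closer but never clears 70% -- exactly the
# near-miss spiral the rule is designed to break.
scores = {1: 0.40, 2: 0.55, 3: 0.65}
result, source = three_prompt_rule(
    generate=lambda n: f"draft-{n}",
    usable_fraction=lambda c: scores[int(c.split("-")[1])],
)
print(source)  # -> manual
```

The point of the sketch is the shape, not the numbers: the loop has a fixed iteration count, so a sequence of tantalizing 65% drafts cannot pull you into a fourth, fifth, or tenth prompt.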

Separate thinking time from AI time. Mornings for manual reasoning — sketching, whiteboarding, writing approaches by hand. Afternoons for AI-assisted execution. The first hour of the day without AI is non-negotiable. This directly combats thinking atrophy.

Time-box sessions. Khare uses 30-minute timers. The OnTilt Checklist recommends 90 minutes based on ultradian rhythm research. Pick a number. The number matters less than the boundary.

Log AI effectiveness for two weeks. Khare found a clear split: AI saves time on boilerplate, documentation, and test generation. AI costs time on architecture decisions, complex debugging, and codebase-specific work. Knowing which is which changes how you reach for the tool.

Accept 70%. Stop chasing perfect output. Take the 70% the AI gives you and write the rest yourself. Perfectionism plus probabilistic output equals an infinite refinement loop. The 70% rule breaks it.

Go deep on one tool. Stop evaluating new platforms every weekend. Pick one AI coding assistant and learn it thoroughly. kylecazar on HN frames it well: “You probably aren’t addicted to CC, I suspect you are just hopping from idea to idea too quickly because these new tools allow for it.”

Set an ambitious, bounded goal. lopatin on HN: “Set an ambitious goal that is achievable using Claude Code, and focus on delivering it.” Open-ended exploration feels productive but produces nothing shippable. A goal creates a finish line. Dark flow needs a finish line.

Focused review, not total review. You cannot rigorously review every line of AI-generated code at scale. Khare’s solution: focused review on security, data handling, and error paths. Automated testing for everything else. This is pragmatic, not reckless — it’s triage.

The Builder’s Paradox

Khare’s story has an ironic twist. His burnout period — late 2025, when he became indifferent to quality — produced his best work. The exhaustion forced him to see broken problems clearly. He built Distill for deterministic context deduplication. He created agentic-authz for agent authorization. He started AgentTrace for observability.

Breaking down clarified what needed building.

Simon Willison, who has observed this pattern in dozens of developers, offers the most calibrated take on the HN thread:

“I wouldn’t worry about it just yet — this is all very novel, and there’s a lot of excitement involved in figuring out what it can do. If you’re still addicted to it in three months time I’d start to be concerned. For the moment though you’re building a valuable mental model.”

Three months. That’s a reasonable window. The question is whether you’ll notice when the window closes.

The GPS analogy applies. Using GPS for a road trip is practical. Using GPS to navigate your own neighborhood is a signal. The tool didn’t break your sense of direction. You stopped exercising it. The difference is hard to spot from inside.

Khare’s conclusion: “The real skill isn’t prompt engineering, model selection, or workflow optimization. It’s knowing when to stop.”

The developers who thrive with AI won’t be the ones who use it the most. They’ll be the ones who keep the ability to work without it — and choose when to reach for it.


Where do your patterns concentrate? Take the OnTilt Self-Check — 14 questions, 3 minutes, anonymous. It surfaces what you might not be tracking.


Sources:

  • Khare, S. (2026, February 8). “AI Fatigue Is Real and Nobody Talks About It.” siddhantkhare.com
  • aziz_sunderji. (2026, July). “Addicted to Claude Code–Help.” Hacker News. Discussion thread. Users cited: saulpw, schmookeeg, erdemo, dimitri-vs, jdorfman, lowsong, sshine, kylecazar, lopatin, simonw.
  • Kellerman, G.R. & Kropp, M. (2026). “AI Brain Fry” study. Harvard Business Review / Boston Consulting Group.
  • Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row.
  • Schüll, N.D. (2012). Addiction by Design: Machine Gambling in Las Vegas. Princeton University Press.

OnTilt is a research project studying behavioral patterns in AI-assisted work. The quiz is a self-check tool, not a diagnostic instrument. Read more on our About page.