The OnTilt Checklist: 12 Practices for Healthy AI-Assisted Work

You brush your teeth. You sleep 8 hours. You exercise.

What do you do about your AI habits? Nothing.

That changes now.

Six dimensions below, each drawn from behavioral addiction research — the same frameworks used to study gambling, gaming, and social media. Each dimension gets two concrete practices. No theory. No moralizing. Just things you can start today.


Loss of Control

The defining feature of behavioral addiction: you intend to stop, but you don’t. “Just one more prompt” turns into 90 more minutes. Two practices that attack the mechanism directly.

1. The Session Timer

Set a 90-minute timer before opening your AI tool. When it rings, stop. Not “after this prompt.” Not “let me finish this thought.” Now.

Why 90 minutes? Ultradian rhythm research — your brain’s natural focus-rest cycle. But the real reason is simpler: if you can’t stop when a timer fires, that inability is the problem. The timer surfaces whether you have control.

Use a physical timer. Put it across the room. Getting up to silence it creates a micro-decision point. That moment — standing, walking — is where you reclaim agency.

2. The Pre-Session Contract

Before opening your AI tool, write down one sentence. What you need to accomplish. “Implement login form validation.” “Fix the flaky database test.” “Refactor the payment module.”

Task done? Close the tool. Walk away.

“Exploring” is not a task. “Seeing what it can do” is not a task. “Trying a few things” is not a task. These are the verbal signatures of open-ended sessions. Dark flow lives there.

Write the contract on paper. Paper resists the “well, I also want to try…” drift better than a mental note.


Session Escalation

Tolerance is a hallmark of addiction. The same dose stops working, so you increase it. Sessions get longer. Prompts get more elaborate. “Quick questions” take an hour. Two practices that make escalation visible.

3. The Near-Miss Protocol

Your code almost works. One or two tests fail. The fix feels close. Maybe the next prompt nails it.

Stop. Step away for 10 minutes.

That urge to immediately retry is the near-miss effect. Gambling researchers studied it for decades. Two cherries and a lemon feels closer to a win than three lemons — both are losses. Your 14-out-of-16-passing test suite is two cherries and a lemon.

Distance breaks the cycle. After 10 minutes, you approach the problem differently. Often you see the fix without another prompt. The near-miss effect exploits urgency. Remove urgency, remove the exploit.

4. Session Length Tracking

For one week, log start and end time of every AI session. Don’t change behavior. Just record.

End of the week, look at the numbers. Monday’s 30-minute session becomes Wednesday’s 90-minute session. Thursday’s casual question becomes Friday’s three-hour deep dive.

If durations increase, that’s tolerance. You’re escalating. The same amount of AI interaction no longer satisfies.

Measurement changes behavior. The moment you start logging, you start noticing. That’s the point.
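The log itself can be as simple as a timestamped text file. Here is a minimal sketch in Python; the filename `ai_sessions.log`, the commands, and the log format are illustrative choices, not part of the practice:

```python
# Minimal session logger: run "python log.py start" when you open your
# AI tool, "python log.py stop" when you close it, and "python log.py
# report" at the end of the week to see session durations in order.
import sys
from datetime import datetime

LOG = "ai_sessions.log"  # arbitrary filename for this sketch

def record(event):
    # Append "start <timestamp>" or "stop <timestamp>" to the log.
    with open(LOG, "a") as f:
        f.write(f"{event} {datetime.now().isoformat()}\n")

def report():
    # Pair each "stop" with the most recent unmatched "start" and
    # print the resulting session lengths in minutes, in order.
    starts, durations = [], []
    for line in open(LOG):
        event, ts = line.split()
        t = datetime.fromisoformat(ts)
        if event == "start":
            starts.append(t)
        elif event == "stop" and starts:
            durations.append((t - starts.pop()).total_seconds() / 60)
    for i, minutes in enumerate(durations, 1):
        print(f"session {i}: {minutes:.0f} min")

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else "report"
    record(cmd) if cmd in ("start", "stop") else report()
```

If Thursday's numbers are visibly larger than Monday's, that's the escalation curve the practice is designed to expose.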


Dark Flow

Flow is productive. Dark flow is its shadow. Both share one trait — time disappears. But they differ in one critical way. In flow, you produce output. In dark flow, process consumes you. Streaming tokens feel productive. Rapid prompting feels like momentum. But when you surface hours later, the output is minimal.

5. The Hunger Check

Set a 2-hour alarm. When it rings, ask three questions:

Am I hungry? Am I thirsty? Do I need the bathroom?

If any answer is yes, you were in dark flow. Productive flow doesn’t suppress basic physical signals for hours. Dark flow does. It’s the same dissociative absorption that gaming researchers document — the body’s needs vanish from awareness.

Embarrassingly simple. That’s why it works. You’re not analyzing cognition. You’re checking whether you forgot to drink water. The body doesn’t lie.

6. The Goal Test

Every 30 minutes, say your current goal out loud. Not in your head. Out loud.

One sentence — “I’m building authentication middleware” — means productive flow. Keep going.

Stumbling? Bouncing between three things? Goal is vaguely “improving the code”? That’s dark flow. Streaming output seduced you away from your objective.

Speaking out loud is deliberate. Internal monologue is easy to fake. Your voice doesn’t cooperate with self-deception.


Operational Dependency

Dependency isn’t about frequency. It’s about necessity. You can use a tool daily without depending on it. You become dependent when you can’t function without it. Two practices test the difference.

7. No-AI Fridays

One day per week, code without AI. No Claude. No Copilot. No ChatGPT. Just you, your editor, and documentation.

Track two things: productivity and emotional response.

Productivity drops but you feel fine? No dependency. You use AI as a power tool. You’re competent without it.

Productivity drops and you feel anxious, disproportionately frustrated, unable to start tasks? That’s dependency. You outsourced a cognitive function and lost the ability to perform it alone.

Feels freeing — like a day without your phone? Strongest signal of good hygiene. The tool enhances you. It doesn’t define you.

8. The Outage Drill

Simulate an AI outage. Disable your tools for 2 hours. No announcement. No prep. Turn them off mid-morning on a workday.

Notice your emotional response.

Mild frustration? Normal. You lost a useful tool. Proportionate.

Anxiety? Panic? Compulsive urge to check if the service is back? Disproportionate emotional response to tool unavailability is a clinical marker of behavioral dependency. Not a metaphor. The same marker.

The drill isn’t punishment. It’s a stress test. You test backups, deployments, disaster recovery. Test your cognitive independence too.


Negative Consequences

Every tool has costs. The question is whether you see them. Two practices that make hidden costs visible.

9. The Sleep Audit

For two weeks, track two things: daily AI usage and bedtime.

Compare AI-heavy days to AI-light days. If heavy days push bedtime later (20 minutes, 40 minutes, an hour), the tool is costing you sleep. Sleep loss cascades into decision-making, emotional regulation, health, relationships.

Often the first negative consequence people notice. Not the worst one. The easiest to measure. A clock doesn’t have opinions.

10. The Relationship Check

Ask someone close to you — partner, roommate, friend — one question:

“Have you noticed me being more distracted when coding lately?”

Their answer matters more than yours. Self-report is unreliable for behavioral addiction. The person across the dinner table while you mentally compose your next prompt has data you don’t.

Uncomfortable? That’s information too. If the question feels threatening, ask yourself why.


Anticipation Shift

Healthy use means valuing the outcome. Unhealthy use means valuing the interaction itself. Streaming output becomes the reward. The prompt-response loop becomes the point. Two practices that recalibrate what you optimize for.

11. Output Over Process

After each AI session, evaluate the output. Not the process. Not how elegant the prompts were. The output.

Did you ship a working feature? Fix a real bug? Complete a meaningful task?

Yes? The session was productive regardless of how it felt.

No? If you spent two hours and mainly remember how cool the suggestions were — you optimized for process. The streaming text was the reward, not the result.

Addictive loops feel productive. They always do. The gambler feels like they’re “figuring out the system.” The AI user feels like they’re “making progress.” Feelings aren’t data. Output is.

12. The Promptless Hour

One hour. Write code without AI prompts. Not a test. Not punishment. Just code the way you used to.

Notice how it feels.

Normal — slower, maybe, but fine? Your relationship with AI is healthy. You appreciate the tools without needing them.

Boring? The lack of streaming output and rapid suggestions makes work feel flat? That’s the contrast effect. Your baseline for “coding” shifted. The tool didn’t make you more productive. It made everything else less stimulating.

The contrast effect is reversible. But you have to notice it first.


Start Small

Don’t adopt all 12 at once. Pick one from the dimension that concerns you most. Try it for a week. See what you learn.

These aren’t rules. They’re awareness practices. Like brushing your teeth — boring, daily, foundational.

Hygiene, not abstinence.


Want to know which dimensions need attention? Take the 3-minute self-check quiz. 14 questions. Anonymous. No email required.

OnTilt is a research project studying behavioral addiction mechanisms in AI coding tools. The quiz is a self-check tool, not a diagnostic instrument. Read more on our About page.