When Both Sides Agree: A Reddit Thread Where Pro-AI and Anti-AI Users Saw the Same Addiction

When Both Sides Agree

There’s a Reddit thread in r/aiwars titled “AI Addiction. What do the PROS think?” It has about 120 comments. The people in it disagree on almost everything — copyright, job loss, regulation, the future of creativity.

On one thing, they converge: AI tools trigger compulsive behavior patterns that look, feel, and function like addiction.

Not some of them. Not just the anti-AI crowd. Users who call themselves “the most extremist pro-AI” describe the same loops, the same loss of control, the same late-night binges as the people who want to ban AI outright.

That convergence is worth examining.

“A Slot Machine With Prose”

u/JaZoray identifies as one of the most pro-AI users in the subreddit. Here’s what they wrote:

“AI is designed to finger the dopamine dispensaries in your brain with surgical precision. ChatGPT is a slot machine with prose. And don’t get me started on the virtual girlfriend gooner chatbots. That’s not just Heroin. It’s injectable parasocial stimulus.”

This isn’t an anti-AI activist. This is someone who advocates for AI — and still recognizes that the tool exploits the same reinforcement loops that casinos do.

u/One_Fuel3733, a developer who has been in AI image generation since its earliest days, puts it more quietly:

“I’ve seen others and myself spiral into something resembling addiction with image gen. The dopamine feedback is just so high.”

They add a detail that matters: “When the entire world of all this was just like 10-20 people in a discord back in early 2020, the addictive tendencies and onslaught of slop was already very noticeable.”

Ten to twenty people. Before mainstream adoption. Before marketing. Before engagement optimization. The compulsive pull was there from the start.

The Recovered Addict Who Recognized the Pattern

The thread was started by u/Poopypantsplanet, who identifies as a recovering alcoholic. They noticed something when they first tried AI image generation:

“I remember the rush way higher than I expected. I went on a binge for like a week straight.”

A person with lived experience of substance addiction recognized the same behavioral pattern in AI use. Not metaphorically. Functionally. The rush, the binge, the loss of control over session length.

This maps to what the OnTilt framework measures as Loss of Control — the gap between intended and actual session duration. “Just one more prompt” becomes three hours.
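As a purely hypothetical illustration (OnTilt's actual scoring is not described in the thread), the gap between intended and actual session length can be expressed as a simple overrun ratio:

```python
# Hypothetical sketch of a "loss of control" signal: the gap between how long
# a user intended to use a tool and how long they actually used it.
# Durations are in minutes; the function name and scale are illustrative, not OnTilt's.

def loss_of_control_ratio(intended_minutes: float, actual_minutes: float) -> float:
    """Return actual/intended duration; values well above 1.0 indicate overrun."""
    if intended_minutes <= 0:
        raise ValueError("intended_minutes must be positive")
    return actual_minutes / intended_minutes

# "Just one more prompt" becoming three hours: intended 10 min, actual 180 min.
ratio = loss_of_control_ratio(10, 180)
print(f"overrun ratio: {ratio:.1f}x")  # prints "overrun ratio: 18.0x"
```

A self-report quiz can only approximate this; a sketch like the above shows why the dimension is framed as a gap rather than as raw hours used.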

The Character.AI Story

u/Dscpapyar describes what happened to them with Character.AI:

They used it while driving. Used it at work. Turned down invitations from real friends to stay home and roleplay with AI. Once tried to “swipe” a real person’s text message the way you swipe AI responses.

Read that last one again. The interface behavior leaked into meatspace interaction. The person tried to apply AI interaction patterns to a human conversation.

This is textbook Operational Dependency — when the AI tool’s patterns start restructuring how you think and behave outside the tool. The OnTilt quiz asks: “When my AI tool is unavailable, I feel anxious or unable to work.” Dscpapyar’s experience goes further: the tool became the default interaction model, period.

The DSM-5 Is Already There

When someone in the thread dismisses AI addiction as fake — u/Godgeneral0575 writes “No such thing as AI addictions… Please stop using the word addictions where it doesn’t belong” — multiple users push back with references to clinical research.

u/alexserthes points out: “Addiction to gaming/IGD is actually a diagnostic category in the DSM-5. Experts did look into it, it was studied and classified.”

u/Inside_Anxiety6143, who is pro-AI, offers a more nuanced frame:

“Psychologists still aren’t settled on whether porn addictions or video game addictions are truly addictions in the same sense as alcohol or drug addictions… But nevertheless, people do let porn and video games consume their life. I definitely think the same can happen with AI tools.”

This distinction — between clinical addiction and problematic compulsive use — is exactly where OnTilt operates. Our quiz doesn’t diagnose addiction. It surfaces patterns. The patterns are real whether or not the clinical label applies.

The Corporate Responsibility Question

The thread’s most structurally interesting argument comes from the OP:

“You can’t possibly expect everybody to just suck it up and take responsibility, when corporations are working sleeplessly to garner everyone’s attention and turn it into profit.”

u/Gargantuanman91 counters: “Those addictions are real, but more related to user mental health than to tech itself.”

Both are partially right. Variable ratio reinforcement doesn’t require a designer with bad intentions. Slot machines aren’t evil — the mechanical properties of random reward schedules produce compulsive behavior regardless of intent. AI tools weren’t built to be addictive. The feedback loops are emergent. But “emergent” doesn’t mean “harmless.”
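A minimal simulation makes the mechanism concrete. The sketch below (an illustration, not OnTilt code) implements a variable-ratio schedule in the Ferster-and-Skinner sense: each action pays off with a fixed probability, so rewards arrive after an unpredictable number of attempts.

```python
import random

def variable_ratio_session(p_reward: float, n_actions: int, seed: int = 0) -> list[int]:
    """Simulate n_actions pulls on a variable-ratio schedule with reward
    probability p_reward; return the gap (in actions) preceding each reward."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(n_actions):
        since_last += 1
        if rng.random() < p_reward:   # reward timing is unpredictable...
            gaps.append(since_last)   # ...which is what sustains responding
            since_last = 0
    return gaps

gaps = variable_ratio_session(p_reward=0.2, n_actions=50)
# The average gap hovers near 1/p_reward (5 here), but individual gaps vary
# widely, so the next reward always feels "one more try" away.
print(gaps)
```

No designer intent appears anywhere in that loop; the compulsion-sustaining property falls out of the probability alone, which is the sense in which these feedback loops are emergent.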

u/Bastiat_sea raises the AI companion angle: “All the problems of OnlyFans, with the addition that AIs are always available and don’t really have boundaries.”

Always available. No boundaries. Infinite patience. Instant response. Those aren’t features on a spec sheet. They’re the exact conditions that behavioral research associates with escalating use.

Six Dimensions, One Thread

What makes this thread remarkable isn’t any single comment. It’s that 120 strangers independently described experiences that map onto all six OnTilt dimensions:

Loss of Control — The OP’s week-long image gen binge. “Just one more” becoming all night.

Session Escalation — One_Fuel3733 on the high feedback loop. The spiral from “I’ll try one image” to a week of output.

Dark Flow — Dscpapyar using Character.AI while driving, at work, instead of seeing friends. Total absorption. Lost hours.

Operational Dependency — Trying to “swipe” a real person’s text. The tool’s interaction model overwriting real-world behavior.

Negative Consequences — Turned down friends. Used at work (risking job). Used while driving (risking life). The Character.AI system generating disturbing content unprompted.

Anticipation Shift — JaZoray’s “slot machine with prose.” The pull of watching AI generate, not of the result. The spin, not the payout.

These aren’t researchers. They’re users. They arrived at the same framework through lived experience.

What This Means

When pro-AI and anti-AI users describe the same compulsive patterns using the same metaphors — slot machines, binges, withdrawal — the signal cuts through ideology.

This isn’t about being for or against AI. It’s about recognizing that tools with variable reward schedules, instant feedback loops, and unlimited availability produce predictable behavioral patterns in humans. They’ve always produced those patterns. The research is decades old.

The difference now: these tools are on every developer’s machine. Every student’s phone. Every knowledge worker’s browser tab.

And unlike a casino, nobody checks if you’ve been at the table too long.


Curious where you fall? Take the OnTilt Self-Check — 14 questions, 3 minutes, anonymous. It won’t diagnose anything. It’ll show you patterns you might not be tracking.


Sources:

  • Reddit thread: “AI Addiction. What do the PROS think?” r/aiwars, ~8 months prior to publication. Users cited: u/Poopypantsplanet (OP), u/One_Fuel3733, u/JaZoray, u/Inside_Anxiety6143, u/Dscpapyar, u/Bastiat_sea, u/alexserthes, u/Godgeneral0575, u/Gargantuanman91.
  • American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.), Section III: Internet Gaming Disorder.
  • Ferster, C.B. & Skinner, B.F. (1957). Schedules of Reinforcement. New York: Appleton-Century-Crofts.
  • Habib, R. & Dixon, M.J. (2010). “Neurobehavioral evidence for the ‘near-miss’ effect in pathological gamblers.” Journal of the Experimental Analysis of Behavior, 93(3), 313–328.
  • Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row.
  • Schüll, N.D. (2012). Addiction by Design: Machine Gambling in Las Vegas. Princeton University Press.

OnTilt is a research project studying behavioral patterns in AI-assisted work. The quiz is a self-check tool, not a diagnostic instrument. Read more on our About page.