The Hiding Pattern: Why Developers Lie About Their AI Usage

The Hiding Pattern

You used Claude for three hours yesterday. In standup this morning, you said “I figured out the auth flow.” Not “Claude figured out the auth flow.” Not “I spent ninety minutes prompting until something worked.”

You figured it out. You.

Nobody asked you to lie. Nobody would have punished honesty. But the words came out edited. The AI part got trimmed. You described the output as if you’d written it from scratch.

Sound familiar?


The Diagnostic Question

Kimberly Young’s Internet Addiction Test, the first validated instrument for measuring internet addiction, contains 20 questions. Question 18 asks: “How often do you try to hide how long you’ve been online?”

Not “how often do you use the internet.” How often do you hide it.

Concealment is a clinical marker. Across behavioral addiction research — gambling, gaming, internet use — hiding the behavior from others is one of the strongest predictors of problematic use. Not the behavior itself. The gap between what you do and what you say you do.

In August 2025, WalkMe surveyed over 1,000 US workers for their AI in the Workplace report. The finding: 48.8% of employees admitted to hiding their AI use at work. Among C-suite leaders — the people most empowered to use whatever tools they want — 53.4% concealed their usage. Among Gen Z workers, 62.6% had completed work using AI but presented it as entirely their own.

Half the workforce is doing something. Half the workforce is hiding it.

That pattern has a name in clinical literature. It isn’t “workflow optimization.”

The Three Layers of Hiding

Developer AI concealment operates on three levels. Each one deeper, each one harder to notice from inside.

Layer 1: Omission. You don’t mention the AI. Standup: “I refactored the middleware.” PR description: “Simplified error handling in auth service.” All true. All incomplete. The tool that wrote 80% of the code doesn’t appear in any communication. This is the most common layer. It feels like brevity, not dishonesty.

Layer 2: Minimization. When asked directly, you downplay. “Yeah, I used Copilot for some boilerplate.” The boilerplate was the entire implementation. You describe AI assistance as a minor convenience rather than a primary method. You frame three hours of prompting as “a quick check.”

Layer 3: Substitution. You actively construct a narrative where the AI wasn’t involved. You add manual commits between AI-generated blocks to make the git history look organic. You rewrite AI output in your own style before committing. You memorize the solution so you can explain it without referencing how you found it.

Layer one is universal. Layer two is common. Layer three is a signal.

Why We Hide

The concealment isn’t random. It maps to specific fears.

Competence threat. If your team learns you used AI for 70% of a feature, does that make you a 70% developer? Rationally, no. Tools don’t diminish skill. But the feeling is real. A study of AI tool stigma in healthcare found that physicians rated a colleague who used AI assistance significantly lower in clinical skill and competence, even while acknowledging that the AI improved accuracy. The bias exists across professions. You’re not imagining it.

Effort misattribution. Software culture still valorizes the grind. Late nights. Clever solutions. Hard-won debugging sessions. AI erases the visible effort. Three hours of careful prompting, evaluating, and integrating produce the same git diff as three hours of manual coding. But one story gets respect. The other gets “so the AI did it?”

Job security. If AI can do your work, management might wonder why they’re paying you. This fear is less irrational than it sounds. 53.4% of C-suite leaders hide their own AI use. If leadership conceals the tool, everyone below them reads the signal: this isn’t safe to discuss openly.

The Compounding Problem

Hiding creates its own feedback loop.

When you hide AI usage, you can’t discuss problems with AI usage. You can’t say “I spent two hours in a prompting loop and got nowhere” because admitting the loop means admitting the tool. You can’t flag that AI-generated code needs extra review because flagging it means revealing how much code is AI-generated.

The team loses signal. Bug density rises — Uplevel’s 2024 study found a 41% increase in bugs among Copilot users — but nobody connects it to the hidden variable. Coordination costs increase, but the source stays invisible. The manager sees metrics going sideways and can’t diagnose why, because the biggest change in the team’s workflow is the one nobody talks about.

Isolation compounds. You think you’re the only one spending three hours debugging AI hallucinations. You’re not. Your colleague two desks over did the same thing yesterday. Neither of you will mention it.

The Character.AI Parallel

The starkest version of this pattern emerged outside software. Character.AI — a chatbot platform popular with teenagers — generated a wave of concealment research. Users hid usage from parents. Some created alternative accounts. Some cleared browser history obsessively. The platform’s own safety features required teens to voluntarily opt into parental oversight — a design choice that assumed the population most likely to hide their usage would voluntarily make it visible.

The parallel isn’t exact. Developers aren’t teens. AI coding tools aren’t chatbots. But the concealment mechanics are structurally identical: a behavior that produces both value and shame, hidden from the people closest to the user, rationalized as privacy rather than recognized as a signal.

The question isn’t whether you use AI. It’s whether you can talk about it honestly.

Recognition

Five questions. Answer them privately.

  1. Have you described AI-generated work as your own in a standup, PR, or conversation this week?
  2. When someone asks how you solved a problem, do you edit out the AI steps?
  3. Have you ever rewritten AI output specifically so it wouldn’t look AI-generated?
  4. Do you know how many hours you spent with AI tools yesterday? Would you share that number with your team lead?
  5. If your company published everyone’s AI usage stats tomorrow, would you feel exposed?

If three or more landed — not “yes” exactly, but that specific flinch of recognition — you’re in the pattern.

What to Try

Concealment breaks when someone goes first.

Name the tool in one standup this week. Not a confession. Not a speech. “I used Claude to scaffold the migration, then rewrote the rollback logic by hand.” Factual. Specific. Brief. You’ll notice two things: nobody cares as much as you feared, and it gets easier the second time.

Track the gap. For one week, note the difference between your actual AI usage and what you communicate. Hours spent vs. hours reported. Files generated vs. files attributed. The gap itself is the data. No judgment. Just measurement.
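If you want the measurement to be concrete rather than a mental note, a throwaway script is enough. This is a hypothetical sketch of the exercise, not part of any tool mentioned in this article; every name in it is invented:

```python
# Hypothetical sketch: log actual vs. reported AI hours for a week
# and compute the gap. Field names are illustrative, not a real tool.
from dataclasses import dataclass

@dataclass
class DayLog:
    day: str
    ai_hours_actual: float    # hours genuinely spent prompting / reviewing AI output
    ai_hours_reported: float  # hours you actually mentioned to anyone on the team

def weekly_gap(logs: list[DayLog]) -> float:
    """Total unreported AI hours across the logged days."""
    return sum(d.ai_hours_actual - d.ai_hours_reported for d in logs)

week = [
    DayLog("Mon", 3.0, 0.5),
    DayLog("Tue", 1.5, 0.0),
    DayLog("Wed", 2.0, 2.0),
]
print(f"Hidden hours this week: {weekly_gap(week):.1f}")  # 4.0
```

The number itself matters less than the habit of writing both columns down: a gap of zero means you have nothing to hide; a gap that grows all week is the pattern making itself visible.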

Separate tool from identity. A carpenter who uses a nail gun instead of a hammer isn’t a lesser carpenter. The skill is in knowing where to nail, not how hard you swing. If this reframe feels intellectually true but emotionally hollow, that’s the concealment pattern protecting itself.

Ask your team. Not “does anyone use AI?” — everyone will say yes. Ask: “How many hours did you spend with AI tools this week?” The silence will tell you everything about your team’s hiding pattern.


The OnTilt Self-Check measures Negative Consequences as one of six dimensions. Concealment lives there, alongside relationship strain, skipped meals, and regret. Take the Self-Check: 14 questions, 3 minutes, anonymous. Nobody will know your score.

Unless you decide to share it.


Sources:

  • Young, K.S. (1998). Internet Addiction Test (IAT). Question 18: “How often do you try to hide how long you’ve been online?” Published by Stoelting Co.
  • WalkMe / SAP. (2025, August). “AI in the Workplace 2025 Survey.” 1,000+ US workers. Reported in Fortune: 48.8% of employees hide AI usage; 53.4% of C-suite; 62.6% of Gen Z present AI work as their own.
  • Uplevel Data Labs. (2024). “Gen AI for Coding Research Report.” 800 developers, GitHub Copilot. 41% increase in bug rate. resources.uplevelteam.com
  • Fisher Phillips. (2025). “Your Employees are Hiding Their AI Use From You.” fisherphillips.com
  • Transparency Coalition. (2025). Character.AI concealment behaviors in minors. transparencycoalition.ai

OnTilt is a research project studying behavioral patterns in AI-assisted work. The quiz is a self-check tool, not a diagnostic instrument. Read more on our About page.