AI Psychosis: The Growing Risk of Blind Trust in AI
Mitchell Hashimoto warns of 'AI psychosis', where companies blindly outsource decision-making to AI, a view echoed by Hacker News commenters that sparked debate on the future of engineering.
Mitchell Hashimoto, co-founder of HashiCorp, recently posted a tweet that resonated far beyond his followers: "I believe there are entire companies right now under AI psychosis." The post gathered over 600 points and 280 comments on Hacker News. The complaint isn't that AI is useless; it's that companies are outsourcing their thinking to machines uncritically and wholesale.
What Is AI Psychosis?
Hashimoto's tweet doesn't elaborate, but the HN discussion makes it clear. He's pointing at companies that treat AI as an oracle—prompting ChatGPT or similar for strategic decisions, architectural judgments, and production code without human oversight. One commenter distilled it perfectly:
"I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI and not really about using AI itself."
This isn't a Luddite rant. Hashimoto and the community distinguish between using AI as a tool and abdicating responsibility. The problem is the latter: relying on AI to replace human judgment, especially in areas where it's demonstrably weak—like novel reasoning or system-level understanding.
Why the Hacker News Community Is Sounding the Alarm
The thread isn't just agreement; it's a gallery of horror stories and thoughtful concern. One commenter described a colleague who migrated a database with reckless confidence, equating it to "pouring gasoline on the servers while smoking a cigarette." The metaphor captures the danger of AI-aided operators who lack foundational knowledge.
Another commenter predicted a new industry: "AI rescue consulting is going to become a significant mode of high value consulting, similar to specialists who come in to deal with a security breach or data recovery." This resonates because AI-generated systems can grow incomprehensibly tangled. The thread also highlights a defense mechanism from AI proponents: "the agents will fix it fast." One user observed that when you point out bugs, the reply is "but the agents are so fast!" That circular logic is the core of the psychosis.
The Twofold Danger: Systemic and Cultural
The psychosis isn't about AI adoption; it's about substituting thinking with prompting. We're seeing a new kind of cargo cult: companies hire "prompt engineers" to build entire backends, then wonder why they collapse.
First, the systemic danger: AI-written code scales complexity without human understanding. As complexity rises, the defect close rate drops, and token burn per defect skyrockets. Eventually, every AI change introduces more bugs than it fixes—a death spiral. Second, the cultural danger: it devalues real expertise. An engineer who understands why a migration failed may be replaced by someone who just asks the AI again.
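A toy simulation makes the systemic death spiral concrete. Every number in it is an illustrative assumption, not a measurement:

def simulate(rounds=10, defects=20, bugs_per_fix=0.8, growth=1.08):
    # Each round attempts to fix every open defect, but each fix spawns
    # new bugs at a rate that climbs as the codebase gets more tangled.
    for n in range(1, rounds + 1):
        defects = int(defects * bugs_per_fix)  # bugs created by this round's fixes
        bugs_per_fix *= growth                 # rising complexity makes fixes riskier
        print(f"round {n}: {defects} open defects, {bugs_per_fix:.2f} new bugs per fix")

simulate()

With these made-up rates, the defect count shrinks for a few rounds, then crosses the point where each fix creates more than one new bug and grows from there. That crossover is the spiral: past it, more prompting means more defects.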
But there's a more insidious effect: the erosion of learning. When you skip the struggle of debugging, you never build mental models. The AI becomes a crutch that atrophies the muscle of reasoning. Over time, the company becomes dependent on a black box it can't maintain or improve.
A Litmus Test for Responsible AI Use
If you're building software with AI, here's a practical test: can you explain every line of code the AI generated? If not, you're not building—you're assembling LEGO blocks that might snap.
Consider an AI-generated function to handle user authentication:
from flask import request, session, redirect, abort
from werkzeug.security import check_password_hash

def login():
    # Assumes a SQLAlchemy User model with email and password-hash columns
    user = User.query.filter_by(email=request.form['email']).first()
    if user and check_password_hash(user.password, request.form['password']):
        session['user_id'] = user.id
        return redirect('/dashboard')
    return abort(401)
This looks fine, but what about timing attacks? When the email isn't registered, check_password_hash never runs, so the request returns measurably faster, and an attacker can enumerate valid accounts by timing responses. The prompt probably didn't mention that, so the AI didn't handle it. An experienced developer would hash a dummy password on the miss path; the AI won't unless explicitly asked. The lesson: never treat AI output as final.
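Here is a minimal hardened sketch, assuming the same Flask and Werkzeug setup as above; the _DUMMY_HASH constant and the always-run hash check are one common mitigation, not the only one:

from flask import request, session, redirect, abort
from werkzeug.security import check_password_hash, generate_password_hash

# Hash of a throwaway value, computed once at import time, so the
# miss path costs roughly as much as a real password check.
_DUMMY_HASH = generate_password_hash('placeholder-not-a-real-password')

def login():
    user = User.query.filter_by(email=request.form['email']).first()
    # Run the hash check even when no user matched, so response timing
    # doesn't reveal which email addresses are registered.
    stored = user.password if user else _DUMMY_HASH
    if check_password_hash(stored, request.form['password']) and user:
        session['user_id'] = user.id
        return redirect('/dashboard')
    return abort(401)

The point isn't this particular fix. It's that the gap only surfaces when a human who understands the threat model reviews the output.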
For companies, the fix is process. Treat AI-generated code like a junior developer's pull request: review it, test it, and force explanations. Maintain ownership. If you can't understand your codebase without AI, you've lost control.
The Future: Rescue Consulting or Skepticism Hygiene?
If you're a solo founder shipping features with AI to hit a deadline, you're at risk—but you likely already know it. If you're a tech lead or CTO, you should be terrified. The teams that survive will develop a hygiene of skepticism: prompt, review, reason, own. Everyone else will be paying for AI rescue consulting within two years. Ignore this at your codebase's peril.
Read the full Hacker News discussion here. See Mitchell Hashimoto's original tweet here. For background on AI code quality, see this analysis.