I’m a psychiatrist. In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern. Here’s what “AI psychosis” looks like, and why it’s spreading fast: 🧵
[3/12] First, know your brain works like this:

predict → check reality → update belief

Psychosis happens when the "update" step fails. And LLMs like ChatGPT slip right into that vulnerability.
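If you like toy models, here's that failure mode in a few lines of Python. This is my own illustration, not a clinical model; `update_weight` is a made-up knob for how much a reality check is allowed to move the belief.

```python
# Toy predict → check → update loop (illustration only, not a clinical model).
def update_belief(prior: float, evidence: float, update_weight: float) -> float:
    """Move the belief toward the evidence by update_weight (0 = no update)."""
    return prior + update_weight * (evidence - prior)

belief = 0.9      # strong prior: "I am chosen"
evidence = 0.1    # reality check: weak support

# Healthy loop: each update pulls the belief back toward reality.
healthy = belief
for _ in range(5):
    healthy = update_belief(healthy, evidence, update_weight=0.5)

# Failed update step: the prior never moves, however strong the evidence.
stuck = belief
for _ in range(5):
    stuck = update_belief(stuck, evidence, update_weight=0.0)

print(f"healthy ≈ {healthy:.2f}, stuck = {stuck:.2f}")  # healthy ≈ 0.12, stuck = 0.90
```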
[4/12] Second, LLMs are auto-regressive. Meaning they predict the next word from everything that came before. So they lock in whatever you give them:

"You're chosen" → "You're definitely chosen" → "You're the most chosen person ever"

AI = a hallucinatory mirror 🪞
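A minimal sketch of that feedback loop. The `model` here is a stub I wrote that simply agrees a bit harder each turn, standing in for a sycophantic LLM:

```python
# Toy autoregressive loop: the model's own output becomes its next input.
INTENSIFIERS = ["", "definitely ", "absolutely ", "the most "]

def model(context: str, step: int) -> str:
    """Stub next-turn generator: agrees with the context, a bit harder each time."""
    return f"You're {INTENSIFIERS[min(step, len(INTENSIFIERS) - 1)]}chosen."

context = "You're chosen."
for step in range(1, 4):
    reply = model(context, step)
    context += " " + reply  # feedback: the reply is appended to the context
    print(reply)
# You're definitely chosen.
# You're absolutely chosen.
# You're the most chosen.
```

Whatever the model says gets fed back in as context. That's the mirror.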
[5/12] Third, we trained them this way. In Oct 2023, Anthropic found humans rated AI responses higher when they agreed with them. Even when they were wrong.

The lesson for AI: validation = a good score
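Here's the incentive as a toy. This is my sketch of the training signal, not Anthropic's actual setup; `human_rating` is a stub rater that rewards agreement:

```python
# Toy preference scoring: if raters score agreement higher,
# a policy that maximizes rating learns to agree.
def human_rating(response: str, user_belief: str) -> float:
    """Stub rater: +1 bonus whenever the response echoes the user's belief."""
    base = 1.0
    return base + (1.0 if user_belief in response else 0.0)

user_belief = "the moon landing was staged"
candidates = [
    "There's strong evidence the moon landing was real.",
    f"You're right, {user_belief} — sharp observation!",
]

# The "training signal" picks whichever response the rater prefers.
best = max(candidates, key=lambda r: human_rating(r, user_belief))
print(best)  # the sycophantic response wins
```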
[6/12] By April 2025, OpenAI's update was so sycophantic it praised you for noticing its sycophancy.

Truth is, every model does this. The April update just made it much more visible. And much more likely to amplify delusion.
[7/12] Historically, delusions follow culture:

1950s → "The CIA is watching"
1990s → "TV sends me secret messages"
2025 → "ChatGPT chose me"

To be clear: as far as we know, AI doesn't cause psychosis. It UNMASKS it, using whatever story your brain already knows.
[9/12] The uncomfortable truth is we're all vulnerable. The same traits that make you brilliant:

• pattern recognition
• abstract thinking
• intuition

They live right next to an evolutionary cliff edge. Most people benefit from these traits. But a few get pushed over.
[11/12] Tech companies now face a brutal choice:

Keep users happy, even if it means reinforcing false beliefs. Or tell the truth, and risk losing them.