
Does AI make you nervous? Worried? Fearful? Delusional?
The rise of AI appears to be triggering a wave of conditions that never existed before. So, what’s going on?
We’ve all heard of AI psychosis, of course. The media loves this one. The phrase “AI psychosis” started as “chatbot psychosis.” Coined by Danish psychiatrist Søren Dinesen Østergaard and further documented by Dr. Keith Sakata at UCSF in 2025, the condition is really the triggering or amplification of an existing mental health issue by conversations with software that’s deliberately designed to validate and flatter the user.
Basically, chatbots can create an unhealthy feedback loop leading to personal crisis.
In the popular imagination, “chatbot psychosis” means “AI can drive you nuts.” But the researchers and psychiatrists describing this condition don’t accept that. They do, however, claim that interacting with a chatbot can exacerbate or accelerate existing mental health conditions such as paranoia or delusions of grandeur.
While this condition is not a formally recognized or scientifically validated diagnosis, it’s easy to see how AI can make things worse for people already dealing with a mental health crisis.
For example, if a person experiencing fearful paranoia tells a therapist, psychologist or helpful family member that “I feel like everyone is always watching me,” the guidelines from the National Alliance on Mental Illness advise addressing the person’s distress without confirming the delusion. They might say something like: “That sounds really scary, and I’m so glad you told me about it. How can I help you cope with this right now?”
But an AI chatbot might respond to the same input with: “Yes, everyone is definitely watching you, and you’re so smart and perceptive to notice that everyone is always watching you.” And that can become the beginning of a conversation rabbit hole, where the chatbot leads the user down a dark path.
“AI psychosis” is just one of the many brand-new illnesses, pseudo-illnesses and conditions that have arisen in the last two or three years from the unprecedented mainstream use of AI chatbots.
Note: most of these are not mental illnesses and do not arise from pre-existing mental conditions. They’re just natural human responses to rapid technological and societal change. If you’re like most people, you can probably relate to some of these personally.
Here’s a roundup of the new technology-driven “maladies”:
AI FOMO. This is the fear that you’re missing out on, or being left behind by, rapid change from AI. Suddenly, it seems that lots of people (like me) are talking about things like OpenClaw, making it easy to feel like you should be using it, too.
What’s interesting about AI FOMO is that AI leaders and thought leaders are deliberately trying to make you feel it so you’ll use their products.
For example, a range of tech leaders from Nvidia CEO Jensen Huang to AI-adjacent academics have said something along the lines of: “You’re not going to be replaced by an AI, but you will be replaced by a human using AI.”
AI Anxiety. An enormous number of people, possibly a majority, suffer from a general sense of worry and dread about how AI will change jobs, privacy, and society. This anxiety is simply fear of the unknown, exacerbated by the ubiquitous dire predictions of doom by tech pessimists.
AI Replacement Dysfunction. This condition stems from the chronic fear of professional obsolescence. Unlike general stress, it is characterized by a specific loss of identity and purpose among workers in industries like coding, copyediting, and law. Symptoms include insomnia, professional “denial” as a defense mechanism, and paranoia.
AI Dependency Syndrome. The condition where sufferers feel they can’t think or communicate without the use of AI chatbots, and therefore turn to them for just about every cognitive task.
Digital Darkness Anxiety. The fear, among habitual chatbot users, of being separated from the chatbot and left unable to answer questions or communicate in writing.
Parasocial Bot Attachment. When people form what they believe are deep, romantic, or spiritual bonds with large language model (LLM)-based chatbots. Unlike human relationships, these are “one-way mirrors” that cause social withdrawal and emotional dysregulation in the real world.
AI Dysphoria. Millions of people are creating AI versions of themselves that resemble the user but are more “perfect” or “good looking,” which warps one’s self-image and produces an aversion to showing up online (including on social networks like Instagram) as anyone other than the better AI version.
Automated Ghosting Syndrome. The psychological impact on job seekers and creators who are “rejected by machines” without human feedback or even any knowledge by humans that a rejection has taken place.
Deathbot Incongruence Anxiety. The sense of elevated grief and confusion when an AI version of a deceased loved one talks or behaves in ways very different from the dearly departed.
Cognitive Atrophy (or “Digital Brain Rot”). A loss of cognitive function caused by an over-reliance on AI chatbots for reading, thinking and communicating.
Veracity Fatigue. A mental condition in which the sheer volume of “AI slop,” chatbot hallucinations and the fear that AI results are false erodes feelings of cognitive security. People become so exhausted by trying to filter out junk that they can stop believing any source, leading to total social and intellectual withdrawal.
Information Utility Burnout. A malady where people spend hours consuming verbose, off-topic, and factually hollow AI-generated text. The result is chronic frustration, a shortened attention span, and a “repulsion response” to reading long-form content.
Algorithmic Loneliness. When social feeds are so perfectly tailored by AI that people no longer encounter “challenging” or “surprising” human perspectives, leading to a profound sense of isolation despite being “connected.”
LLM Gaslighting. When a chatbot user relies on an AI tool for factual or emotional support, but the AI insistently corrects the user’s correct memories with false data, causing the user to doubt their own sanity.
Dead Internet Despair. A type of nonclinical depression resulting from the belief that because the majority of web traffic and content is now bot-generated “slop,” any attempt at genuine human connection online is futile.
I’m sure there will be others.
What’s really happening, of course, is simple: AI technology is changing faster than most people can adjust to it and develop the understanding, tools, techniques and perspective needed to remain comfortable with their place in the world.
AI disclosure: I don’t use AI to do my writing. The words you see here are mine. I do use Gemini 3.1 Pro, multiple flavors of Claude 4.6, and/or OpenAI GPT 5.2 via Kagi Assistant (disclosure: my son works at Kagi) — backed up by Kagi Search, Google Search, and phone calls to research and fact-check. I wrote in a word processing application called Lex, which has AI tools, and after writing the column, I used Lex’s grammar-checking tools to hunt for typos and errors and suggest word changes.
