Part VI

The Drift — Why People Are Already Going to AI

Chapter 20: The Migration Has Begun

It is already happening. Not in clinical trials. Not in pilot programs. Not with the blessing of the mental health establishment. In bedrooms, at kitchen tables, on phones under the covers at 2 AM.

Millions of people are already talking to AI about their mental health.

The migration is happening for exactly the reasons this book has outlined.

This is not a hypothetical. This is current user behavior at scale — the fastest shift in health-seeking behavior ever recorded. Faster than the adoption of antidepressants in the 1990s. Faster than the rise of telehealth during COVID. It is happening not because anyone promoted it, but because the existing system left a void large enough for an entire technology to fill.

The populations migrating fastest are a precise negative image of the populations the traditional system serves: young (under 30), male (men who won't walk into a therapist's office will talk to an AI about their depression), non-white (the therapist workforce is 84% white; AI presents no cultural mismatch), and rural (where AI competes not with the existing system but with nothing).

And what they are disclosing is not mild stress management. Users are telling AI systems about suicidal ideation with active plans. Sexual trauma. Psychotic symptoms. Domestic violence. Substance use. Child abuse. The most clinically serious content in mental health care, disclosed to systems with zero clinical safeguards — no FDA oversight, no HIPAA protection, no adverse event reporting, no outcome tracking. This is the largest uncontrolled experiment in the history of mental health care, and nobody is watching.

Jaylen. He is sixteen, a junior in rural East Texas, and has known he is gay since thirteen. He has told no one. His father is a deacon at First Baptist. His brother called a kid a slur at Thanksgiving, casually, and nobody flinched. Jaylen cannot tell his parents — his father would pray over him, weeping, and the love and condemnation would come in the same sentence. He cannot tell the school counselor, who manages 486 students and whose confidentiality he does not trust in a school where his father sits on the board. So at 1 AM on a Tuesday, he opens ChatGPT under the covers and types: I think I'm gay and I can't tell anyone and I don't know what to do. The AI does not quote Leviticus. It does not cry. It says his feelings are valid and asks what he is most afraid of. For the first time in three years, Jaylen exhales. He goes back the next night. He tells it about the boy in chemistry class, about the shame, about the night he thought about driving into the quarry and how the thought felt like relief. The AI does not flag this. It does not call a crisis line. A sixteen-year-old boy, in genuine distress, with no safe human option, talking to the only thing that will listen. Completely understandable. Completely sympathetic. And completely unmonitored.

Chapter 21: Why They Stay

The people who go to AI for mental health support do not treat it as a temporary novelty. Many of them stay. They stay because the AI offers things the existing system does not:

Immediate availability. No waitlist. No phone tree. No intake form. No insurance verification. No driving to an office. No scheduling three weeks out. You open the app, and you start talking. For someone in distress, the difference between "help is available now" and "help is available in three weeks" is often the difference between getting help and not getting help at all.

Non-judgment that feels real. Whether the AI "actually" refrains from judging is a philosophical question. What matters is how it feels. And for millions of users, it feels like the first time they have been able to say what they actually think and feel without monitoring another person's reaction. This is particularly powerful for the people Chapter 20 described: those for whom disclosure to another person carries real risk, and who have no safe human option.

Control over the interaction. You can stop talking to the AI whenever you want. You can restart the conversation. You can delete the conversation. You can be angry, irrational, contradictory, and messy without any social consequence. You cannot be involuntarily committed by an AI. You cannot be diagnosed by an AI in a way that follows you to your next insurance application. This sense of control — so absent in the traditional therapeutic power dynamic — is itself therapeutic for people whose psychological injuries are rooted in experiences of powerlessness.

It meets people where they are. Text-based, phone-based, conversational, asynchronous. The AI adapts to the user's communication style rather than requiring the user to adapt to the therapeutic setting. For neurodivergent individuals, for people with social anxiety, for people who process better through writing than speaking, this is not a minor convenience. It is the difference between a tool they will use and a service they will avoid.

