When a Machine Fails, but a Human Knows: The Limits of AI in Psychiatry

MIA reports that a recent study tried, and failed, to predict schizophrenia and bipolar disorder using machine learning.

The model, which was wrong 90% of the time, relied heavily on clinical notes that already indicated the problem. It wasn’t true prediction; it was a poor attempt at pattern recognition. And yet, the researchers called their model “feasible.”

This failure isn’t surprising to me. In my own life, I’ve seen the stark difference between artificial intelligence’s inability to truly understand the nuances of mental illness and the keen insight of a skilled human clinician.

When I was a teenager, my psychiatrist predicted my schizophrenia long before it fully emerged. She didn’t have an algorithm or a dataset; she had experience, intuition, and a deep understanding of psychiatric conditions. She ran psychological tests, observed my symptoms, and ultimately made a call that was heartbreakingly accurate.

Unlike so many others in the system, she treated me as a person, not just a diagnosis. Years later, I met up with her for lunch. We talked about the mental health system and how things had unfolded for me, and I gave her a copy of my book. She was the only person in the system who remained truly human with me: no condescension, no clinical detachment, just normal human connection.

Compare that to the AI model, which flagged patients based on words like “voices” and “admission”—terms already noted by clinicians who had done the real work of observing and diagnosing. What’s predictive about that? If a patient is already hearing voices and being hospitalized, it doesn’t take an algorithm to suspect schizophrenia. And when a model is wrong 90% of the time, what use is it?
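
To make that circularity concrete, here is a minimal, hypothetical sketch (not the study's code, data, or model) of what keyword-driven "prediction" from clinical notes looks like. When the diagnostic language is already written into the input text, a classifier can look accurate while learning nothing about the future, a problem usually called label leakage.

```python
# Hypothetical illustration of label leakage; the notes and labels are invented,
# and this is not the code or data from the study discussed above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "clinical notes": the diagnostic language ("voices", "admission")
# was already charted by a clinician before the model ever runs.
notes = [
    "patient reports hearing voices, psychiatric admission recommended",
    "voices present on exam, admission arranged",
    "routine follow-up, sleep difficulties, advised regular exercise",
    "mild anxiety, no psychiatric concerns documented",
]
labels = [1, 1, 0, 0]  # 1 = later carries the diagnosis, 0 = does not

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(notes, labels)

# The "prediction" simply echoes keywords a clinician already wrote down,
# so good-looking scores here reflect leakage, not foresight about illness.
print(model.predict(["clinician documents voices and plans admission"]))  # expected: [1]
```

Any model evaluated this way will look better than it is; the honest test is whether it can flag risk from information recorded before anyone suspected the diagnosis.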

This raises a critical question: What does it mean when technology is prioritized over human expertise in mental health care?

We’re moving toward a world where AI is used to make critical decisions about people’s mental health, but what if the models are deeply flawed? What happens when algorithms make assumptions about who will develop schizophrenia, who will relapse, and who needs treatment—when they don’t understand context, human nuance, or the lived experience of mental illness?

Machine learning has its place, but it is not a replacement for human judgment. The best predictor of schizophrenia, in my experience, wasn’t an AI model—it was a psychiatrist who truly saw me.

As we push for mental health reform, we need to advocate for approaches that prioritize human connection over automation. No algorithm can replace the power of being truly seen, heard, and understood.

Mindful Living LCSW | 914 400 7566 | maxwellguttman@gmail.com

Max E. Guttman is the owner of Mindful Living LCSW, PLLC, a private mental health practice in Yonkers, New York.


Text, call, or chat with 988 to speak with the Suicide and Crisis Lifeline.

Help is available 24/7

Empowering Recovery: The Mental Health Affairs Blog

In a world filled with noise, where discussions on mental health are often either stigmatised or oversimplified, one blog has managed to carve out a space for authentic, in-depth conversations: Mental Health Affairs. Founded by Max E. Guttman, LCSW, the blog has become a sanctuary for those seeking understanding, clarity, and real talk about the complexities of mental health, both in personal experiences and in larger societal contexts.
