Viewpoint

Good News: Artificial Intelligence in Psychiatry Is Actually Neither

Discussions about artificial intelligence in health care have raised concerns about the dehumanization of healing relationships (1). Reliance on “big data” to inform treatment decisions might lead to ignoring experiences and values that cannot be reduced to discrete data elements. Computer-generated recommendations may carry a false authority that would override expert human judgment. Concerns regarding the disruptive effects of artificial intelligence on clinical practice, however, probably reflect marketing hype more than near-term clinical reality. When actual uses of big data and machine learning in mental health care are considered, the term artificial intelligence is usually a misnomer.

Some health care applications of big data and machine learning may represent true artificial intelligence: computerized algorithms processing machine-generated data to automatically deliver diagnoses or recommend treatments. For example, an autonomous artificial intelligence system for diagnosis of diabetic retinopathy was recently approved by the U.S. Food and Drug Administration (2).

Clinical applications of so-called artificial intelligence in psychiatry, however, generally depend on human-generated data to predict human experience or inform human action. For example, as investigators, we might use clinical records to identify young people at high risk of a first episode of psychosis. Or we might use data from pretreatment clinical assessments to predict patients’ subsequent improvement with specific depression treatments. Or we might use data from interactions of human patients and human therapists to select helpful responses for an automated therapy program. Each of these examples involves use of machine learning and large records databases to develop prediction models or decision support tools. In each case, however, both the input data and the predicted outcome reflect human experience. There may be complicated mathematics in the middle, but human beings are essential actors at both ends.

Consequently, what is intelligent is not actually artificial. When we use clinical data to predict clinical outcomes or inform clinical decisions, machine learning depends on the results of past assessments and decisions made by human clinicians and patients. In hindsight, some assessments may have been more accurate than others, and some decisions may have led to better outcomes than others. Machine learning can help select the clinical decisions leading to the best outcomes. Artificial intelligence or machine learning tools, however, neither conduct assessments nor make decisions. Nor do they understand why some assessments were more accurate or why some decisions led to better outcomes. Humans explore, decide, experience, and evaluate. Machines simply aggregate and efficiently manipulate the intelligence that humans have created or discovered. We can certainly learn more quickly from the aggregated experience of millions than from the individual observations of a few. But the intelligence we aggregate is still fundamentally human. To use the language of machine learning: humans determine the features, and humans label the outcomes. Machines just select the best-fitting statistical relationship between the two.
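To make that division of labor concrete, the short sketch below (in Python, with entirely synthetic placeholder data standing in for clinician- and patient-recorded features and outcomes; it is not any published model) shows how little the machine contributes on its own: given human-chosen features and human-labeled outcomes, it simply estimates the best-fitting relationship between the two.

```python
# A minimal sketch, not any published model: the "learning" is nothing more
# than fitting a statistical mapping between features that humans chose to
# record and outcomes that humans observed and labeled. All values below are
# random placeholders for clinician- and patient-generated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # human-chosen features
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # human-labeled outcomes

# The machine's entire contribution: select the best-fitting relationship.
model = LogisticRegression().fit(X, y)
print(model.coef_)  # weights summarizing the fitted relationship
```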

Furthermore, what is artificial is not actually intelligent. The oldest machine learning tools resemble the regression models many of us studied in statistics classes. While newer machine learning methods may appear more complex or intelligent, they typically involve repetition of very simple building blocks (3). For example, we begin the first tree in a random forest by selecting a random sample of the observations we hope to classify. To create the first branch, we select a random sample of possible predictors. We then sort our observations by each of those predictors to find the one predictor that classifies best. The first branch is then complete, and our sample is divided in two. We then repeat that sorting step for each of the two second-level branches, then for each of the four third-level branches, continuing until the branches get too small to divide any further. Using paper and pencil, a human could create each of those branches. But sorting by each of 100 predictors at each of 100 branches in each of 100 trees would add up to sorting each of the observations up to one million times. A human attempting to create a random forest model would require unlimited time, unlimited paper and pencils, and an unlimited tolerance for boredom. Fortunately, computers can do that repetitive work for us.
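To illustrate just how simple those building blocks are, here is a minimal sketch in Python of the procedure the paragraph above describes: draw a random sample of observations, draw a random sample of candidate predictors at each branch, sort to find the best single split, and repeat. It is not a production implementation; the data and the split criterion (accuracy of a single threshold under majority-vote labels) are illustrative assumptions.

```python
# A minimal sketch of the simple building blocks described above, not a
# production random forest; data and split criterion are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def best_split(X, y, candidate_features):
    """Sort the observations by each candidate predictor and keep the single
    threshold that classifies best."""
    best = (None, None, -1.0)  # (feature index, threshold, accuracy)
    for j in candidate_features:
        for t in np.unique(X[:, j])[:-1]:       # every possible cut point
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            # Accuracy if each side predicts its own majority class.
            correct = max(left.sum(), len(left) - left.sum()) + \
                      max(right.sum(), len(right) - right.sum())
            acc = correct / len(y)
            if acc > best[2]:
                best = (j, t, acc)
    return best

def grow_tree(X, y, n_candidates, min_leaf=5):
    """Repeat the same sorting step at every branch until the branches get
    too small to divide any further."""
    if len(y) < min_leaf or len(np.unique(y)) == 1:
        return {"predict": int(round(y.mean()))}        # leaf: majority class
    features = rng.choice(X.shape[1], n_candidates, replace=False)
    j, t, _ = best_split(X, y, features)
    if j is None:
        return {"predict": int(round(y.mean()))}
    mask = X[:, j] <= t
    return {"feature": j, "threshold": t,
            "left": grow_tree(X[mask], y[mask], n_candidates, min_leaf),
            "right": grow_tree(X[~mask], y[~mask], n_candidates, min_leaf)}

def grow_forest(X, y, n_trees=100, n_candidates=2):
    """Each tree starts from a random (bootstrap) sample of the observations."""
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))   # random sample with replacement
        trees.append(grow_tree(X[idx], y[idx], n_candidates))
    return trees

# Tiny illustrative data set: 200 observations, 5 hypothetical predictors.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
forest = grow_forest(X, y)
print(f"grew {len(forest)} trees by repeating the same sorting step")
```

Nothing in this procedure resembles understanding; the computer's only advantage is that it never tires of sorting.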

Ironically, machine learning methods may be useful precisely because they lack human intelligence. Machines begin with no preconceptions, so they may detect patterns that humans overlook or ignore when conventional wisdom is not supported by data.

Ideally, our field would abandon the term artificial intelligence in regard to actual diagnosis and treatment of mental health conditions. Using that term raises false hopes that machines will explain the mysteries of mental health and mental illness. It also raises false fears that all-knowing machines will displace human-centered mental health care. Big data and advanced statistical methods have yielded, and will continue to yield, useful tools for mental health care. But calling those tools artificially intelligent is neither necessary nor helpful.

Despite the buildup around artificial intelligence, we need not fear the imminent arrival of “The Singularity,” that science fiction scenario of artificially intelligent computers linking together and ruling over all humanity (4). For the foreseeable future, the most important data regarding mental health conditions will arise from human experience and be recorded by human patients and clinicians. While machine learning may manipulate those human-generated data to generate treatment recommendations, those recommendations would still be delivered to human patients and clinicians. A scenario of autonomous machines selecting and delivering mental health treatments without human supervision or intervention remains in the realm of science fiction.

Artificial intelligence as a term has marketing value, however, so it is unlikely to disappear. Buyers should therefore beware of exaggerated claims and unnecessary complexity. We can certainly point to examples of big data delivering useful predictions or advice to human clinicians and patients. But we cannot point to clear examples of more complex (and more opaque) statistical methods proving more useful than simpler (and more transparent) methods. In the example of models using health records data to identify people at risk of suicidal behavior, more complex modeling methods do not yield more accurate predictions, and simpler models facilitate practical implementation (5). For those who hope to profit from prediction models and other artificial intelligence tools, opacity and complexity are central to the business model. For customers who hope to implement those models, skepticism about opaque and proprietary statistical methods is definitely warranted.
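For readers who want to test such claims themselves, the sketch below shows the kind of head-to-head check worth running before adopting an opaque model: compare a simple, transparent method with a more complex one on the same prediction task. The data are synthetic placeholders, not the health records data analyzed in reference 5, and the two models (logistic regression versus a random forest from scikit-learn) are illustrative choices.

```python
# A hedged sketch of a head-to-head check: does the more complex method
# actually predict better than a simple, transparent one? The data below are
# synthetic placeholders, not the health-records data in reference 5.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                                     # stand-in predictors
y = ((X[:, 0] - X[:, 1] + rng.normal(size=1000)) > 0).astype(int)   # stand-in outcome

models = {
    "simple (logistic regression)": LogisticRegression(max_iter=1000),
    "complex (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```

If the opaque model cannot beat the transparent one on a fair cross-validated comparison, its added complexity buys nothing but difficulty of implementation.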

If the artificial intelligence label is here to stay, we might still try to change the images it calls to mind. Instead of imagining an omniscient and powerful calculating machine wringing humanity out of mental health care, we could imagine an amusingly practical robot vacuum. That robot vacuum does useful work. It never gets bored, and it eventually covers every spot. But it is neither all-knowing nor all-powerful. In our clinical work, so-called artificial intelligence can be a useful tool for well-defined jobs for which we humans have neither time nor patience.

Kaiser Permanente Washington Health Research Institute, Seattle (Simon); Kaiser Permanente Northwest Center for Health Research, Portland, Oregon (Yarborough).
Send correspondence to Dr. Simon.

This work was supported by cooperative agreement U19 MH092201 with the National Institute of Mental Health.

References

1 Verghese A, Shah NH, Harrington RA: What this computer needs is a physician: humanism and artificial intelligence. JAMA 2018; 319:19–20

2 Abràmoff MD, Lavin PT, Birch M, et al.: Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018; 1:39

3 Beam AL, Kohane IS: Big data and machine learning in health care. JAMA 2018; 319:1317–1318

4 Flight of the Conchords: The Humans Are Dead. New York, Big Deal Music, 2008. https://www.youtube.com/watch?v=0BcFHvEpP7A

5 Kessler RC, Hwang I, Hoffmire CA, et al.: Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res 2017; 26