Fifteen years ago, a paper in the Journal of Clinical Psychiatry reported a new technology that assessed suicide risk more sensitively and accurately than experienced clinicians did (1). Three years later, a paper in the American Journal of Psychiatry described an effective new treatment for depression that was virtually free of side effects and was dramatically less expensive than established methods (2). Both approaches harnessed the same underlying technology, and both were published in leading psychiatric journals. One might have expected that by now these discoveries would either be in widespread use or have been refuted. Neither has occurred. These discoveries remain unchallenged, little known, and virtually unused clinically.
Psychiatrists' training is steeped in science, and psychiatric journals are crowded with articles that attempt to quantify the significance of their findings. Science is supposed to shape clinical decisions. And psychiatrists do regularly, rapidly, and dramatically change their behavior under the banner of new research findings. When pharmaceutical companies release new medications, buttressed by a wave of published research studies, specialist lectures, continuing-medical-education compact discs, and personable marketing representatives, psychiatrists often respond energetically, writing hundreds of thousands of new prescriptions within a few years.
Unsurprisingly, given that this is the Clinical Computing column, the assessment and treatment systems I mentioned above are built on computer technology. They are part of a compelling body of literature reporting the efficacy of computers in clinical psychiatry. Among other findings, this literature claims that patients like giving their histories to computers (3), that adolescents are more honest with computers than with clinicians (4), that computer-based assessments can be comparable to the best rating instruments used in research (5), and that computer-assisted treatments are effective and well received (2,6,7). Scores of papers, none convincingly challenged.
Why have computer-based systems not swept into clinical use? Rogers (8), in his seminal Diffusion of Innovations, studied the elements that determine whether an innovation spreads throughout a group. He and other innovation researchers have examined a host of historical successful and unsuccessful innovations and tried to identify the factors that explain the variance. His examples range from the adoption of scurvy prophylaxis by the British Navy to the popular rejection of the Dvorak typewriter-keyboard layout.
Consistently, these researchers have found that one cannot anticipate the reception of an innovation without understanding the conceptual and social framework of the receiving populace. Rogers recounts the experience of Nelida, a health worker in rural Peru who was trying to staunch an epidemic of typhoid by getting local families to boil their drinking water. Nelida spent two years trying to convince 185 families; only 11 families adopted the practice. As Nelida found, in order to gain acceptance, it is not enough for an innovation simply to work better than the old approach. Emerson apparently oversimplified in his famous assertion, "build a better mousetrap, and the world will beat a path to your door."
Rogers's framework is helpful in understanding why, as with innovations in mousetrap design, computer-based advances in psychiatric treatment have been slow to catch on. He identifies 11 factors that he believes are the major determinants of the rate of acceptance of a new approach or method. Most of these factors are highly dependent on the social fabric of the community. The intrinsic advantages of the innovation, or at least the perceived advantages, count too. Typically, though, these perceptions are more influenced by the attitudes of neighbors than by an independent review of the evidence. Taken in aggregate, social forces consistently trump unvarnished effectiveness. Well-equipped advocates for an innovation, whom Rogers calls "change agents," can tip the balance substantially, but they must accurately target these same social pressure points in order to succeed.
Nelida failed because her approach was antithetical to the social and conceptual orientation of the villagers. The villagers eschewed boiling because of a deeply held traditional belief that foods are either hot or cold and that everyone but the sick should drink cold water. Those who violated this norm were viewed as strange and were shunned. Although Nelida repeatedly and insistently told villagers that germs cause disease and that boiling kills germs, this perspective was alien and ultimately less compelling than maintaining social standing in the village.
Although there are many differences between introducing water boiling to a rural Peruvian village and promoting the use of computer technology in contemporary psychiatry, there are parallels as well. The Peruvians never took seriously the evidence presented by Nelida, despite her bringing in a public health physician to give a series of lectures, largely because their preexisting beliefs were so established and so contrary to Nelida's message. Similarly, psychiatrists may be disinclined to freely consider the clinical use of computers, and this reluctance may be due to preexisting beliefs that blunt consideration of the evidence.
Impeding psychiatrists' open consideration of the evidence is a long-held tenet, in place at least since Freud, that the psychiatrist and the patient should share information through speech. Psychiatric outpatient treatment is built on a foundation of two people in a room, talking. The dominance of this image is manifest in many aspects of psychiatric practice. Its force stifles not only new advances but even the use of established skills. For example, physical examinations, which are central to every psychiatrist's training as a physician, are infrequently performed in outpatient psychiatry (9). Clinically, there would seem to be risks associated with this omission (10).
In the late 1970s and early 1980s, a minor flow of publications examined the rate of physical exams performed by psychiatrists on their outpatients. Rates were low—typically less than 10 percent (9,10). At that time, the rate of medication use in the treatment of depression was nearing 40 percent (11). Some of the papers judged the lack of exams to be a malpractice risk. They argued, sensibly, that because psychiatrists prescribe medications that cause a host of potentially serious side effects, including seizures, delirium, tardive dyskinesia, and drug-induced parkinsonism, physical examinations by psychiatrists should perhaps be standard care. By 1997, the use of medication in the treatment of depression had grown to almost 75 percent (11). Despite this increase, which one might have anticipated would have triggered a swelling debate about physical exams, the minor trickle of papers on physical exams essentially dried up. Although some might argue that a physical examination distorts transference or otherwise interferes with psychiatric treatment, the relatively passive acceptance of omitting the physical exam, especially for patients being treated with medications, seems more likely to reflect socialization than sound clinical judgment.
Unlike papers on physical exams, papers on the effectiveness of computers in psychiatry continue to be published, but they, too, have had little effect on clinical practice. Applying Rogers's model, there are multiple likely reasons for this inertia, including clinicians' dependence on reimbursement by managed care organizations— "collective innovation-decision"—and the efforts required by clinicians to adequately test the new systems—limited "triability." However, the lack of debate on the subject may reflect an especially deeply entrenched obstacle—the use of computers in evaluation and treatment might be alien to psychiatrists' self-image of their role.
Particularly germane to understanding psychiatry is what Rogers refers to as the nature of the social system. Some social systems, like the dot-com world before its bubble burst, raced to be different. Psychiatry, on the other hand, fosters a more conservative professional identity. Like Nelida, promoters of computers in psychiatry face skeptical natives. Changes affecting the interaction between the psychiatrist and the patient are particularly sensitive, and, data or not, such proposals have a tinge of blasphemy.
But, unlike Peruvian villagers, psychiatrists have an identity that rests on being scientific, empirical, and open-minded. Those conveying the successes of computer-based methods in psychiatry will never be able to use the proven methods of the pharmaceutical industry: acting as change agents who pervasively shape the educational and research apparatus of the profession. But as long as a whole body of unchallenged research describing safe and efficient methods of evaluation and treatment sits, largely ignored, in the middle of the published literature, there is an intellectual discontinuity in the field. Psychiatry should be challenged to apply its own scientific standards rigorously, and to practice accordingly. Incrementally, these voices will have influence, and the field will advance.
Dr. Freedman, who is editor of this column, is past-president of the American Association of Technology and Psychiatry and assistant clinical professor at the University of California, Los Angeles. Send correspondence to him at 235 Main Street, Apartment 218, Venice, California 90291.