My group began doing research on medication safety about ten years ago, after the Medical Practice Study demonstrated that medications were the leading cause of adverse events (1). Our first small study looked at a sample of medical, surgical, and obstetric patients (2). We found higher-than-expected rates of adverse drug events among medical and surgical patients and only a small number of such events among obstetric patients, who received relatively few drugs. To identify prevention strategies that could improve medication safety, we then undertook a much larger study, the Adverse Drug Event Prevention Study (3,4), which classified serious errors in a sample of medical and surgical patients at two large hospitals.
Our samples included few psychiatric patients, because one of the main hospitals we studied did not have an inpatient psychiatric unit and we wanted to be able to match by unit type. Later, to evaluate the generalizability of some of our previous findings and, in particular, to evaluate the costs of adverse drug events, we conducted another study at the University of New Mexico (5) in which we found overall rates of adverse drug events and costs that were fairly similar to those in our previous work, suggesting that the problem of adverse drug events appears to be widespread.
But as often happens in research, something striking emerged that we hadn't been looking for: the highest rate of adverse drug events by far was in the inpatient psychiatric unit, where many admissions were related to problems patients were having with their medications. On reflection, the high rate was not surprising: psychiatric care relies heavily on medications, which are relatively toxic, although highly beneficial overall.
We subsequently reviewed the literature and found relatively few studies of preventable adverse drug events in the area of psychiatry, and we found even fewer studies of medication errors. We then completed a small study of the frequency of medication errors in one psychiatric institution (6). We have recently begun a much larger inpatient study with support from the Agency for Healthcare Research and Quality.
Other areas of medicine, such as internal medicine, surgery, and the emergency services, are beginning to examine the safety of care more closely, and we believe that psychiatry should do the same. Because of the importance of medications and the early data described above, we believe that medication safety is a logical place for psychiatry to begin and that it is likely an important issue both inside and outside the hospital.
To be fair, because psychiatry has been a leader in pharmacology, patients with mental illness have far better outcomes and more hope than they did 30 years ago (7). For example, the new antipsychotics and serotonin reuptake inhibitors have represented especially important advances. However, many of these drugs also have important adverse effects. Moreover, many of these drugs are used in wide dosage ranges, which can make it difficult to identify in research data whether a given dosage is the one actually intended. Also, many of these drugs are used to treat elderly persons, children, or other groups of patients who may be unable to advocate for themselves. Recent studies have determined that psychiatric medications account for a major proportion of adverse drug events among elderly persons (8). Psychiatric patients, especially those in the outpatient setting, may be less adherent to medication regimens than other groups of patients, in part because of issues related to their illness.
All these factors suggest that it is time for psychiatry to aggressively address some of the important issues in medication safety—determining the frequency of medication errors and adverse drug events in a variety of settings and developing the best strategies to prevent them. Such strategies are likely to overlap with important strategies in other areas of medicine, but they will also have many features that are unique to psychiatry.
It is tempting to speculate about the reasons for psychiatry's late arrival on the medical error scene. The cases of medical error that come to public attention involve invasive procedures of the kind rarely performed by psychiatric practitioners. Many of the cases of medical error occur in hospital practice, which is less common in psychiatry than in other clinical specialties and has become almost a subspecialty in our field. Perhaps more important, our modal practice is often solitary—one patient, one physician. As a result, we have little access to the kind of aggregate data that are necessary to call attention to medical error as a serious, widespread phenomenon.
Furthermore, psychiatric practice is intensely private, in part because of the need for confidentiality. The practice of surgery and other procedure-oriented specialties is necessarily conducted in public, in the sight of other professionals, where errors and the "near misses" that are so crucial for error prevention cannot easily be overlooked or dismissed. In psychiatric practice, errors and near misses may not be identified by anyone but the practitioner. Such errors may lead to guilt and to the practitioner's resolve to do better, but the growing technology of error prevention seems largely beside the point or not readily applicable. Finally, of course, the essence of error prevention—focusing on the system, not the clinician—represents a new way of thinking about treatment processes, a way of thinking that is diametrically opposed to our customary mode of analysis of events.
The paper that ignited the current interest in medical error appeared in JAMA in 1994 (9). Its author, Lucian L. Leape, provided the first overall estimate of the extent of medical errors. He summarized information from other potentially harmful undertakings, such as commercial aviation and nuclear power generation, and discussed how findings from the psychology of human factors were being used to reduce errors in these fields. These error reduction techniques were based on a mind-set strikingly different from the one with which errors in medicine had been approached. Leape noted that the medical approach focused on who made the error so that that person could be punished as a preventive measure. Leape called this a "guilt culture." In contrast, the industries that have dramatically reduced error have achieved that result by concentrating on the system factors that made it possible for well-trained, well-intentioned practitioners to commit errors. Substituting a system approach for a "find the culprit" approach has paid off impressively in these fields. Leape asserted that a drastic culture change in medicine would be necessary before medical error could be substantially reduced.
Training in psychotherapy may inadvertently predispose psychiatrists to perpetuate the culture of guilt. We are taught to help our patients take responsibility for their actions. When patients report their problematic situations to us, our professional reflex is to seek to understand the role that they played in bringing the situation about. Although we try to mitigate guilt, our focus is clearly on "whodunnit." To emphasize the role of systems in their problems would tend to support rationalization and denial—defense mechanisms that are generally counterproductive. However, a nonpunitive approach to analyzing medication errors allows identification of system failures without implying that someone must have acted irresponsibly. For psychiatric practitioners to emphasize system factors in medical errors requires a drastic change of mind-set that may partly explain our failure to move this matter higher on the psychiatric policy agenda.
Although we do not perform surgery or many other dangerous procedures—and although we tend to practice in private, with only our own habits and experience to alert us to errors—psychiatrists should clearly be in the mainstream of concern about errors. A potent source of errors and adverse events is medication prescription, delivery, and use. In the Harvard Medical Practice Study, adverse drug events accounted for 19 percent of all injuries to patients (1). And we are prolific prescription writers. A very large proportion of medication prescriptions are for psychiatric medications, although by no means are all of them written by psychiatrists. In addition, more than a fourth of all hospital admissions are psychiatric hospitalizations (10).
Fortunately, the American Psychiatric Association (APA) has taken note of the issue of medical error and is actively seeking to enlist all of its members in a national program of patient safety and error prevention. After a task force issued a report on patient safety last year (11), an APA committee was formed to lead the charge. Its aim is to promote a culture of safety among APA members, particularly among residents and other trainees. The focus is on three main areas: reduction of medication errors, safe use of seclusion and restraint, and reduction of suicide in inpatient and residential settings. The means chosen include lectures and workshops at national APA meetings, articles in Psychiatric News and Psychiatric Services, and programs at district branches. Plans to foster research in patient safety in psychiatry are being considered.
Promoting the culture of safety is not easy, and it will take time. Feeling guilty about mistakes and trying harder to prevent further errors are so ingrained among medical practitioners and other clinicians that a focus on the analysis of systems factors will not happen overnight. It behooves all practitioners to educate themselves about the culture of safety, seek help in analyzing their own mistakes, and look widely to take advantage of best practices by others.
The purpose of this commentary is to highlight what is known about medication errors from the perspective of patients who are undergoing psychiatric treatment in hospitals and their families. Among patients generally, a small but growing group of people affected by medication and other errors are reporting their experience to fledgling patient support groups that have emerged in a number of states. These patient support groups are devoted exclusively to patient safety issues and focus on the patients and families affected. Their purpose is to support people in the aftermath of error, a period in which no support is provided by the formal health care system. Self-identification and networking among patients and their family members are helping to shed light on medical errors from the perspective of those who experience them firsthand.
Notably absent from this self-selected group of people who come forward are patients who have been hospitalized for psychiatric treatment and their families. Various factors likely determine whether injured or disabled patients or family members self-identify, and these may include understanding that an error occurred and having the ability to act on events that are understood as warranting significant redress. The degree to which patients with psychiatric conditions exhibit such characteristics will determine the likelihood that they or their families will come forward to report medication errors.
No systematic data are collected on patient and family reports of medication and other errors, because no mechanism exists to do so. However, documented anecdotal evidence can yield important insights into the types of errors that occur, the circumstances surrounding the error, the manner in which the error was disclosed or not disclosed, and actions taken, if any, to prevent subsequent errors from occurring. Such data also can highlight how astute patients and families prevent medication errors and how steps can be taken to engage patients and families in being vigilant about their own medication use. Without information about the types of errors that occur, it is more difficult to alert patients and families to how they might prevent some errors, although it is the responsibility of health care organizations to have systems in place to make medication errors and other types of errors as rare as possible.
Much better data are needed on the frequency and cause of medical errors generally, and medication errors in particular, in psychiatry as well as in other specialties. The national estimates of medical errors highlighted in the Institute of Medicine report To Err Is Human (12) were based on the best research available. That being said, the Harvard Medical Practice Study examined medical records of patients hospitalized in 1984 (1). The Utah and Colorado study was based on patients hospitalized in 1992 (13). No research is currently under way to update this information. In any other industry in which preventable deaths occur annually in such large numbers, reliance on information that is between ten and 20 years old would be highly unusual, which highlights the need for better data on medication and other errors in psychiatry and other specialties.
It is difficult to solve problems without knowing how often they occur. It is even more challenging to instill confidence in patients and the general public that errors are being addressed with the urgency they require. And, finally, without a baseline, it is impossible to know whether progress is being made to make health care in the United States safer for us all.
Assume for a moment this worst-case scenario: that the Institute of Medicine is correct in its estimate of the frequency, cost, and harm of adverse drug events; that between 28 and 56 percent of these events are preventable; and that more deaths occur annually from medication errors than from industrial accidents. Assume further that because psychiatric hospitalizations account for roughly a quarter of all hospitalizations, they account for the equivalent proportion of all adverse drug events. With these assumptions in mind, our attention would quickly shift from debating the accuracy of the data to detailing strategies to reduce harm.
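Under these assumptions, the implied magnitudes are simple to work out. The short sketch below is purely illustrative: the total of 100,000 annual adverse drug events is a hypothetical placeholder (not a figure from the Institute of Medicine report), while the one-quarter share of hospitalizations and the 28 to 56 percent preventable range are the figures stated in the scenario above.

```python
# Illustrative arithmetic for the worst-case scenario described above.
# TOTAL_ADE is a hypothetical placeholder, NOT a figure from the
# Institute of Medicine report; the fractions come from the scenario.

TOTAL_ADE = 100_000        # hypothetical annual adverse drug events (placeholder)
PSYCH_SHARE = 0.25         # psychiatric hospitalizations: roughly a quarter of all
PREVENTABLE_LOW = 0.28     # lower bound of the preventable range cited above
PREVENTABLE_HIGH = 0.56    # upper bound of the preventable range cited above

psych_ade = TOTAL_ADE * PSYCH_SHARE        # events attributed to psychiatric care
low = psych_ade * PREVENTABLE_LOW          # fewest preventable events implied
high = psych_ade * PREVENTABLE_HIGH        # most preventable events implied

print(f"Psychiatric adverse drug events: {psych_ade:,.0f}")
print(f"Of those, preventable: {low:,.0f} to {high:,.0f}")
```

Whatever the true total turns out to be, the point of the exercise holds: under these assumptions, a substantial and computable share of preventable harm would be attributable to psychiatric care.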
Once we accept the institute's conclusions, the strategies for harm reduction are a collection of overlapping proposals that spell out the conventional wisdom of what is now referred to as the patient-safety movement. These include replacing explanatory models of error that focus on individual shortcomings with a systems approach to error. Thus, instead of locating the source of error among individuals whom we "name, blame, and shame" for their shortcomings, we now locate errors in systems that failed to build in the checks, safeguards, and redundancies that protect against inevitable human failure. To accomplish this change, we need to promote a nonpunitive workplace culture that encourages reporting of dangerous conditions and identifies sources of latent failure in advance of adverse events. A workplace culture that is nonpunitive develops reporting systems for both near misses and adverse events. These reporting systems encourage organizational learning that then promotes the development of a culture of safety within the health care workplace.
As utopian visions go, this is not a bad one. But as a blueprint for change, it is extraordinarily hard to follow. First, take the concept of "system error." It is one thing to believe that errors are embedded in systems. But how does one define the boundaries of this system? How does one identify the relevant subunits and specify the relationships among them? What is the relationship, for example, of administrative units that establish rules and regulations at the "blunt" end of the system and workers who implement them at the "sharp" end? System error may well be a useful metaphor, as well as a corrective for a too individualistic view of how events occur. Its application, however, is far from straightforward, whatever benefits it promises.
It may well be the case that a medical system that emphasizes the individual responsibility of physicians discourages an open discussion of error. No doubt much is to be gained from reducing this dimension of the workplace culture of physicians. However, cultures change even more slowly than battleships turn. Furthermore, elements of a culture do not exist in splendid isolation from one another. In medicine much is said to depend on notions of individual responsibility that encourage physicians to work to the utmost for the patient's good, to act as the patient's fiduciary. Undoubtedly, inflated notions of individual responsibility degrade safety on occasion, but these same notions at other times may promote the extra effort and dedication that high-quality care requires.
This is not a plea to retain naming, blaming, and shaming. Rather, it is an invitation to think about how notions of individual responsibility function in the culture of medicine and to ask about what limits might exist to curbing the processes of naming, blaming, and shaming. It is also an invitation to appreciate the costs involved in our current practices for instilling a sense of professional responsibility and to think creatively about alternatives. And it is an invitation to think about mismatches created by changes in the organization of medical practice and by stasis in the organization of medical training. How to create a systems view of error without eroding an individual's sense of professional responsibility is a challenge that needs to be faced squarely by advocates for a culture of safety. Much of the push-back from practitioners who resist the formulas of a systems approach to error comes from how fundamentally the systems view conflicts with their idea and ideal of what it means to be a physician.
Safety advocates then need to be clearer that system changes that promote safety also create new vulnerabilities. To concentrate resources to defend against one problem is to expose oneself to another. An integrated power grid provides benefits that a nonnetworked system does not. However, as we all recently learned, it has different vulnerabilities. If the threshold for warning systems, redundancies, and safeguards is set too low, workers learn to ignore signs of trouble, as anyone who has had their smoke detector go off upstairs while they are frying food in the kitchen knows. If the threshold of warning systems is set too high, disaster sometimes arrives unannounced. When systems have cues for recognizing trouble, operators still need to recognize those cues and respond appropriately.
The inherent difficulty of responding to cues appropriately gives us reason to be skeptical about whether organizations will solve anytime soon the problem of how to design reporting systems so that safety lessons are learned. Consider the problem of reporting near misses. There is much to suggest that physicians have trouble seeing near misses. What isn't noticed can't be reported and won't be learned from. Certainly, in an organization that has the resource constraints and production pressures of a hospital, near misses are difficult to recognize and attend to. Moreover, the near misses that are seen are so obvious as to be trivial. Dangerous near misses are, as a rule, only appreciated as harbingers of disaster after disaster has materialized. Until then, they are weak or missed signals. They become clear only with the application of hindsight bias. Once we have the outcome firmly in hand, the causal links are much easier to trace backward. Working prospectively from the facts known at the time is more difficult.
These brief remarks are not meant as a counsel of despair when it comes to reducing adverse drug events in particular or improving patient safety in general. Rather, they are reminders that much thought is needed before a few well-intended policy directives are transformed into effective action. They are also a reminder that safety is not something achieved with a policy; rather, it is an aspirational goal—success creates new possibilities of failure and new challenges. A culture of safety is not a matter of implementing this or that policy; rather, it requires continual adjustment to new organizational realities, new technologies, and new treatments—all of which carry new risks and benefits and require new safety strategies.