To the Editor:

Congratulations are due to Psychiatric Services for its 2001 series highlighting the importance of evidence-based practices. The message from editors and authors of articles in the series is resoundingly clear: public policy and treatment practices must be guided by the evidence. Despite the many benefits of practices that are based on evidence, some difficulties in definitively pointing to "sufficient research support" have been noted. Consensus among experts is one proposed way to resolve muddled data about any specific practice.
For example, in the 1990s the Agency for Health Care Policy and Research (AHCPR) (1) used expert opinion to bolster claims of an evidence base. In fact, AHCPR's third level of evidence rested almost entirely on expert opinion. Practice guidelines that rely entirely on expert consensus have been developed and published (2). In 2001, the Substance Abuse and Mental Health Services Administration listed expert opinion as sufficient for identifying an individual practice as evidence based and therefore suitable for federal funding.
The assumption here is that—by virtue of their expertise—advocates, clinicians, and researchers are able to set aside biases and judge an individual practice objectively on the basis of the data. Unfortunately, research evidence does not support this assumption. Study after study has shown that the anecdotal base on which expert judgment rests yields conclusions that are no better than those of the naïve public (3).
In fact, concern about the harm caused by clinical judgment led to the special series in Psychiatric Services on evidence-based treatment. Although consumer advocates' input into the form and quality of services is essential for service implementation, there is no evidence that as experts these individuals are monolithic in voice or have opinions that are any more accurate than those of clinicians. Nor do researchers necessarily perform better as experts. Rosenthal's (4) classic series of studies shows that the expectancies of the experimenter can influence the most rigorous of research designs. Moreover, therapy allegiances have been shown to bias the analysis and reporting of research findings (5). Cognitive-behavioral researchers are likely to obtain experimental results favoring cognitive-behavioral approaches over psychodynamic ones, while psychodynamic researchers who undertake similar studies will obtain the opposite results. Interpretation of the evidence is not cut-and-dried.
The moral of these data is clear: beware of experts. No assertion that a practice is evidence based should rest solely on opinion. However, despite this concern, we are not yet ready or able to remove experts from the process. Policy makers and service administrators, in particular, must rely on authorities to identify the evidence-based practices that should be supported by public dollars.
Two actions will help policy makers assess expert opinion. First, "Show me the data!" Experts should be able to produce the studies that support a specific practice as being effective for a particular group. A consensus panel that promotes supported employment for persons with schizophrenia will be able to list far more studies that demonstrate its success than will a group that calls for inpatient psychoanalytic treatment. Second, mix up the backgrounds of members of consensus panels. The best way to avoid biases created by therapy allegiances is to develop groups of experts who represent the diversity of key therapeutic principles related to a specific population or problem. Any consensus that they achieve about an evidence base and corresponding practice is likely to escape the individual prejudice of therapy allegiances.
Dr. Corrigan is affiliated with the University of Chicago Center for Psychiatric Rehabilitation.