Published Online: https://doi.org/10.1176/ps.2009.60.5.698

Lack of adherence to clinical guidelines has been identified as a significant impediment to the implementation of evidence-based practices, to the effective dissemination of knowledge, and ultimately, to the improvement of quality of care ( 1 ). Studies have found that clinicians are particularly reluctant to recommend the treatment switches called for by clinical guidelines and to follow their dosage recommendations ( 2 ). A dramatic illustration of clinician nonconformance to a medication-switching algorithm appeared in a report published in this journal by Sernyak and associates ( 3 ). They developed a guideline, based on the Schizophrenia Patient Outcomes Research Team (PORT) recommendations ( 4 ), that targets treatment-resistant schizophrenia and disseminated it to 27 outpatient clinicians at a public mental health facility. Although the clinicians indicated at the outset of the study that they understood and concurred with the guideline, a subsequent review of their records showed that for 22 patients with treatment-resistant schizophrenia for whom the guideline recommended a medication switch, clinicians followed the guideline in only one case (5%).

Clinicians were asked retrospectively why they elected not to switch treatments, and their reasons focused on two areas: concerns about the patient's adherence to treatment after the switch and the expectation that little progress would result from making a switch. Both have been identified as crucial factors for achieving a good clinical outcome and facilitating recovery, and they have a significant influence on the decision to endorse evidence-based practices ( 5 , 6 ). However, because the study's design did not permit a systematic manipulation of these factors, it was not possible to determine their importance and to examine their relative contribution to physician decisions. The data presented here are part of a study funded by the National Institute of Mental Health on how to more fully incorporate clinical guidelines into treatment decisions ( 7 ). The focus of this brief report is on determining whether a systematic manipulation of expected progress and adherence to treatment would enable us to understand more fully why clinicians failed to endorse the Sernyak guideline. The findings of the study presented here may inform subsequent efforts to improve the implementation of evidence-based practices.

Methods

The Sernyak guideline consists of five sequential steps. Treatment begins with a first-generation antipsychotic. An insufficient response triggers a switch to another first-generation antipsychotic, then to a second-generation antipsychotic, then to another second-generation antipsychotic, and finally to clozapine. Treatment response is gauged by ratings on the severity and global improvement scales of the Clinical Global Impression Scale (CGI) ( 8 ) after a three-month trial at maximum tolerable dosage. A switch is indicated when the severity score is at least 4, indicating moderate illness or worse, and the global improvement score is at least 3, indicating minimal improvement at best.
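The switch criterion described above can be sketched as a simple predicate. This is an illustrative sketch only; the function name and structure are ours, not part of the published guideline:

```python
def switch_indicated(cgi_severity: int, cgi_global_improvement: int) -> bool:
    """Illustrative sketch of the Sernyak guideline's switch criterion.

    After a three-month trial at maximum tolerable dosage, a switch is
    indicated when the CGI severity score is at least 4 (moderately ill
    or worse) AND the CGI global improvement score is at least 3
    (minimal improvement at best). Both scales run from 1 to 7; higher
    severity scores are worse, and higher improvement scores indicate
    less improvement.
    """
    return cgi_severity >= 4 and cgi_global_improvement >= 3

# A moderately ill patient (severity 4) with minimal improvement (3)
# meets the criterion; a patient rated much improved (2) does not.
print(switch_indicated(4, 3))  # True
print(switch_indicated(4, 2))  # False
```

Under the guideline, this predicate is evaluated at each of the five steps; a positive result triggers the next switch in the sequence.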

Twenty-one volunteer psychiatric residents with experience in treating patients with schizophrenia participated in our study, which was conducted in 2007. It was deemed important to limit the sampling frame to residents, because of the considerable evidence that trainees and experienced clinicians invoke different decisional processes ( 9 ). Only third- or fourth-year residents and fifth-year fellows were recruited. Of the 21 participants, 11 (52%) were third-year residents, five (24%) were fourth-year residents, and five (24%) were fellows. The 14 men (67%) and seven women (33%) had a mean±SD age of 33.4±3.6 years. Fourteen residents (67%) listed their race as Caucasian, six (29%) as Asian, and one (5%) as "other." One Caucasian male (5%) identified himself as Hispanic.

The funding source, as well as the Yale School of Medicine's Human Investigation Committee and the Department of Veterans Affairs Connecticut Healthcare System's Human Subjects Subcommittee, approved the study on condition that recruitment be done passively, through advertisement and word of mouth, in order to minimize any concerns that residents might be expected to participate or might believe that their performance could affect their status in the training program. Demographic characteristics of the sample roughly corresponded to the population that composed the training program. Residents were paid $100 for performing the one-hour task. Participant-level factors had no significant effect on endorsement ratings and were not included in further data analysis.

The residents completed a stimulus task consisting of 64 case vignettes (with fillers) that were constructed from a fully balanced set of five variables. Expected progress and adherence to treatment were two of the study's independent variables. The other three were those in the Sernyak guideline: the CGI global improvement score, which constitutes a short-term progress assessment; the CGI severity score, which summarizes the patient's current condition; and the guideline step (the design used steps 2 and 4 of the five-step algorithm). Four random orders of the 64 vignettes were created and randomly assigned to the residents, who indicated whether or not they endorsed the guideline recommendation. The key comparison used a balanced 4 (expected progress) × 2 (adherence) design; the levels of these variables are described in Table 1. Generalized estimating equation (GEE) modeling was used to analyze guideline endorsement. GEE is the model of choice for analyzing correlated binomial data and has been used by clinical services researchers to investigate the role of adherence in clinical outcome ( 10 ).
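The fully balanced design can be sketched by crossing the factor levels. The expected-progress condition labels are taken from the text; the specific CGI and step levels shown are stand-ins for illustration, since the published vignettes are not reproduced here:

```python
from itertools import product

# Five fully crossed factors. The two levels shown for each CGI score
# are assumed placeholders; the guideline steps (2 and 4) and the four
# expected-progress conditions come from the design description.
expected_progress = ["low gain", "ineffective", "high risk", "high gain"]
adherence = ["low", "high"]
cgi_improvement = [3, 4]   # short-term progress (illustrative levels)
cgi_severity = [4, 5]      # current condition (illustrative levels)
guideline_step = [2, 4]    # steps of the five-step algorithm

vignettes = list(product(expected_progress, adherence,
                         cgi_improvement, cgi_severity, guideline_step))
print(len(vignettes))       # 4 x 2 x 2 x 2 x 2 = 64 vignettes
print(21 * len(vignettes))  # 21 residents x 64 vignettes = 1,344 decisions
```

Because every resident rated every cell of the design, the resulting binary endorsements are correlated within resident, which is what motivates the GEE analysis described above.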

Table 1 Estimated mean endorsement rates of a medication-switching algorithm for treatment-resistant schizophrenia used in a vignette study with 21 psychiatric residents

Results

By the guideline's criteria, all 64 case vignettes met the threshold for switching the currently prescribed treatment, yet participants recommended a switch in only 567 of the 1,344 responses (42% overall). On the basis of GEE analysis, the main effect of expected progress was significant (Wald χ2=147.6, df=3, p<.001), whereas the main effect of adherence was not. Although adherence did not in itself have a significant influence on clinician endorsement, it did play an important role, as indicated by a significant two-way interaction of expected progress with adherence (Wald χ2=10.6, df=4, p=.032). Table 1 illustrates the overall influence of expected progress and the moderating influence of adherence.

As shown in Table 1, estimated population mean endorsement values range from 5% in the low-gain condition (where a medication switch likely would not produce significant progress) to 83% in the high-gain condition (where a medication switch likely would produce significant progress). Adding the adherence variable expands the high end of the range slightly, to 87%. Table 1 shows that in the ineffective condition (where a medication switch would likely have little effect), the estimated population endorsement rate was actually higher when adherence was low, but this finding was not interpreted because the difference was not significant. (See Table 1 for a more thorough definition of the four expected-progress conditions.)

The study design permitted exploratory analysis of how the three guideline factors—guideline step, patient's short-term progress, and patient's condition—may have further moderated the influence of forecasted progress on clinician endorsement. Because all levels of the three design variables fell within Sernyak's guideline, there was no a priori reason to expect them to have a direct influence (that is, main effect) on endorsement.

Estimated population mean endorsements for each two-way interaction involving expected progress are available as an online data supplement at ps.psychiatryonline.org. The data supplement shows that all of the guideline variables interacted significantly with expected progress. Taking these factors into account, endorsement ranges from 1% (for guideline step 2 in the low-gain condition) to 90% (for moderately ill patients in the high-gain condition). Note that endorsements in the low-gain condition range from 1% to 15% and approximate the 5% rate found by Sernyak and associates.

Discussion

A fairly clear picture emerges about why Sernyak and colleagues obtained a concurrence rate of only 5% to their treatment guideline. The study presented here found that one overriding factor—expected progress—determined the fit between general guideline and specific case. Clinicians tended to endorse the guideline when the treatment recommendation was expected to have a positive effect; they were inclined to reject the guideline when the expected outcome was negative. Rather than endorsing or rejecting the guideline as a whole, clinicians made this determination on a case-by-case basis.

A curious and perhaps surprising finding concerns the role of patient adherence to medication. Participants in the Sernyak study identified it as a key factor. By contrast, the study presented here found that adherence functioned as a moderating influence whose importance was amplified or diminished by other factors, such as the patient's condition and treatment response. The significant interaction between adherence and expected progress, together with significant three-way interactions of guideline factors with adherence and expected progress (shown in the online data supplement at ps.psychiatryonline.org), suggests that the role of adherence may be difficult to evaluate because its effects are subtle and pervasive but not determinative. Although the influence of adherence on endorsement of clinical guidelines warrants further investigation, it seems reasonable to suggest that incorporating adherence into treatment decisions remains principally the task of clinicians, who are trained to exercise judgment in the application of general principles to specific cases.

Table 1 shows an endorsement rate of 35% for the ineffective condition (where a clinician would not expect to see progress from endorsing the guideline) and 43% for the high-risk condition (where endorsing the guideline would entail a substantial risk of negative outcomes). Reasons for these rates are not clear. Perhaps these conditions led participants to reject the guideline's recommendation at step 2 to switch to another first-generation antipsychotic and opt instead for a second-generation antipsychotic. Similarly, they may have preferred at step 4 to switch to clozapine therapy in lieu of the guideline's recommendation to switch to another second-generation treatment. Participants also may not have been satisfied with the Sernyak guideline's use of the CGI's severity and global improvement scales, insofar as the switching decision can turn on a mere 1-point difference on a single 7-point scale. Clinicians commonly consider a variety of other factors, such as specific symptoms, both positive and negative; patient self-regulation and perception of illness; factors that have an impact on functioning and quality of life; patient goals and preferences; and response at lower than maximum tolerable dosage. Several of these factors play a role in the PORT, American Psychiatric Association ( 11 ), and Texas Implementation of Medical Algorithms ( 12 ) guidelines, which also allow for partial responses, adjunctive treatments, and depot medications and combinations. In addition, they include the express qualification that guidelines should be tailored to the needs of individual patients.

The advantage of using simple assessment measures, such as the CGI, is that their role in decision making can be captured readily, but this advantage is realized only when clinicians believe they have sufficient information to make a treatment decision. Otherwise, factors outside the guideline will have a significant bearing on their decisional processes ( 13 ). One possible avenue to increasing endorsement when a patient's expected progress is neither low nor high is to draw on the expertise of clinicians at the outset of guideline development, to identify the factors that balance simplicity with comprehensiveness, and to establish clear and clinically meaningful criteria for determining whether the patient's response is partial or clearly inadequate.

Interpretation of the findings is limited in two important ways. The first concerns the sample. The overarching purpose of the funded project is to identify strategies designed to promote implementation of evidence-based practices, and the study targeted psychiatric trainees for two related reasons. First, studies of clinical decision making have documented that trainees and experienced clinicians contemplate their options, invoke decisional processes, and make judgments in substantially different ways ( 14 ). In addition, one of the functions of a training program is to develop clinical decision-making skills, and focusing specifically on residents facilitates the development of strategies that can be incorporated into psychiatric training. However, even if we assume that the relatively small sample of participants (N=21) is representative of their training program, they nonetheless were drawn from one residency program that may in some respects be atypical. It cannot be assumed that similar findings would be obtained if the study had been administered at other programs, at programs that focus on other psychiatric specialties, or with disorders other than treatment-resistant schizophrenia. Certainly, the findings should not be generalized to trainees from other disciplines, such as social work, psychology, or nursing. Although Sernyak and associates drew from one specific residency program, they also used attending physicians and public mental health practitioners not connected to a university. We cannot be sure that this group would have treated expected progress or patient adherence in the same way as the participants in this study.

The second principal limitation concerns the use of vignettes in lieu of actual cases or retrospective chart reviews. Whereas the latter constitute actual treatment decisions, vignettes can be likened to recommendations or preferences. Participants in this study had very limited clinical information, they had no investment in their decisions, there was no relationship between clinician and patient, and the decision makers were not accountable for the results. Consequently, we cannot be sure that the same decisional processes are invoked when these participants make actual treatment decisions and when they respond to case vignettes.

Conclusions

It can reasonably be concluded that despite an extremely low overall endorsement rate, clinicians did incorporate the Sernyak guideline into their treatment decisions. The challenge for future developers is to fashion guidelines that promote their use as decision aids and facilitate endorsement in a manner that addresses the needs of individual patients ( 15 ). The task is not easy. Endorsement is likely to improve when clinicians have information that is sufficient but not superfluous and when guidelines use criteria that are comprehensive and flexible yet promote clear and consistent application. Residency programs in particular are likely to have an interest in using guidelines that promote learning and incorporate evidence-based practices through sound decisional processes. Finally, a useful test of an effective guideline is how well it assists clinicians with marginal cases, where risks are high and prospects for improvement are relatively limited. It is in these cases that clinicians may be most reluctant to recommend a treatment switch, even though alternatives exist, such as clozapine, which remains arguably the most efficacious yet underprescribed therapy for treatment-resistant schizophrenia.

Acknowledgments and disclosures

This work was supported by grant R34-MH070871 to the lead author from the National Institute of Mental Health. The authors thank Robert M. Rohrbaugh, M.D., and Lee R. Beach, Ph.D., for their generous assistance in developing and implementing this study.

The authors report no competing interests.

Dr. Falzer is affiliated with the Clinical Epidemiology Research Center, Department of Veterans Affairs Connecticut Healthcare System, 950 Campbell Ave., Building 35A, Mailcode 151B, West Haven, CT 06516 (e-mail: [email protected]). Ms. Garman is with the Southwest Connecticut Mental Health System, State of Connecticut Department of Mental Health and Addiction Services, Bridgeport. Dr. Moore is with the Department of Psychiatry, Yale School of Medicine, New Haven, Connecticut.

References

1. Chilvers R, Harrison G, Sipos A, et al: Evidence into practice: application of psychological models of change in evidence-based implementation. British Journal of Psychiatry 181:99–101, 2002

2. Dickey B, Normand S-LT, Eisen S, et al: Associations between adherence to guidelines for antipsychotic dose and health status, side effects, and patient care experiences. Medical Care 44:827–834, 2006

3. Sernyak MJ, Dausey D, Desai R, et al: Prescribers' nonadherence to treatment guidelines for schizophrenia when prescribing neuroleptics. Psychiatric Services 54:246–248, 2003

4. Lehman AF, Steinwachs DM, Dixon LB, et al: Translating research into practice: the Schizophrenia Patient Outcomes Research Team (PORT) treatment recommendations. Schizophrenia Bulletin 24:1–10, 1998

5. Fenton WS, Blyler CR, Heinssen RK: Determinants of medication compliance in schizophrenia: empirical and clinical findings. Schizophrenia Bulletin 23:637–651, 1997

6. Pyne JM, McSweeney J, Kane HS, et al: Agreement between patients with schizophrenia and providers on factors of antipsychotic medication adherence. Psychiatric Services 57:1170–1178, 2006

7. Falzer PR, Moore BA, Garman DM: Incorporating clinical guidelines through clinician decision making: study protocol. Implementation Science 3:13, 2008

8. Guy W: ECDEU Assessment Manual for Psychopharmacology. DHEW pub no 76-338. Rockville, Md, US Department of Health, Education, and Welfare, Public Health Service, Alcohol, Drug Abuse, and Mental Health Administration, 1976

9. Patel VL, Arocha JF, Kaufman DR: Diagnostic reasoning and medical expertise, in The Psychology of Learning and Motivation: Advances in Research and Theory, Vol 31. Edited by Medin DL. New York, Academic Press, 1994

10. Ascher-Svanum H, Faries DE, Zhu B, et al: Medication adherence and long-term functional outcomes in the treatment of schizophrenia in usual care. Journal of Clinical Psychiatry 67:453–460, 2006

11. American Psychiatric Association: Practice guideline for the treatment of patients with schizophrenia. American Journal of Psychiatry 154(suppl 4):1–63, 1997

12. Texas Implementation of Medical Algorithms. Austin, Texas Department of State Health Services, 2007. Available at www.dshs.state.tx.us/mhprograms/TIMA.shtm

13. Fayek M, Flowers C, Signorelli D, et al: Psychopharmacology: underuse of evidence-based treatments in psychiatry. Psychiatric Services 54:1453–1456, 2003

14. Patel VL, Arocha JF, Kaufman DR: Expertise and tacit knowledge in medicine, in Tacit Knowledge in Professional Practice: Researcher and Practitioner Perspectives. Edited by Sternberg RJ, Horvath JA. Mahwah, NJ, Erlbaum, 1999

15. Maier T: Evidence-based psychiatry: understanding the limitations of a method. Journal of Evaluation in Clinical Practice 12:325–329, 2006