Abstract

Clinical practice is assumed to be informed and supported by evidence-based clinical research. Nonetheless, clinical practice often deviates from the research evidence base, sometimes leading and sometimes lagging. Two examples of integrated care in mental health (care for serious mental illness and collaborative mental health care in primary care settings) illustrate the natural space, and therefore tension, between evidence and implementation that needs to be better understood. Drawing on the tools and perspectives of both research and practice, the authors present a framework for a connected relationship between practice and research that is founded on measurement and uses iterative adaptation guided by oversight of, and feedback from, the stakeholders in this process.

Implementation of evidence-based practices (EBPs), which aims to integrate the best available evidence into clinical practice, has become one of the primary mandates for states and health care organizations in their efforts to improve quality of care. These efforts are supported by the development of practice guidelines and registries of EBPs (e.g., the Patient Outcomes Research Team recommendations and the Department of Defense/Veterans Affairs guidelines) (1,2). Unfortunately, the 17-year lag between research findings and the delivery of such EBPs in clinical care is practically legendary. Clozapine and the individual placement and support model of supported employment are two examples of interventions that have an extensive evidence base yet remain underused. In these instances, practice clearly lags behind the research. To accelerate implementation of EBPs, implementation science has developed a core set of methods, and although some of these methods have been successful, at times this work has revealed limitations of EBPs, and outcomes comparable to those observed in research have not been attained (3). Our experience suggests that the focus of implementation science has been largely unidirectional, either to implement a new practice or to “course-correct” when there is substantial drift in clinical practice, with a starting point of assuming that practice is “broken.”

Less recognized but perhaps equally important in improving quality of care are efforts to deliver services that lack a solid research foundation or that go beyond or deviate from the research evidence. New clinically based practices can result from careful innovation and creative efforts to tackle challenges in care delivery that are not adequately addressed by established EBPs. Peer and family support interventions, for example, arguably spread rapidly without an array of preexisting randomized controlled trials. Such trials were eventually conducted, creating a situation in which research initially lagged behind practice and then caught up.

We assert that efforts to implement EBPs built on scientific research or to implement practices without a fully developed evidence base could each either optimize or degrade care and outcomes. Whether the result is an unfortunate inevitability or a creative tension from which to learn depends on the extent to which these two forces in care, evidence-based science on the one hand and local adaptation and experience-driven innovation on the other, remain tethered and within reach of each other, each using the tools and perspectives of the other, built around a backbone of measurement and iterative adaptation guided by oversight from stakeholders, especially patients. What is called for is the expectation that clinical settings be designed as learning environments and that research be designed to accommodate flexibility and to be conducted in routine practice. This expectation requires a culture change for both clinical leadership and researchers.

Two case examples illustrate the tension in the implementation of evidence-based practice in behavioral health care. Both examples involve the use of integrated care (IC) to justify changing the delivery of mental health care. Case 1 is an example of applying a model of IC despite scant evidence that outcomes improve and without having a framework for measuring this adaptation. Case 2 is an example in which the EBP for IC is overgeneralized and implemented without fidelity to the research, again without measuring these changes. Both cases illustrate how applied clinical practice deviates from the evidence base and why understanding whether the deviation optimizes or degrades care is difficult.

Case 1: Integrated Care for Serious Mental Illness

People with serious mental illnesses have a greater burden of general medical illness and higher mortality than the general population (4,5). The challenge of providing adequate medical care to individuals with serious mental illness has prompted multiple innovative interventions, including embedding a primary care provider in mental health clinics in order to reach this hard-to-treat population (6). For example, one author oversees a state-funded assertive community treatment team that received a two-year grant to add an advanced practice nurse in order to improve access to primary medical care for its patients with serious mental illness. This staffing change occurred despite the lack of infrastructure to assess the impact of the addition and little prospect that the program would become self-sustaining. Innovations like these often have been implemented ahead of evidence for better health outcomes. There is some support for improvement of care processes with different IC models but almost no evidence that any IC model improves short-term or long-term health outcomes (7,8).

Are current forms of IC positive and progressive adaptations, or has the implementation of IC in this population moved too far beyond the evidence? Beyond process improvement, the value of IC could involve other important outcomes, such as stakeholder satisfaction, sustainability of revenue support, and consolidation of resources. However, without assessment of these variables, IC risks failing over time, whether because it is treated as a passing fad or because its results fall short of what was expected and desired.

In this case, implementation beyond the evidence should encourage us to consider the potential harm or cost of adaptation that strays too far from the evidence and should prompt us to examine current practices through formal evaluation. From a cost-benefit viewpoint, IC may be redirecting resources from other programs that are more definitively known to affect mortality and morbidity, such as programs that treat obesity or aid in smoking cessation or platforms that improve housing, education, and employment. Measuring the effect of these adaptations would not only build the evidence base for (or against) IC but also provide a set of checks and balances on research- and evidence-driven clinical care.

Case 2: Integrating Mental Health Care Into Primary Care Settings

For individuals with mild to moderate depression, a strong evidence base exists for integrating behavioral health care into primary care settings in order to improve access and outcomes (9–12). Such programs were developed as part of the chronic care model first described by Katon and Sullivan (9). The evidence base was developed by teaming care managers (master’s-level mental health professionals, psychologists, and nurses) with primary care providers under the supervision of psychiatrists. Key features of the model include the use of clinical information systems and measurement-based care to drive treatment planning, as well as an emphasis on brief, focused behavioral interventions such as problem-solving therapy or behavioral activation, development of self-efficacy and self-management, psychiatric supervision, and short-term engagement (2).

Some clinical practices have broadly adapted the original IC model for mild to moderate depression, applying the same principles to a wider array of mental health conditions, to patients with more severe illness, and to patients with more substantial psychosocial needs, including housing, employment, and case management. These adaptations have also involved reduced fidelity to key components, such as brief treatment, psychiatric supervision, and measurement-based care, and have contributed to the rise of colocated care focused on a smaller number of more severely ill patients. Such modifications are happening despite a lack of a priori evidence for them.

In this case, the consequences of reduced fidelity to the original IC model and of its expansion to other patient populations need to be assessed to determine the model’s effectiveness. Decreased fidelity may well lead to equivocal or even negative outcomes, whereas overgeneralization of IC may overextend resources toward managing complex cases at the expense of patients who are definitively known to benefit from IC. For example, the primary care practice of one author’s academic medical center has been screening all patients annually with the Patient Health Questionnaire–9 (PHQ-9) but chooses to refer only the patients with more complex illness, leaving those with less complex illness to be treated, perhaps inadequately. The social worker spends time triaging patients and does not systematically track outcomes or use measurement-based approaches, and the system has no mechanism to assess the success of these adaptations.
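To make concrete what systematic tracking could involve, the following is a minimal, hypothetical sketch (in Python) of a measurement-based care registry built around PHQ-9 scores. The record structure, the 50% improvement threshold, and the 10-week review window are illustrative assumptions, not components of the published collaborative care model.

# Minimal, hypothetical sketch of a measurement-based care registry, assuming
# PHQ-9 totals are recorded at each visit. All names and thresholds here are
# illustrative assumptions rather than elements of any published IC protocol.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class PatientRecord:
    patient_id: str
    # Each entry is a (visit_date, phq9_total) pair, appended in visit order.
    phq9_scores: list = field(default_factory=list)

    def add_score(self, visit_date: date, score: int) -> None:
        self.phq9_scores.append((visit_date, score))

    def needs_review(self, weeks_in_care: int = 10, min_improvement: float = 0.5) -> bool:
        """Flag a patient whose PHQ-9 has not improved by min_improvement
        (e.g., 50%) after at least weeks_in_care weeks of treatment."""
        if len(self.phq9_scores) < 2:
            return False
        first_date, first_score = self.phq9_scores[0]
        last_date, last_score = self.phq9_scores[-1]
        long_enough = (last_date - first_date).days >= weeks_in_care * 7
        improved = first_score > 0 and (first_score - last_score) / first_score >= min_improvement
        return long_enough and not improved


# Example: a patient screened at 18 who remains at 15 after 12 weeks would be
# flagged for psychiatric caseload review rather than continuing unmonitored.
record = PatientRecord("example-001")
record.add_score(date(2018, 1, 8), 18)
record.add_score(date(2018, 4, 2), 15)
print(record.needs_review())  # prints True

Even a simple registry of this kind would give the practice a mechanism for assessing its adaptations, because patients who are not improving become visible to the team rather than remaining untracked.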

Discussion and Conclusions

These two IC examples help illustrate the tension between evidence-based care and its application to clinical practice. Ideally, all clinical practice would be founded on adequate evidence and implemented with high fidelity, and the effects of adaptation and innovation would be studied and understood. IC would be implemented to function as a learning environment informed by patient-level outcomes and program evaluation. However, EBPs cannot keep pace with changes in health care systems and evolving knowledge, clinical trials cannot possibly study every nuance of clinical practice, and thus compromises are made. Further, the termination of the National Registry of Evidence-Based Programs and Practices underscores the ambiguity regarding the value of such registries. Overall, the required compromises must be better understood. Chambers and Norton (13) offered one framework based on several truisms: programmatic drift from an initial EBP is inevitable, adaptation can be positive or negative, evidence always evolves, and there is a bidirectional relationship between evidence and implementation/dissemination. This framework suggests that clinical practice should exist in a learning environment in which adaptation is recognized and embraced, that adaptation should inform the evidence base, that clinical practice should implement measurement-based care so that outcomes may be monitored, and that evidence and practice should be tethered at an undefined but appropriate distance.

Most important is a culture in which time is periodically taken to assess current practice and to determine whether, and in what ways, it deviates from EBPs (14). Meanwhile, research needs to evolve toward more pragmatic trials conducted in community practices, with frontline clinicians and broader patient populations.

The belief that implementation is a linear progression from intervention development to routine practice may be not only incorrect but also detrimental to desired outcomes. Further, implementation science is far too often invoked only at times of substantial separation between research and practice. There is a natural space, and therefore tension, between implementation and evidence that needs to be recognized and understood. Evidence and implementation orbit each other; sometimes evidence outstrips implementation, and sometimes the reverse is true. Negotiating the space between these two poles requires the creation of learning environments in which the methods created by implementation science become routine for both clinical practice and research design. Variability in clinical practice, which leads to adaptations of the evidence base to fit clinical realities, must always be taken into account. The unfounded belief in the absolute validity of a static “evidence base,” as opposed to an organic evolution of improvements and setbacks between evidence and practice, risks inappropriate allocation of resources and missed opportunities. Thus we must pay close attention to the deliberate movement from evidence to practice and acknowledge the importance of feedback in this process.

Dr. Oslin is with the Department of Psychiatry, University of Pennsylvania, Philadelphia. Dr. Dixon is with the Department of Psychiatry, Columbia University Medical Center, New York. Dr. Adler is with the Department of Psychiatry, Tufts Medical Center, Boston. Dr. Winston is with the Department of Psychiatry, University of Colorado School of Medicine, Aurora. Dr. Erlich is with the Department of Psychiatry, Columbia University, and the New York State Psychiatric Institute. Dr. Levine is with the Mental Illness Research, Education and Clinical Center, U.S. Department of Veterans Affairs, New York, and the James J. Peters Veterans Affairs Medical Center, Bronx, New York. Dr. Berlant is with Optum Health, Meridian, Idaho. Dr. Goldman is with Blue Cross–Blue Shield, Detroit. Dr. First is with the New York–Presbyterian Hospital, New York. Dr. Siris is with the Department of Psychiatry, Zucker-Hillside Hospital, Glen Oaks, New York.
Send correspondence to Dr. Oslin (e-mail: ).

The authors report no financial relationships with commercial interests.

References

1 Kreyenbuhl J, Buchanan RW, Dickerson FB, et al.: The Schizophrenia Patient Outcomes Research Team (PORT): updated treatment recommendations 2009. Schizophrenia Bulletin 36:94–103, 2010

2 Management of Major Depressive Disorder Working Group: VA/DoD Clinical Practice Guideline for the Management of Major Depressive Disorder. Washington, DC, US Department of Veterans Affairs, Department of Defense, 2016

3 Paulus MP: Evidence-based pragmatic psychiatry: a call to action. JAMA Psychiatry 74:1185–1186, 2017

4 Janssen EM, McGinty EE, Azrin ST, et al.: Review of the evidence: prevalence of medical conditions in the United States population with serious mental illness. General Hospital Psychiatry 37:199–222, 2015

5 Olfson M, Gerhard T, Huang C, et al.: Premature mortality among adults with schizophrenia in the United States. JAMA Psychiatry 72:1172–1181, 2015

6 Scharf DM, Eberhart NK, Hackbarth NS, et al.: Evaluation of the SAMHSA Primary and Behavioral Health Care Integration (PBHCI) grant program: final report. Rand Health Quarterly 4:6, 2014

7 Druss BG, von Esenwein SA, Glick GE, et al.: Randomized trial of an integrated behavioral health home: the Health Outcomes Management and Evaluation (HOME) study. American Journal of Psychiatry 174:246–255, 2017

8 Reilly S, Planner C, Gask L, et al.: Collaborative care approaches for people with severe mental illness. Cochrane Database of Systematic Reviews 11:CD009531, 2013

9 Katon W, Sullivan MD: Depression and chronic medical illness. Journal of Clinical Psychiatry 51(suppl):3–14, 1990

10 Hunkeler EM, Katon W, Tang L, et al.: Long term outcomes from the IMPACT randomised trial for depressed elderly patients in primary care. BMJ 332:259–263, 2006

11 Engel CC, Oxman T, Yamamoto C, et al.: RESPECT-Mil: feasibility of a systems-level collaborative care approach to depression and post-traumatic stress disorder in military primary care. Military Medicine 173:935–940, 2008

12 Oslin DW, Ross J, Sayers S, et al.: Screening, assessment, and management of depression in VA primary care clinics: the Behavioral Health Laboratory. Journal of General Internal Medicine 21:46–50, 2006

13 Chambers DA, Norton WE: The Adaptome: advancing the science of intervention adaptation. American Journal of Preventive Medicine 51(suppl 2):S124–S131, 2016

14 Valenstein M, Adler DA, Berlant J, et al.: Implementing standardized assessments in clinical care: now’s the time. Psychiatric Services 60:1372–1375, 2009