Taking Issue

Can We Bridge the Knowing-Doing Gap?

Published Online: https://doi.org/10.1176/appi.ps.631214

A search for the term “evidence-based” yields almost 1,700 citations in Psychiatric Services over the past ten years. Each issue contains numerous studies, sometimes highly focused, sometimes covering broad issues or large populations, each purporting to provide new evidence to expand our knowledge and augment our clinical practice. Many carefully enumerate the limitations of their findings. Most call for further research.

Alas, I fear that we are more involved and invested in collecting evidence than in using what we find.

As a professor and teacher, I work to ensure that residents and medical students understand scientific methodology and research-based approaches to clinical decision making. As psychiatrist-in-chief of a large university hospital, I receive countless memos stressing evidence-based decision making. As a reviewer for a number of journals, I focus on assessing the quality of the data collected, the methodology employed, and the analyses conducted. As a researcher, I struggle to ask the right questions and to explain why the answers are important and useful.

Sometimes I wonder why I bother.

I'd like to believe that practitioners are aware of the results of clinical trials, read articles comparing interventions, and make decisions reflecting their assessment of these findings. The sad truth is that I rarely find evidence to support this belief. Ask 100 psychiatrists whether a single antipsychotic medication is superior to all others in its efficacy for the treatment of schizophrenia, and they will all know the answer: “It’s clozapine.” Ask 100 clinic directors how many patients in their services are treated with clozapine and how many should be, and the apologies and explanations begin. Ask colleagues whether they read any psychopharmacology trials. Although many know the literature, the N=1 study (“I tried my patient on that medication and he did really, really well, so that’s what I use”) seems far too common.

Ask a group of residents what the greatest cause of treatment failure is, and all respond “nonadherence.” Yet most state that they received less than one hour (the mode is zero) of formal training on improving adherence. Question colleagues about the helpfulness of case management, the provision of housing for homeless clients, or the need for longer, more frequent contact with their patients. Everyone knows the evidence supporting these approaches. Most of us know what to do. We just don't do it.

What stands between what we know and what we do? Is it an underlying skepticism about research findings? Burnout? Laziness? Payment issues? Understanding our failure to translate evidence into practice is one of the burning issues in our field. How can we bridge the knowing-doing gap? Now that’s translational research I’d like to see done!

Department of Psychiatry and Behavioral Sciences, SUNY Downstate Medical Center, Brooklyn