Taking Issue

Fidelity, Adherence, and Robustness of Interventions

Published Online: https://doi.org/10.1176/appi.ps.52.4.413

A central question in mental health services research is how well an intervention of demonstrated efficacy works when applied under less-than-ideal conditions in practice settings. Most implementation studies have not systematically assessed important deviations from ideal protocols that commonly occur and the impacts of those deviations on outcomes. The vagaries of real-world practice settings make interventions vulnerable to variations. It is therefore important to address unavoidable departures from ideal conditions and to design robust interventions that will remain effective in practice settings. These concerns should be incorporated into early phases of studies used to design interventions.

To achieve this goal, we must identify the essential components of an efficacious intervention, assess how each contributes to outcomes, and eliminate any unnecessary components. We also must identify common deviations, assess how they affect outcomes, and make an effort to address those deviations.

There are at least two ways to address deviations. First, we should consider redesigning the intervention to eliminate deviations. Second, if important deviations cannot be eliminated, we should consider redesigning the intervention to improve its robustness against them. Consider, for example, an intervention that consists of six consecutive sessions that build on each other. When the intervention is implemented in practice settings, many patients might skip some sessions, which might result in poor outcomes. If a high level of adherence cannot be achieved, we might redesign the intervention to make the sessions less dependent on each other—for example, by including redundant material. We could then conduct an efficacy study of the redesigned intervention to assess the impact of nonadherence by experimentally varying the number of sessions attended and the level of dependence between sessions. We might find that the redesigned intervention is less beneficial than the original intervention of all six sessions but more beneficial than the original was when some sessions were skipped. Under this scenario, the redesign has improved the robustness of the intervention against nonadherence; it will also deliver better overall effectiveness if nonadherence is substantial.
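The trade-off described in this scenario can be made concrete with a small simulation. The sketch below uses an entirely hypothetical outcome model (the functional form, benefit values, dependence parameters, and skip probability are illustrative assumptions, not estimates from any study): each attended session contributes a benefit that is discounted when the immediately preceding session was skipped, with the discount governed by a "dependence" parameter.

```python
import random

def simulated_outcome(attended, dependence, max_benefit):
    """Hypothetical outcome model: each attended session contributes
    max_benefit, discounted by `dependence` whenever the immediately
    preceding session was skipped (illustrative assumption only)."""
    outcome = 0.0
    for i, present in enumerate(attended):
        if not present:
            continue
        if i > 0 and not attended[i - 1]:
            outcome += max_benefit * (1.0 - dependence)
        else:
            outcome += max_benefit
    return outcome

def mean_outcome(dependence, max_benefit, skip_prob, n=10_000, sessions=6):
    """Monte Carlo average outcome when each session is independently
    skipped with probability skip_prob."""
    rng = random.Random(0)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n):
        attended = [rng.random() >= skip_prob for _ in range(sessions)]
        total += simulated_outcome(attended, dependence, max_benefit)
    return total / n

# Original design: strongly linked sessions, higher per-session benefit.
# Redesign: redundant material lowers dependence but also peak benefit.
full_orig = simulated_outcome([True] * 6, dependence=0.8, max_benefit=1.0)
full_new  = simulated_outcome([True] * 6, dependence=0.3, max_benefit=0.9)
skip_orig = mean_outcome(dependence=0.8, max_benefit=1.0, skip_prob=0.3)
skip_new  = mean_outcome(dependence=0.3, max_benefit=0.9, skip_prob=0.3)
```

Under these assumed parameters the simulation reproduces the pattern in the text: with full adherence the redesign is somewhat less beneficial than the original, but under substantial nonadherence the redesign outperforms the original, because its outcomes degrade more gently when sessions are skipped.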

This commentary was motivated in part by recent work in quality engineering that addresses similar issues in designing consumer products. This work uses experimental and statistical methods such as robust parameter design to achieve good performance under representative user conditions. We believe that research on mental health interventions can benefit from such techniques. As a starting point, we recommend to interested readers the review article by Nair and colleagues that appeared in Technometrics in 1992 (pages 127-161).
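The core idea of robust parameter design can be sketched in a few lines. In a crossed-array experiment, each candidate design (a control-factor setting) is evaluated under every noise condition (here, levels of patient nonadherence), and the design chosen is the one that performs well on average without being fragile. The outcome values below are invented for illustration, and the mean-minus-spread selection rule is just one simple robustness criterion:

```python
import statistics

# Hypothetical results from a crossed-array experiment:
# rows = control-factor settings (candidate intervention designs),
# columns = noise conditions (increasing levels of nonadherence).
# All numbers are invented for illustration.
outcomes = {
    "high-dependence design": [6.0, 4.1, 2.3],  # best only under full adherence
    "redundant design":       [5.4, 4.6, 3.9],  # flatter across noise conditions
}

def robustness_summary(values):
    """Mean performance and variability across the noise conditions."""
    return statistics.mean(values), statistics.pstdev(values)

def pick_robust_design(table):
    """Select the design maximizing mean minus spread: a simple
    robustness criterion that rewards both level and stability."""
    def score(name):
        mean, spread = robustness_summary(table[name])
        return mean - spread
    return max(table, key=score)

chosen = pick_robust_design(outcomes)
```

With these illustrative numbers, the redundant design wins: its average outcome across noise conditions is slightly higher and its variability is much lower, which is precisely the kind of trade-off robust parameter design is meant to surface before an intervention reaches practice settings.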