
Commentary: Fidelity Measurement and the CSI

In their article in this issue, Falloon and colleagues (1) present an innovative approach to measuring implementation of evidence-based practices. The approach differs in five ways from common conceptions of fidelity assessment described in the literature.

First, fidelity scales have been developed for program- and practitioner-level implementation (2), but I had not previously seen a measure of client-specific implementation. Additionally, as conceptualized by Falloon and colleagues, high fidelity requires evidence that the client incorporates the intervention into real-life settings. Initially, I viewed the authors' assumption as conflicting with our operational definition, which states that fidelity should be confined to actions that are under the clinician's control (3). Upon reflection, I concluded that responsibility for attaining real-world implementation should be shared by the clinician and the client. For example, rather than attributing treatment dropout to a client's lack of motivation, our fidelity scales treat dropout as often reflecting a lack of assertive outreach.

Second, the authors have proposed negative anchor points reflecting detrimental behaviors on the part of practitioners, such as "unnecessary use of additional medications." This idea is an important advance in our thinking about implementation scales. A lack of fidelity is often reflected in a failure to intervene or in ineffective substitutes for evidence-based actions, such as providing office-based interventions when community-based interventions are far more powerful. However, a lack of fidelity can also occur when the interventions are actively detrimental, such as in the use of confrontation. Current fidelity scales have not adequately captured this dimension.

Third, most existing fidelity scales assess specific program models, but the Clinical Strategies Implementation Scale (CSI) focuses on the integration of all the evidence-based practices viewed as appropriate for a particular client. This integrative measurement strategy may circumvent the artificial boundaries that we confronted in developing the fidelity scales for the National Evidence-Based Practices Project (4), in which we found ourselves on the horns of a dilemma: whether to measure a program component for a particular practice when it was "covered" by another. For example, is behavioral tailoring a part of "medication management according to protocol," part of "illness management and recovery," or both?

Fourth, unlike program-level fidelity scales, the CSI is compatible with how practitioners think about treatment planning. This measurement strategy may be especially helpful in solving some of the problems that have cropped up in developing fidelity scales for more clinically complex evidence-based practices, such as motivational interviewing. An individualized measurement approach avoids shortcomings of program-level fidelity scales such as the Dartmouth ACT Fidelity Scale (5), which have been criticized as being too focused on structural elements to the neglect of clinical competence.

Fifth, the authors propose a weighting scheme for scoring their scale that incorporates the relative impact of different practices for different client groups. They state that "it is relatively simple to develop a set of weightings." Ideally, the weights would be continuously updated as new evidence emerged. This idea is elegant, although I am not as sanguine as Falloon and colleagues are about its routine implementation. More articulation of the method is needed.
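The authors do not spell out a scoring formula, but the idea of weighting practices by their relative impact for a client group can be illustrated as a simple weighted average of per-strategy ratings. The sketch below is purely hypothetical; the strategy names, the rating range, and the weights are my own assumptions, not taken from the CSI:

```python
# Hypothetical sketch of weighted fidelity scoring: each clinical strategy
# receives a rating (here assumed to run from -2, actively detrimental, to
# +2, fully implemented), and weights reflect the assumed relative impact
# of each practice for a given client group. Names and values are illustrative.

def weighted_fidelity_score(ratings, weights):
    """Return the weighted average of strategy ratings.

    ratings: dict mapping strategy name -> rating (e.g., -2 to +2).
    weights: dict mapping strategy name -> nonnegative impact weight.
    Only strategies present in `ratings` contribute to the score.
    """
    total_weight = sum(weights[s] for s in ratings)
    if total_weight == 0:
        raise ValueError("weights must not sum to zero")
    return sum(ratings[s] * weights[s] for s in ratings) / total_weight

# Example for one hypothetical client: a negative rating captures a
# detrimental practice, as the CSI's negative anchor points intend.
ratings = {"medication management": 2,
           "family psychoeducation": 1,
           "assertive outreach": -1}
weights = {"medication management": 3,
           "family psychoeducation": 2,
           "assertive outreach": 2}

score = weighted_fidelity_score(ratings, weights)
# (3*2 + 2*1 + 2*(-1)) / (3+2+2) = 6/7
print(round(score, 3))  # prints 0.857
```

Updating the weights as new evidence emerges would amount to revising the `weights` dictionary over time, which is where the practical difficulty the commentary notes would arise.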

Ultimately, I believe a threefold approach is essential for both research and clinical purposes: measuring program, practitioner, and client levels of fidelity to evidence-based practice. We need tools that are pragmatic and will therefore be adopted. This is a tall order, and the method presented is far from this ideal, because it requires multiple experts and intensive scrutiny of the interventions used with each client. Our rule of thumb is that fidelity assessment should require no more than a one-day site visit by two independent assessors (3). More work will be needed to transform this expert-intensive methodology into one that is feasible for routine use. It is a challenge well worth the effort.

Dr. Bond is affiliated with the department of psychology at Indiana University-Purdue University Indianapolis, 402 North Blackford Street, Indianapolis, Indiana 46202.

References

1. Falloon IRH, Economou M, Palli A, et al: The Clinical Strategies Implementation Scale to measure implementation of treatment in mental health services. Psychiatric Services 56:1584–1590, 2005

2. Fixsen DL, Naoom SF, Blase KA, et al: Implementation Research: A Synthesis of the Literature. Tampa, University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network, 2005

3. Bond GR, Williams J, Evans L, et al: Psychiatric Rehabilitation Fidelity Toolkit. Cambridge, Mass, Human Services Research Institute, 2000

4. Drake RE, Merrens MR, Lynde DW: Evidence-Based Mental Health Practice: A Textbook. New York, Norton, 2005

5. Teague GB, Bond GR, Drake RE: Program fidelity in assertive community treatment: development and use of a measure. American Journal of Orthopsychiatry 68:216–232, 1998