Letters

Self-Assessed Fidelity: Proceed With Caution

Published Online: https://doi.org/10.1176/appi.ps.640418

To the Editor: In the March issue, the study of fidelity ratings of assertive community treatment (ACT) programs by McGrew and colleagues (1) assessed agreement between ratings obtained from self-report and from phone interviews under conditions favoring high reliability. The investigators enlisted a volunteer sample of team leaders who were well versed in ACT fidelity to complete a detailed self-report protocol. Over the years, these teams had undergone multiple annual onsite fidelity reviews, and meeting ACT standards qualified them for generous Medicaid reimbursement rates. Knowing that researchers would be conducting follow-up interviews, respondents may have been especially scrupulous.

I doubt that findings from this one-time assessment provide realistic benchmarks for performance by ACT team leaders in routine practice. I am even more skeptical that self-assessment can serve as a practical substitute for site visits by independent assessors, except under extraordinary conditions. The teams required an average of about seven hours to complete the study’s self-assessment protocol (2), a substantial time commitment for busy team leaders. Compliance with a self-assessment protocol would depend on the specific circumstances; however, self-report procedures might devolve over time into hurriedly completed assessments (with missing data, as occurred in the study), especially when teams do not have access to technical assistance. If researchers must monitor and advise team leaders to ensure accurate assessments, I see no advantage of self-assessment over telephone-based assessment. Moreover, the authors’ suggestion to use self-report as a screen to decide whether programs need more stringent assessments seems to invite self-assessors to give their programs favorable ratings in order to avoid closer review.

My broader concern is the message that this study sends the mental health field. State mental health administrators and program leaders in underresourced service systems may overinterpret and misapply the findings, despite the authors’ caveats. Although the authors stress that self-reported fidelity is most appropriate for “stable, existing teams with good prior fidelity,” this study might be used as justification for wholesale adoption of self-assessment as an expedient alternative to independent fidelity reviews. The self-report approach might be extended to other evidence-based practices, even those with less precise fidelity scales. Most worrisome are self-report assessments conducted by practitioners, researchers, and others who have no direct experience with a model and who lack training in its fidelity scale. Unfortunately, misapplication of fidelity scales by unqualified users is already widespread. The research literature is filled with evaluations of purportedly “high-fidelity” programs that bear little resemblance to the original models. Inaccurate self-labeling of programs was widespread decades ago before dissemination of fidelity scales, and, unfortunately, this remains true today.

Like the authors, I endorse the use of self-assessment for quality improvement purposes. But self-assessment should be in addition to independent fidelity reviews, not a replacement. Self-monitoring of key fidelity indicators is invaluable in supervision, and this form of self-assessment should be completed frequently between independent fidelity reviews.

Finally, I note that the findings provide a foundation for a critical next step in fidelity measurement. By greatly reducing ambiguity in scale definitions, the study's detailed protocol suggests the feasibility of automated scoring of selected fidelity items from electronic records, thereby increasing accuracy, decreasing duplication of reporting, facilitating rapid access, and enhancing supervision.
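As a minimal sketch of what such automation might look like, the following Python fragment scores a single illustrative item, the small-caseload ratio, from a hypothetical electronic staffing export; the file name, column names, and 1-to-5 anchors are assumptions made for illustration, not elements of any published fidelity scale or of the study under discussion.

    # A minimal sketch, not drawn from the study: scoring one fidelity item
    # ("small caseload," the consumer-to-clinician ratio) from a hypothetical
    # electronic-record staffing export. The file name, CSV columns, and
    # scoring anchors below are illustrative assumptions.
    import csv

    def caseload_ratio(path: str) -> float:
        """Compute the consumer-to-clinician ratio from a staffing export."""
        consumers, clinicians = set(), set()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                consumers.add(row["consumer_id"])
                clinicians.add(row["clinician_id"])
        return len(consumers) / max(len(clinicians), 1)

    def score_small_caseload(ratio: float) -> int:
        """Map the ratio onto an illustrative 1-to-5 anchor scale."""
        if ratio <= 10:
            return 5
        if ratio <= 15:
            return 4
        if ratio <= 25:
            return 3
        if ratio <= 35:
            return 2
        return 1

    if __name__ == "__main__":
        ratio = caseload_ratio("act_staffing_export.csv")
        print(f"caseload ratio {ratio:.1f}, item score {score_small_caseload(ratio)}")

Even a computation this simple replaces a self-report judgment with a figure that can be recomputed whenever the underlying records change, which is precisely the gain in accuracy and rapid access noted above.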

Dr. Bond is affiliated with the Dartmouth Psychiatric Research Center, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire.
References

1 McGrew JH, White LM, Stull LG, et al.: A comparison of self-reported and phone-administered methods of ACT fidelity assessment: a pilot study in Indiana. Psychiatric Services 64:272–276, 2013

2 McGrew JH, Stull LG, Rollins AL, et al.: A comparison of phone-based and on-site assessment of fidelity for assertive community treatment in Indiana. Psychiatric Services 62:670–674, 2011