Published Online: https://doi.org/10.1176/appi.ps.640419

In Reply: We appreciate the opportunity to respond to Bond’s thoughtful commentary on our brief report. It is important to contextualize this discussion. The evidence-based practice “movement” is sweeping public mental health. Proponents largely agree that these practices should be widely disseminated, that fidelity assessment is needed to ensure high-quality implementation, and that assessments are most valid when conducted by independent assessors. But there is a problem. The “need” for independent assessment far outstrips the capacity to provide it. This problem is exacerbated by growth both in the number of interventions classified as evidence-based practices (now exceeding 100 [1]) and in the number of sites implementing them, and it is particularly acute for onsite assessment, which requires up to three days of assessor time. Thus there is a need to identify alternate, less burdensome, yet valid assessment methods, such as self-report.

Although we agree with Bond’s cautions concerning self-assessment of fidelity as usually conducted, we believe that these concerns apply chiefly to self-rated fidelity and that carefully collected self-reported data can serve as a valid sole data source for independent fidelity raters. Moreover, because all fidelity assessment methods use some self-reported data, the differences are a matter of degree. Our approach assumes that the chief source of self-report invalidity is subjectivity in defining items and the data needed to make ratings and that most people will report accurately when asked directly and clearly. To establish more objective procedures, we created a detailed protocol for gathering the data used to score scale items, piloting and revising it over several years. For example, instead of asking, “Do you provide 24-hour coverage?” we ask, “What percentage of clients in crisis directly talk to staff after hours?” Instead of asking, “Are you involved with 95% of admissions?” we ask, “Describe team involvement with the past ten admissions.” In addition, we use independent raters to score the self-reported data and do not permit self-scoring of items. We believe that self-presentation biases are most problematic when self-scoring is used. In our study, for example, self-reported assessment generally produced lower fidelity scores than phone-based assessment.

As detailed in our report, self-report can be reliable and valid when this approach is used. Moreover, in contrast to Bond’s generalizability concerns, preliminary results from an ongoing study support the validity of our self-report approach both for teams naïve to fidelity assessment and for those with moderate experience. We are also puzzled by Bond’s assertion that we endorse replacing onsite assessment with self-reported assessment; we do not. In fact, we proposed a stepped approach in which phone and self-reported assessment complement and supplement onsite assessment. Nevertheless, we agree that self-report should be reserved for evidence-based practices with well-articulated fidelity scales, that auditing procedures are needed to ensure accuracy, and that, to date, the advantage of self-reported over phone assessment appears minimal (2). We also agree that integrating self-report fidelity data into electronic records is a useful next step. However, the current state of the science is preliminary, and further research is needed to more carefully examine each of these important questions.

References

1 Chambless DL, Ollendick TH: Empirically supported psychological interventions: controversies and evidence. Annual Review of Psychology 52:685–716, 2001

2 McGrew JH, Stull LG, Rollins AL, et al.: A comparison of phone-based and on-site assessment of fidelity for assertive community treatment in Indiana. Psychiatric Services 62:670–674, 2011