Published Online: https://doi.org/10.1176/appi.ps.201700029

Abstract

Objective:

Use of expert-led workshops plus consultation has been established as an effective strategy for training community mental health (CMH) clinicians in evidence-based practices (EBPs). Because of high rates of staff turnover, this strategy inadequately addresses the need to maintain capacity to deliver EBPs. This study examined knowledge, competency, and retention outcomes of a two-phase model developed to build capacity for an EBP in CMH programs.

Methods:

In the first phase, an initial training cohort in each CMH program participated in in-person workshops followed by expert-led consultation (in-person, expert-led [IPEL] phase) (N=214 clinicians). After this cohort completed training, new staff members participated in Web-based training (in place of in-person workshops), followed by peer-led consultation with the initial cohort (Web-based, trained-peer [WBTP] phase) (N=148). Tests of noninferiority assessed whether WBTP was not inferior to IPEL at increasing clinician cognitive-behavioral therapy (CBT) competency, as measured by the Cognitive Therapy Rating Scale.

Results:

WBTP was not inferior to IPEL at developing clinician competency. Hierarchical linear models showed no significant differences in CBT knowledge acquisition between the two phases. Survival analyses indicated that WBTP trainees were less likely than IPEL trainees to complete training. In terms of time required from experts, WBTP required 8% of the resources of IPEL.

Conclusions:

After an initial investment to build in-house CBT expertise, CMH programs were able to use a WBTP model to broaden their own capacity for high-fidelity CBT. IPEL followed by WBTP offers an effective alternative to reliance on external experts for building EBP capacity in CMH programs.

Efforts to bring evidence-based practices (EBPs) to scale in large community mental health (CMH) systems require capacity building through sustainable resources and strategies for maintaining adequate expertise in the face of high turnover rates (1), role changes, and other barriers (2–4). In CMH systems, effective approaches for building capacity require strategies that are both financially feasible and time-efficient, given limited resources and high work demands. This study examined a two-phase capacity-building model that establishes in-house expertise and then builds on that expertise by using computer technology.

Typically, EBP expertise is built through a knowledge acquisition phase, followed by consultation with experts (5). Historically, knowledge acquisition has relied on costly and time-intensive in-person workshops (6); however, more recent efforts have shifted toward Web-based training. Although Web-based training offers a convenient, self-paced approach to learning, concerns have been raised about the retention of Web-based trainees. Whereas some studies report high retention (78%–100%) (7–10), many report significantly lower retention in Web-based training compared with in-person training (64% versus 95%) (11). These retention rates raise questions about whether Web-based training is an adequate resource for capacity building.

After the knowledge acquisition phase is complete, expert-led consultation has been identified as vital to develop EBP expertise and maintain behavior change (5,12–16). Furthermore, a greater number of hours spent in consultation is associated with increased adherence to EBP and sustained skills (17,18). Studies combining Web-based training with expert-led consultation have resulted in significant improvement in trainees’ skills (10,17,19,20). However, the high cost and limited availability of experts are barriers in lower-resource settings (21). Train-the-trainer models, which train a small number of staff to train others in an EBP, have also been used to maintain adequate EBP expertise (22). Although the train-the-trainer model is effective for enhancing trainees’ knowledge and skills (23,24), trainer turnover leaves programs vulnerable to loss of EBP expertise (25).

Utilizing the collective expertise of an initial trained cohort to train and support new clinicians within a program may present a better approach for sustaining expertise in an EBP, because the knowledge is distributed among many staff, making it more robust to staffing changes. This study examined the outcomes of a two-phase model used to build capacity for an EBP in a large CMH system in the context of an ongoing program evaluation project. In the first phase, cohorts of trainees participated in in-person training workshops, followed by weekly consultation led by experts (in-person, expert-led [IPEL] phase), which has been found to be effective (26). Once this phase was completed, a second phase was used to maintain and expand expertise in the EBP. The second phase included Web-based training followed by consultation led by peers trained in the IPEL model (Web-based, trained-peer [WBTP] phase). This study evaluated the effectiveness and clinician retention of this approach and examined whether WBTP was not inferior to IPEL in EBP knowledge acquisition, EBP competency, and clinician retention.

Methods

Setting and Participants

This study analyzed data from an archival, deidentified data set that was collected in the context of an ongoing program evaluation project, the Beck Community Initiative (BCI) (27,28). The University of Pennsylvania Institutional Review Board deemed the study to be exempt as authorized by 45 CFR 46.101, category 4. The BCI aims to improve the quality of care for persons in recovery by using implementation strategies to infuse cognitive-behavioral therapy (CBT) into CMH services. This data set consists of a subset of 362 clinicians from 29 programs, trained by the BCI from 2007 to 2015 using IPEL (N=214) and WBTP (N=148). Table 1 presents background information for clinicians. Table 2 presents information about the programs. The influence of type of program setting (outpatient versus nonoutpatient) on main outcomes of the BCI (for example, competence and retention) did not vary significantly (29).

TABLE 1. Characteristics of clinicians in two phases of training in the Beck Community Initiative

                                        IPELa (N=214)    WBTPb (N=148)
Characteristic                            N      %         N      %
Genderc
 Female                                  167     78       114     77
 Male                                     47     22        33     22
Years of clinical experienced
 0–3                                      67     31        54     36
 4–10                                     53     25        63     43
 11–20                                    25     12        18     12
 ≥21                                      15      7        13      9
Discipline
 Social work                              68     32        65     44
 Other                                   134     63        83     56
Theoretical orientation
 Cognitive-behavioral therapy             49     23        42     28
 Other                                   112     52        94     64
Baseline CTRS score (M±SD)e            21.7±8.04       20.58±6.74

aIn-person, expert-led phase

bWeb-based, trained-peer phase

cOne person in the WBTP phase did not specify gender.

dIPEL trainees had 2.5 more years of experience than WBPT trainees (t=2.70, df=290, p=.007). No significant differences were found between IPEL and WBTP for any other variable in the table.

ePossible scores on the Cognitive Therapy Rating Scale (CTRS) range from 0 to 66, with higher scores indicating greater competency.


TABLE 2. Number of types of programs in the Beck Community Initiative and number of clinicians in two phases of training, by program type

                                    Programs         Clinicians
Type                                 (N=29)    IPELa (N=214)  WBTPb (N=148)
General outpatient                     13            99             93
School-based program                    6            39             34
Substance abuse treatment               4            30              4
Residential treatment                   3            21             11
Assertive community treatment           2            20              6
Day program                             1             5              0

aIn-person, expert-led phase

bWeb-based, trained-peer phase


Procedures

IPEL phase.

IPEL began with a 22-hour in-person CBT workshop, followed by six months of weekly, two-hour group consultation, led by doctoral-level CBT experts. The average number of clinicians in each consultation group was seven (range six to eight). Consultation focused on applying CBT, including review of audio-recorded sessions. Initially, instructors led consultation meetings, and as the training progressed, group leadership shifted to group members. At the conclusion of the IPEL phase, the clinician group continued meeting as a peer-led consultation group (27,28).

Over the course of the IPEL phase (that is, workshop plus consultation), for each CMH program, each BCI instructor spent an average of 84 hours in training activities, including adaptation of training materials (10 hours), delivery of workshops (22 hours), and leading 26 two-hour weekly consultations (52 hours).

WBTP phase.

Once a program’s IPEL cohort of clinicians shifted to internal group consultation, the program transitioned to the WBTP phase, which was designed to broaden CBT capacity beyond the initial group and replace clinicians lost to turnover and to other staffing changes. The average number of WBTP clinicians per program was five (range none to 16).

The Web-based training content was based on the IPEL core training curriculum and contained the same information as the in-person workshops. Videotaped role-playing, on-screen activities, and quizzes were added to increase engagement. The Web-based training was self-paced and Internet accessible, and all content was available in English or Spanish. Clinicians were required to complete the Web-based training within six weeks, after which they joined their program’s ongoing consultation group. A BCI instructor visited internal consultation groups every six to eight weeks to provide support and to help resolve any barriers to sustained practice.

In the WBTP phase, over the course of a 7.5-month training period, the average BCI instructor time spent in training activities was 6.5 hours, which was 8% of the 84 hours required to complete training activities for the IPEL phase.

BCI participation requirements for both IPEL and WBTP.

Clinicians’ audio-recorded sessions were rated for CBT competency at three time points: prior to the consultation (baseline), middle of consultation (midconsultation), and end of consultation. Clinicians who completed the workshop (in person or Web based), attended at least 85% of consultations, submitted at least 15 recordings, and demonstrated CBT competency on their end-of-consultation audio submission (that is, a total score of ≥40 on the Cognitive Therapy Rating Scale [CTRS] [29]) received a certificate of competency. Clinicians who did not demonstrate competency on their end-of-consultation audio recording were allowed to submit additional recordings for evaluation of competency (referred to as the “competency assessment point”).

Measures

Competency.

The CTRS (29) is the observer-rated measure most frequently used to evaluate CBT competency (30). The 11 items assess general therapy skills (for example, interpersonal effectiveness) and CBT-specific skills (for example, focus on key cognitions). Each item is scored on a 7-point Likert scale (0, poor, to 6, excellent), with total scores ranging from 0 to 66. A cutoff total score of 40 or higher is used in clinical trials to represent competent delivery of CBT (31). The CTRS has demonstrated adequate internal consistency and interrater reliability (32). CTRS raters were doctoral-level CBT experts and demonstrated high interrater reliability for the CTRS total score (intraclass correlation coefficient=.84).
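As a concrete illustration, the scoring and cutoff logic described above can be sketched in a few lines. Only the 11-item structure, the 0–6 rating range, and the ≥40 competency cutoff come from the text; the sample ratings are hypothetical.

```python
# Sketch: scoring an 11-item CTRS rating (0-6 per item, total 0-66).
COMPETENCY_CUTOFF = 40  # total score of 40 or higher = competent CBT delivery


def ctrs_total(item_scores):
    """Sum the 11 item ratings, validating the 0-6 Likert range."""
    if len(item_scores) != 11:
        raise ValueError("the CTRS has 11 items")
    if any(not 0 <= s <= 6 for s in item_scores):
        raise ValueError("each item is rated 0 (poor) to 6 (excellent)")
    return sum(item_scores)


def is_competent(item_scores):
    return ctrs_total(item_scores) >= COMPETENCY_CUTOFF


ratings = [4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4]  # hypothetical session ratings
print(ctrs_total(ratings), is_competent(ratings))  # 41 True
```
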

CBT knowledge.

The CBT Knowledge Quiz is a 20-item multiple-choice test developed to assess knowledge of CBT principles and interventions. The quiz was administered before and after the workshop stage (that is, the in-person or Web-based training). CBT Knowledge Quiz scores were the percentage of correct answers on the quiz.

Analytic Plan

Statistical analyses were conducted with SPSS, version 22 (33), and HLM, version 7 (34).

Propensity score calculation.

Propensity scores were added as covariates in all models to control for nonrandom assignment to IPEL and WBTP (35). Propensity scores were calculated by using trainees’ baseline data to create a probability model that assigned a probability from 0 (assigned IPEL) to 1 (assigned WBTP) to each trainee. The baseline variables used in propensity score calculations were baseline CTRS scores, theoretical orientation, discipline (that is, social work or other), program, and years of experience (Table 1). These variables have been selected for propensity score calculations in a past publication on BCI data as potentially having influenced assignment to condition or the outcomes of interest (26).
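The text specifies the inputs to the propensity model but not its functional form. As a minimal sketch, assuming a plain logistic regression and using synthetic data in place of the study's baseline covariates, the calculation might look like:

```python
import numpy as np

# Sketch: estimating propensity scores (predicted probability of WBTP
# assignment) with a logistic regression fit by gradient descent.
# The covariates here are synthetic stand-ins; the study used baseline
# CTRS score, theoretical orientation, discipline, program, and years
# of experience.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # standardized baseline covariates
y = rng.integers(0, 2, size=200)   # 0 = assigned IPEL, 1 = assigned WBTP

Xb = np.hstack([np.ones((200, 1)), X])  # add an intercept column
w = np.zeros(Xb.shape[1])
for _ in range(2000):                    # full-batch gradient descent
    p = 1 / (1 + np.exp(-Xb @ w))        # current predicted P(WBTP)
    w -= 0.1 * Xb.T @ (p - y) / len(y)   # log-loss gradient step

propensity = 1 / (1 + np.exp(-Xb @ w))   # one score in (0, 1) per trainee
# These scores then enter the later hierarchical models as covariates
# to adjust for nonrandom assignment to phase.
```
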

Aim 1: comparing competence for IPEL and WBTP.

Tests of noninferiority were conducted to assess whether WBTP was not inferior to IPEL at increasing CTRS scores at the end of consultation and at the competency assessment point (36). Noninferiority tests are ideal when a new approach is not likely to offer greater improvement over a previously established approach but is more affordable and efficient than the model already established to be efficacious (37).

First, two separate three-level hierarchical linear models (HLMs) were conducted to create an adjusted model used in noninferiority analyses. In both models, assessment point (level 1) was nested within trainee (level 2), which was nested within program (level 3). The three assessment points entered at level 1 included baseline, midconsultation, and end of consultation (model 1) and baseline, midconsultation, and competency assessment point (model 2). Phase (IPEL or WBTP) and propensity scores were added at level 2.

Noninferiority was established if the difference in adjusted CTRS scores between the two approaches was smaller than a predetermined clinically meaningful difference (that is, delta). Delta was set at 4.5 CTRS points on the basis of a statistically significant and reliable change criterion for competence (38). If the lower limit of the 95% confidence interval (CI) for the difference (WBTP minus IPEL) remained within 4.5 CTRS points of zero (that is, above −4.5), then the WBTP phase would be considered noninferior to the IPEL phase.
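The decision rule can be sketched directly, under the interpretation that the 4.5-point margin applies to the lower bound of the CI for the WBTP-minus-IPEL difference:

```python
# Sketch of the noninferiority decision rule: WBTP is declared
# noninferior if the lower limit of the 95% CI for the adjusted
# score difference (WBTP minus IPEL) does not fall below -delta.
DELTA = 4.5  # prespecified clinically meaningful difference, in CTRS points


def noninferior(ci_lower, delta=DELTA):
    """True if the CI lower bound stays within delta points of zero."""
    return ci_lower > -delta


# CIs reported in the Results for midconsultation, end of consultation,
# and the competency assessment point:
for lo, hi in [(-1.43, 0.73), (-1.74, 1.06), (-1.89, 0.37)]:
    print(noninferior(lo))  # True for each reported interval
```
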

Two two-level hierarchical generalized linear models (HGLMs) were used to examine the influence of training phase on the probability that a clinician would reach competence by the end of consultation (model 1) and by the competency assessment point (model 2). Bernoulli-type models, in which every level 1 record corresponded to a clinician with a single binary outcome (1, reached competence, and 0, did not) were used. Phase and propensity scores were added at level 1. Clinicians were nested within programs (level 2). Variance components analyses determined differences across programs.

Aim 2: comparing knowledge acquisition and retention for IPEL and WBTP.

A two-level HLM with clinicians (level 1) nested within programs (level 2) was conducted to assess the influence of training phase on postworkshop CBT Knowledge Quiz score. Preworkshop CBT Knowledge Quiz score, propensity score, and training phase were added at level 1.

A two-level HGLM was used to examine the influence of training phase (IPEL or WBTP) on the probability that a clinician would complete the full training (workshop plus six months of consultation) with the same variables at levels 1 and 2 as specified above.

Next, each stage of training (that is, workshop and consultation) was examined separately. A Bernoulli-type HGLM was performed to determine the influence of training phase on the probability that a clinician would complete each stage. For all HGLMs, variance components analyses determined differences across programs.

To compare the length of time spent in the consultation stage for IPEL and WBTP trainees, a Cox regression model was performed. Clinicians who completed six months of consultation were right censored.
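The study fit a Cox regression; to show only how right censoring of six-month completers enters a survival comparison, here is an illustrative Kaplan-Meier estimator on hypothetical durations:

```python
# Illustrative Kaplan-Meier estimator for time (in days) to leaving
# consultation. Clinicians who completed six months of consultation
# are right censored (dropped=False). Data below are hypothetical;
# the study itself used a Cox regression model.
def kaplan_meier(durations, dropped):
    """Return (time, survival probability) pairs at each dropout time."""
    surv, pairs = 1.0, []
    event_times = sorted({t for t, d in zip(durations, dropped) if d})
    for t in event_times:
        at_risk = sum(1 for u in durations if u >= t)
        events = sum(1 for u, d in zip(durations, dropped) if d and u == t)
        surv *= 1 - events / at_risk
        pairs.append((t, surv))
    return pairs


durations = [30, 45, 60, 90, 120, 180, 180, 180]  # days in consultation
dropped = [True, True, False, True, True, False, False, False]
print(kaplan_meier(durations, dropped))
```
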

Results

Aim 1: Comparing Competence for IPEL and WBTP

CTRS scores.

Table 3 shows means and standard deviations of CTRS scores at each time point. Tests of noninferiority showed that WBTP was not inferior to IPEL at midconsultation (CI=−1.43 to .73), end of consultation (CI=−1.74 to 1.06), and competency assessment point (CI=−1.89 to .37); the lower limits of the CIs remained within the 4.5-point noninferiority margin.

TABLE 3. CTRS scores at baseline, end of consultation, and competency assessment point for clinicians in two phases of training in the Beck Community Initiativea

                                   IPELb (N=202)     WBTPc (N=148)
Assessment point                     M      SD         M      SD
Unadjusted model
 Baseline                          33.09   7.28      32.94   7.33
 End of consultation               38.52   9.38      39.35   7.76
 Competency assessment point       40.76   8.26      42.06   6.94
Adjusted modeld
 Baseline                          33.43   4.71      33.79   4.25
 End of consultation               38.33   5.61      38.67   5.33
 Competency assessment point       40.31   4.54      41.07   4.26

aPossible scores on the Cognitive Therapy Rating Scale (CTRS) range from 0 to 66, with higher scores indicating greater competency. A cutoff total score of 40 or higher is used in clinical trials to represent competent delivery of cognitive-behavioral therapy.

bIn-person, expert-led phase

cWeb-based, trained-peer phase

dAdjusted model represents the hierarchical linear models with propensity score as a covariate and training phase as a predictor.


Competence.

The number of clinicians reaching competence at the end of consultation was 117 (61%) of the 192 IPEL clinicians who reached this point and 51 (48%) of the 106 WBTP clinicians. The odds of reaching competence at the end of consultation were lower for a WBTP trainee than for an IPEL trainee (odds ratio [OR]=.83, CI=.44–1.59). This OR was not statistically significant, but it varied significantly across programs (χ2=32.03, df=18, p=.02).

The number of clinicians reaching competence at the competency assessment point was 150 (78%) of the 192 IPEL clinicians who reached this point and 73 (69%) of the 106 WBTP clinicians. The odds of a WBTP trainee reaching competence at the competency assessment point were greater than those for an IPEL trainee (OR=1.41, CI=.81–3.00). This OR was not statistically significant and did not vary significantly across programs.

Aim 2: Comparing Knowledge Acquisition and Retention for IPEL and WBTP

Knowledge acquisition.

Training approach did not significantly influence postworkshop CBT Knowledge Quiz scores. This relation did not vary significantly across programs.

Retention.

Retention in the full training program was lower for WBTP than for IPEL: 192 (90%) IPEL completers versus 106 (72%) WBTP completers. The odds of not completing the full training were 3.33 times greater for a WBTP trainee than for an IPEL trainee (CI=2.23–5.46). This OR was significant (p<.001) for trainee type (WBTP vs. IPEL). Additional analyses indicated that the OR did not vary significantly across programs.

In the knowledge acquisition stage, WBTP had lower retention than IPEL: 211 (99%) IPEL completers versus 138 (93%) WBTP completers. The odds of not completing the knowledge acquisition stage were 3.88 times greater for a WBTP trainee than for an IPEL trainee (CI=2.02–7.46). This OR was significant (p<.001) and did not vary significantly across programs.

In the consultation phase, the mean±SD number of days in consultation for IPEL clinicians was 91.58±53.97, compared with 62.84±49.06 for WBTP clinicians. Among trainees who entered the consultation phase, 192 (91%) of the 211 IPEL trainees and 106 (77%) of the 138 WBTP trainees completed six months of consultation. The likelihood of not completing the consultation phase was 2.63 times greater among WBTP clinicians compared with IPEL clinicians (B=–.968, SE=.296, Wald=10.71, df=1, p<.001; hazard ratio=2.63, CI=1.00–4.70) (Figure 1).

FIGURE 1.

FIGURE 1. Cumulative survival analysis indicating the proportion of clinicians who completed the consultation stage of two training phases, in daysa

aIPEL, in-person, expert-led phase; WBTP, Web-based, trained-peer phase

Discussion

This study demonstrated that Web-based training followed by consultation with trained peers produced clinician CBT competency that was not inferior to that achieved through in-person, expert-led workshops and consultation. In addition, the likelihood of reaching competence did not differ significantly between the two training phases. Consistent with previous research (7,10,11,17,39), knowledge acquisition through Web-based training did not differ from that achieved through an in-person workshop. Given the need for effective, sustainable, and efficient strategies for bringing EBPs to CMH systems, the noninferiority of the WBTP model, which required 8% of the resources of IPEL, may offer a viable alternative to more expensive expert-dependent models and potentially fragile train-the-trainer approaches (25).

Consistent with prior research, WBTP clinicians were more likely than IPEL clinicians not to complete training. Possible reasons for the lower completion rate in WBTP include differences in comfort, buy-in, motivation, and accountability. These aspects of training may have been higher for IPEL trainees, who completed the process with a cohort and received more frequent contact and feedback from BCI instructors, compared with WBTP trainees, who completed the Web-based training independently and then joined an already established group.

Several strategies could be used to address these differences between training phases. WBTP trainees could join the training with a cohort of trainees within or across programs (connected via message boards or electronic mailing lists, such as Listserv). In addition, strategies that provide internal support and incentives, such as setting performance expectations in which completion of the WBTP phase aligns with job responsibilities and providing pay increases for those certified in an EBP, may help address buy-in and accountability (40). Furthermore, certifying CBT supervisors could increase their ability to support WBTP trainees by providing more frequent contact and feedback, as in the IPEL phase. Future research is needed to examine and address barriers to retention.

Of note, the retention rate observed in this study was higher than rates reported in similar studies using Web-based training followed by expert-led consultations (7,9–11,41). Given that these studies typically included shorter consultation periods, expert-led consultations, and monetary incentives for participation, the retention rate observed for WBTP indicates the strength of the current model for retaining clinicians. Furthermore, lower retention may be a tenable tradeoff for the lower cost of WBTP compared with IPEL, given that WBTP required 8% of the training resources to achieve noninferior competency outcomes.

Beyond requiring less expert time, the WBTP strategy for building expertise may offer additional benefits. Web-based training can be completed during free periods in the workday, which may be more feasible for clinicians than in-person workshops. Also, the Web-based training content can be made available in multiple languages (it was available in Spanish in this project), allowing for engagement of more clinicians. Finally, the IPEL-WBTP model may be more robust to turnover, because CBT expertise is distributed among the entire previously trained cohort rather than among a few individuals (as in some train-the-trainer models).

These findings offer strong support for the use of WBTP to build capacity; however, several limitations should be considered when interpreting these findings. Randomization of clinicians and programs was not possible because data came from program evaluation of an ongoing initiative that uses this phased approach. Propensity scores were used to approximate randomization by balancing potentially confounding baseline variables that may have influenced results. Nevertheless, the variables selected may not have accounted for unmeasured between-group differences that influenced outcomes. In addition, use of a knowledge quiz developed specifically for the BCI made it difficult to compare knowledge acquisition achieved through this initiative to that achieved in other training programs. Other studies have also developed their own knowledge tests (7), suggesting that this may be common practice in the field; differences in the focus of each training initiative may necessitate tailored knowledge measures specific to the constructs deemed most important in a particular approach. Finally, BCI instructors were aware of the training approach when rating session recordings. High rates of interrater agreement, however, suggest that instructors rated tapes consistently across both WBTP and IPEL.

Additional research is needed to compare this approach of IPEL followed by WBTP directly to other EBP capacity-building approaches, such as train-the-trainer models (25). Furthermore, although estimates of expended instructor time were calculated, the focus of this study was not on cost-effectiveness. Future studies should examine the cost-effectiveness of different capacity-building approaches, including costs associated with developing Web-based training as well as productivity losses related to time spent in training.

Conclusions

This study, as part of an ongoing CBT implementation effort, provides data on an effective and efficient phased approach to building capacity for an EBP in a CMH system and informs efforts to implement and increase use of EBPs within these systems. Given the need for feasible and affordable approaches to create sustainable resources and maintain EBP expertise over time, it is important to continue to identify strategies to build capacity for EBPs in large mental health systems.

Dr. German, Dr. Adler, Ms. Pinedo, Dr. Beck, and Dr. Creed are with the Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia. Dr. Frankel is with the Columbia University Clinic for Anxiety and Related Disorders, Columbia University Medical Center, New York. Dr. Stirman is with the National Center for PTSD Dissemination and Training Division, U.S. Department of Veterans Affairs, Menlo Park, California, and with Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, California. Dr. Evans is with the American Psychological Association, Washington, D.C.
Send correspondence to Dr. German (e-mail: ).

The authors report no financial relationships with commercial interests.

References

1 Mor Barak ME, Nissly JA, Levin A: Antecedents to retention and turnover among child welfare, social work, and other human service employees: what can we learn from past research? A review and meta-analysis. Social Service Review 75:625–661, 2001CrossrefGoogle Scholar

2 Rabin BA, Brownson RC, Haire-Joshu D, et al.: A glossary for dissemination and implementation research in health. Journal of Public Health Management and Practice 14:117–123, 2008Crossref, MedlineGoogle Scholar

3 Swain K, Whitley R, McHugo GJ, et al.: The sustainability of evidence-based practices in routine mental health agencies. Community Mental Health Journal 46:119–129, 2010Crossref, MedlineGoogle Scholar

4 Johnson K, Hays C, Center H, et al.: Building capacity and sustainable prevention innovations: a sustainability planning model. Evaluation and Program Planning 27:135–149, 2004CrossrefGoogle Scholar

5 Edmunds JM, Beidas RS, Kendall PC: Dissemination and implementation of evidence-based practices: training and consultation as implementation strategies. Clinical Psychology: Science and Practice 20:152–165, 2013Crossref, MedlineGoogle Scholar

6 Khanna MS, Kendall PC: Bringing technology to training: Web-based therapist training to promote the development of competent cognitive-behavioral therapists. Cognitive and Behavioral Practice 22:291–301, 2015CrossrefGoogle Scholar

7 Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, et al.: We don’t train in vain: a dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology 73:106–115, 2005Crossref, MedlineGoogle Scholar

8 Rakovshik SG, McManus F, Westbrook D, et al.: Randomized trial comparing internet-based training in cognitive behavioural therapy: theory, assessment and formulation to delayed-training control. Behaviour Research and Therapy 51:231–239, 2013Crossref, MedlineGoogle Scholar

9 Kobak KA, Craske MG, Rose RD, et al.: Web-based therapist training on cognitive behavior therapy for anxiety disorders: a pilot study. Psychotherapy 50:235–247, 2013Crossref, MedlineGoogle Scholar

10 Stein BD, Celedonia KL, Swartz HA, et al.: Implementing a Web-based intervention to train community clinicians in an evidence-based psychotherapy: a pilot study. Psychiatric Services 66:988–991, 2015LinkGoogle Scholar

11 Dimeff LA, Harned MS, Woodcock EA, et al.: Investigating bang for your training buck: a randomized controlled trial comparing three methods of training clinicians in two core strategies of dialectical behavior therapy. Behavior Therapy 46:283–295, 2015Crossref, MedlineGoogle Scholar

12 Aarons GA, Hurlburt M, Horwitz SM: Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research 38:4–23, 2011Crossref, MedlineGoogle Scholar

13 Nadeem E, Gleacher A, Beidas RS: Consultation as an implementation strategy for evidence-based practices across multiple contexts: unpacking the black box. Administration and Policy in Mental Health and Mental Health Services Research 40:439–450, 2013Crossref, MedlineGoogle Scholar

14 Stirman SW, Crits-Christoph P, DeRubeis RJ: Achieving successful dissemination of empirically supported psychotherapies: a synthesis of dissemination theory. Clinical Psychology: Science and Practice 11:343–359, 2004CrossrefGoogle Scholar

15 Herschell AD, Kolko DJ, Baumann BL, et al.: The role of therapist training in the implementation of psychosocial treatments: a review and critique with recommendations. Clinical Psychology Review 30:448–466, 2010Crossref, MedlineGoogle Scholar

16 Rakovshik SG, McManus F: Establishing evidence-based training in cognitive behavioral therapy: a review of current empirical findings and theoretical guidance. Clinical Psychology Review 30:496–516, 2010Crossref, MedlineGoogle Scholar

17 Beidas RS, Edmunds JM, Marcus SC, et al.: Training and consultation to promote implementation of an empirically supported treatment: a randomized controlled trial. Psychiatric Services 63:660–665, 2012LinkGoogle Scholar

18 Schwalbe CS, Oh HY, Zweben A: Sustaining motivational interviewing: a meta-analysis of training studies. Addiction 109:1287–1294, 2014Crossref, MedlineGoogle Scholar

19 Ruzek JI, Rosen RC, Garvert DW, et al.: Online self-administered training on PTSD treatment providers in cognitive-behavioral intervention skills: results of a randomized controlled trial. Journal of Traumatic Stress 27:703–711, 2014Crossref, MedlineGoogle Scholar

20 Rakovshik SG, McManus F, Vazquez-Montes M, et al.: Is supervision necessary? examining the effects of Internet-based CBT training with and without supervision. Journal of Consulting and Clinical Psychology 84:191–199, 2016Crossref, MedlineGoogle Scholar

21 Chen JA, Olin CC, Stirman SW, et al.: The role of context in the implementation of trauma-focused treatments: effectiveness research and implementation in higher and lower income settings. Current Opinion in Psychology 14:61–66, 2017

22 Ruzek JI, Rosen RC: Disseminating evidence-based treatments for PTSD in organizational settings: a high priority focus area. Behaviour Research and Therapy 47:980–989, 2009

23 Karlin BE, Ruzek JI, Chard KM, et al.: Dissemination of evidence-based psychological treatments for posttraumatic stress disorder in the Veterans Health Administration. Journal of Traumatic Stress 23:663–673, 2010

24 Martino S, Ball SA, Nich C, et al.: Teaching community program clinicians motivational interviewing using expert and train-the-trainer strategies. Addiction 106:428–441, 2011

25 Herschell AD, Kolko DJ, Scudder AT, et al.: Protocol for a statewide randomized clinical trial to compare three training models for implementing an evidence-based practice. Implementation Science 10:133, 2015

26 Stirman SW, Pontoski K, Creed TA, et al.: A non-randomized comparison of strategies for consultation in a community-academic training program to implement an evidence-based psychotherapy. Administration and Policy in Mental Health and Mental Health Services Research 44:55–66, 2017

27 Creed TA, Stirman SW, Evans AC, et al.: A model for implementation of cognitive therapy in community mental health: the Beck Initiative. Behavior Therapist 37:56–64, 2014

28 Creed TA, German R, Frankel SA, et al.: Implementation of transdiagnostic cognitive therapy in diverse community settings: the Beck Community Initiative. Journal of Consulting and Clinical Psychology 84:1116–1126, 2016

29 Young JE, Beck AT: Cognitive Therapy Scale: Rating Manual. Philadelphia, University of Pennsylvania, Center for Cognitive Therapy, 1980

30 Beck JS: Cognitive Behavior Therapy: Basics and Beyond, 2nd ed. New York, Guilford, 2011

31 Shaw BF, Elkin I, Yamaguchi J, et al.: Therapist competence ratings in relation to clinical outcome in cognitive therapy of depression. Journal of Consulting and Clinical Psychology 67:837–846, 1999

32 Vallis TM, Shaw BF, Dobson KS: The Cognitive Therapy Scale: psychometric properties. Journal of Consulting and Clinical Psychology 54:381–385, 1986

33 IBM SPSS Statistics for Windows, Version 22.0. Armonk, NY, IBM Corp, 2013

34 Raudenbush SW, Bryk AS, Congdon R: HLM 7.01 for Windows [computer software]. Skokie, IL, Scientific Software International, 2013

35 Austin PC: An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research 46:399–424, 2011

36 Blackwelder WC: Current issues in clinical equivalence trials. Journal of Dental Research 83:C113–C115, 2004

37 Vavken P: Rationale for and methods of superiority, noninferiority, or equivalence designs in orthopaedic, controlled trials. Clinical Orthopaedics and Related Research 469:2645–2653, 2011

38 Branson A, Shafran R, Myles P: Investigating the relationship between competence and patient outcome with CBT. Behaviour Research and Therapy 68:19–26, 2015

39 Dimeff LA, Koerner K, Woodcock EA, et al.: Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behaviour Research and Therapy 47:921–930, 2009

40 Godley SH, Garner BR, Smith JE, et al.: A large-scale dissemination and implementation model for evidence-based treatment and continuing care. Clinical Psychology: Science and Practice 18:67–83, 2011

41 Weingardt KR, Cucciare MA, Bellotti C, et al.: Randomized trial comparing two models of Web-based training in cognitive behavioral therapy for substance abuse counselors. Journal of Substance Abuse Treatment 37:219–227, 2009