Reviews & Overviews

Using Technology to Train Clinicians in Evidence-Based Treatment: A Systematic Review

Published online: https://doi.org/10.1176/appi.ps.201900186

Abstract

Objective:

There is a critical shortage of clinicians trained in evidence-based treatments (EBTs). New technologies, such as Internet-based training, video conferences, and mobile applications, can increase accessibility to specialized training and enhance traditional face-to-face training. A systematic review was conducted to identify and summarize research on the use of technology to train clinicians in EBTs.

Methods:

An electronic database search of PsycINFO, PubMed, Medline, Web of Science, CINAHL, and the Cochrane Library was conducted in June 2018. Articles were independently coded and assessed for risk of bias by two reviewers using the National Heart, Lung, and Blood Institute’s Quality Assessment Tool for Controlled Intervention Studies.

Results:

Of the 7,767 citations initially identified, 24 articles met inclusion criteria. These articles described 21 training programs, including training for anxiety, depression, substance abuse, and eating disorder treatment. Most training programs were Internet based (N=19), and a majority of studies used a randomized controlled design (N=21). Most studies reported significant increases in clinician knowledge or skills, with small to large effect sizes. The methodological quality of studies ranged from good to poor. Many programs were limited by their use of completer analyses (i.e., only participants who completed the study were included in analyses) and self-report measures.

Conclusions:

Technology has great potential for increasing availability of training opportunities for clinicians and increasing the workforce trained in EBTs. Although technology-assisted training programs are not without limitations, overall they promise a new era of facilitative learning that promotes the adoption of new clinical practices in a dynamic and efficient manner.

HIGHLIGHTS

  • There has been an increase in the use of technology, such as the Internet, video conferencing, and social media, to train mental health clinicians in evidence-based treatments (EBTs) in order to fill the current gap in training.

  • Of the 24 studies identified in this review, only one received a quality rating of good, which highlights the challenges and limitations of research examining the use of technology to train mental health clinicians in EBTs.

  • Despite the limitations of the literature, overall results suggest that technology-based training can be just as effective as traditional didactic training in preparing clinicians in the use of EBTs.

There is a critical shortage of clinicians trained in evidence-based treatments (EBTs), and this shortage is a major public health concern because it limits patients’ access to effective mental health treatment (1, 2). Increasing clinician access to professional training on EBTs—a potential solution to the shortage—has been named a priority by the National Institute of Mental Health (2). Currently, the most common methods for training clinicians in EBTs remain workshops, therapy manuals, and live consultation or supervision (3).

Workshop-only methods are credited with increasing knowledge about EBTs, but they have been criticized for producing insignificant gains in attitudes, application of knowledge, and skills (4, 5). Manual-based training has been shown to be suboptimal compared with multicomponent training modalities (6, 7). Another common method of training clinicians in EBTs is a two-step process: the clinician first completes a specialist training workshop given by an expert and is then supervised, while providing the treatment, by someone experienced in delivering it. This approach has yielded better outcomes, such as increased adherence and competence (1, 5). However, because of its high cost (8) and a lack of people qualified both to conduct the workshops and to provide clinical supervision, this method is incapable of meeting the demand for training (9).

Fairburn and Cooper (10) highlighted the need for new forms of training that are more cost-effective and scalable, and technology may be well suited for this purpose. The Internet, video conferencing, mobile applications, and other technologies provide a rare opportunity to increase accessibility to specialized training and enhance traditional face-to-face training while reducing training cost. Over the past 2 decades, there has been increased use of technology to provide clinicians with training in EBTs, particularly the use of Web-based training methods (11).

Two reviews to date have examined the use of Web-based training methods for clinicians. Calder and colleagues (12) conducted a systematic review of Web-based training methods for substance abuse counselors. Because of the small number of included studies, the authors were unable to draw definitive conclusions, although their findings suggested that Web-based training might be effective under certain conditions. Jackson and colleagues (13) also conducted a systematic review of Web-based training, in this case to train behavioral health providers in evidence-based practice across various disorders. They found that Web-based training may result in greater posttraining knowledge and skill acquisition compared with scores at baseline. However, their review included a mix of studies, ranging from case studies to randomized controlled trials (RCTs), which may have limited the conclusions, given that potential biases are likely to be greater for nonrandomized studies compared with RCTs (14).

This systematic review extends the work of Calder and colleagues (12) and Jackson and colleagues (13) by examining whether other types of technology, such as mobile applications and social media, have been used to train clinicians in EBTs and by including studies published after October 2017. The review uses a standardized quality assessment to rate bias in order to be able to draw more definitive conclusions regarding the effectiveness of using technology to train clinicians in EBTs.

Methods

Articles were identified through a search of the PsycINFO, PubMed, Medline, Web of Science, CINAHL, and Cochrane Library databases for articles published on or before June 30, 2018. The following key search terms were used: “Internet OR Internet-based OR web-based OR mobile app* OR smartphone app* OR technology” AND “therapist OR clinician” AND “training OR education.” The addition of an asterisk to a term captures all derivatives of the term (e.g., “app*” captures application and apps). (A sample search strategy is available in an online supplement to this review.)

The title, abstract, and/or full paper were assessed to determine which studies met the following inclusion criteria: the sample comprised mental health workers, the study focused on a technological intervention to train clinicians, outcome measures were related to training, a comparison group was included, and the study was published in an English-language, peer-reviewed journal. Single case reports, editorials, reviews, abstracts, and protocol papers were excluded. All potential studies were independently assessed by both authors. The reference sections of included articles were also hand-searched to identify other relevant studies. Additionally, the authors of the included articles were contacted and asked whether their research groups had published any additional articles. We then extracted and summarized information from the remaining articles using a data extraction sheet developed on the basis of the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) (15).

With the exception of studies describing secondary data analyses, between-group effect sizes (ESs) for the primary outcomes of each study were estimated by using Hedges’ g. A variation on Cohen’s d, this ES corrects for biases due to small sample sizes (16). For cases in which the primary outcomes were not specified or in which multiple measures of the same construct were examined, only the first outcome described in the Methods section of the article was reported. Hedges’ g ES may be interpreted with Cohen’s convention (17) for small (ES=0.2), medium (ES=0.5), and large (ES=0.8) effects. Negative results were adjusted to be positive for ease of interpretation. Given the heterogeneous quality of studies and the difficulty in extracting ESs from some of the data descriptions, a meta-analysis was not conducted.
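As a point of reference, and not the authors' own code, the standard Hedges' g calculation from group summary statistics can be sketched as follows (the function name and worked numbers are illustrative assumptions, not values from the review):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Between-group Hedges' g: Cohen's d scaled by the small-sample
    correction factor J = 1 - 3 / (4 * df - 1), with df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sd_pooled
    return d * (1 - 3 / (4 * df - 1))

# Two groups of 20 with a one-SD mean difference: d = 1.0, g shrinks slightly
print(round(hedges_g(10, 2, 20, 8, 2, 20), 3))  # → 0.98
```

By Cohen's convention cited above, this example would count as a large effect; the correction matters most when group sizes are small, which is exactly the situation the review flags.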

Each study was assessed for risk of bias by using the Quality Assessment Tool for Controlled Intervention Studies from the National Heart, Lung, and Blood Institute (NHLBI) (18). This tool assesses bias in controlled trials by using 14 criteria, including the method of randomization, whether outcome assessors were blind, and an evaluation of participation rate. The criteria are rated as yes, no, cannot determine, not reported, or not applicable, and an overall rating of quality is provided for the study (good, fair, or poor). Both authors independently rated the quality of all 24 studies. To assess interrater reliability, Cohen’s kappa was used. The kappa coefficient obtained in the study was 0.81, which represents a high level of agreement (19). This systematic review adhered to the PRISMA-P guidelines (see online supplement).
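The kappa statistic used for interrater reliability weighs observed agreement against the agreement two raters would reach by chance given their marginal rating frequencies. A minimal sketch (the function and example ratings are illustrative, not the authors' data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed - chance) / (1 - chance) agreement
    for two raters assigning categories to the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies
    chance = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical quality ratings from two reviewers
r1 = ["poor", "poor", "fair", "good", "poor"]
r2 = ["poor", "fair", "fair", "good", "poor"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.69
```

A kappa of 0.81, as reported here, indicates that the two reviewers' quality ratings agreed far more often than chance marginals alone would predict.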

Results

The database search resulted in 7,767 potentially relevant citations. Review of titles and abstracts resulted in 122 full-text articles to be considered for possible inclusion. Fifteen articles met the inclusion criteria, and nine additional articles identified through the hand search also met criteria, yielding 24 articles in total. (A PRISMA-P flow diagram of our search history is available in the online supplement.)

Description of Studies

Most articles included a randomized controlled design in which assessments were conducted before and after randomization. The sample sizes across the 24 studies ranged from 35 to 363. Most studies included mental health professionals, such as psychologists, psychiatrists, nurse practitioners, counselors, and social workers. Sixteen studies recruited participants from the community, two from university settings, three from medical centers, two from outpatient clinics, and one from an addiction unit. The largest number of studies were conducted in the United States (N=19), followed by the United Kingdom (N=2), Russia (N=2), and Australia (N=1). The main study characteristics are summarized in Table 1 (3, 6, 7, 20–40).

TABLE 1. Summary of studies in which technology was used to train clinicians in evidence-based treatmenta

Study | Design | Modality | Demographic characteristics of participants | Provider type | Groups | Outcomes (primary outcome; measures; findings/effect sizeb)
Depression and anxiety
 Bennett-Levy et al., 2012 (20)RCT: pre, 12-week post, and 4-week FUOLT, phone or Skype supportN=49, 82% FPsychologist, social worker, nurse, counselor, and doctorPRAXIS CBT+telephone or Skype support (supported), N=24; PRAXIS CBT alone (independent), N=25CBT knowledgeCognitive Behavioral Therapy Questionnaire (CBT-Q)CBT-Qpost: supported=independent (g=.15). CBT-QFU: supported=independent (g=.09)
 Hubley et al., 2015 (21)RCT: pre, 90–120 minutes post, and 1-week FUOLTN=46, 80% F. White, 78%. Psychiatrist, psychologist, psychiatric nurse practitioner, social worker, mental health counselor or therapist, and studentBA OLT, N=32; attention control OLT (control OLT), N=14KnowledgeBA Knowledge Test (BAKT)BAKTpost: BA OLT>control OLT (g=1.11). BAKTFU: BA OLT>control OLT (g=1.27)
Anxiety-related disorder
 Chu et al., 2017 (3)RCT: pre, 10-week postOLTN=35, 89% F. Non-Hispanic Caucasian, 69%Postdegree professional or graduate traineeOLT+expert streaming (ES), N=13; OLT+peer consultation (PC), N=9; fact sheet self-study (FS), N=13CBT knowledgeKnowledge TestFS=ES (g=.27); FS=PC (g=.15); PC=ES (g=.13)
 Ehrenreich-May et al., 2016 (22)RCT: pre, 35-day post, and 90-day FUOLT, social media and phoneN=168, 75% F. White, 70%. Psychologist, social worker, mental health counselor or therapist, student, and otherText-based treatment manual (TXT), N=51; TXT+OLT, N=44; TXT+OLT+learning community (TXT+OLT+LC), N=45KnowledgeKnowledge measure (KM)KMpost: TXT+OLT+LC=TXT+OLT (g=.09); TXT=TXT+OLT (g=.08); TXT+OLT+LC=TXT (g=0). KMFU: TXT+OLT+LC=TXT+OLT (g=.33); TXT=TXT+OLT (g=.15); TXT+OLT+LC=TXT (g=.45)
 Gega et al., 2007 (24)RCT: pre, 1-hour post-training1, 1 hour post-training2Computer softwareN=92Mental health nursing studentFearFighter software on computer, N=46; lecture, N=46Knowledge gainTwo 10-item multiple choice questionnaires (MCQ1, MCQ2)MCQ1post1: FearFighter=lecture (g=.09). MCQ2post2: FearFighter=lecture (g=.52)
 Harned et al., 2011 (25)RCT: pre, post, 1-week FUOLT, phoneN=46, 83% F. Caucasian, 74%Psychologist, RN/ARNP, social worker, master’s level counselor, bachelor’s level, high school/associate degreeExposure therapy (ET) OLT (ET OLT), N=15; ET OLT+motivational interviewing (MI) (ET OLT+MI), N=15; control OLT (control), N=16KnowledgeKnowledge measureKnowledge measurepost: ET OLT>control (g=2.13); ET OLT+MI>control (g=3.10); ET OLT=ET OLT+MI (g=.16). Knowledge measureFU: ET OLT>control (g=1.54); ET OLT+MI>control (g=1.89); ET OLT=ET OLT+MI (g=.10)
 Harned et al., 2014 (26)RCT: pre, 6-week post, 6- and 12-week FUOLT, online conferencingN=181, 71% F. Caucasian, 72%. Psychiatrist, psychologist, psychiatric nurse practitioner, social worker, master’s-level counselor, bachelor's level, and studentOLT, N=60, OLT+motivational enhancement (ME) (OLT+ME), N=60; OLT+ME+learning community (LC) (OLT+ME+LC), N=61KnowledgeKnowledge measureKnowledge measurepost: OLT+ME+LC>OLT (g=1.98); OLT+ME+LC>OLT+ME (g=2.98); OLT=OLT+ME (g=.99). Knowledge measure6wk FU: OLT+ME+LC>OLT (g=1.98); OLT+ME+LC>OLT+ME (g=2.98); OLT=OLT+ME (g=.99)
 McDonough and Marks, 2002 (23)RCT: pre, 90-minute postComputer softwareN=37, 54% FThird-year medical studentFearFighter software on computer (computer), N=19; small-group face-to-face teaching (tutorial), N=18Knowledge15-item multiple-choice questionnaireTutorial=computer (g=.65)
 Rakovshik et al., 2016 (27)RCT: pre, midtraining, postOLT, SkypeN=61, 70% FPsychologist, psychiatrist, and psychiatrist– psychotherapistInternet-based training (IBT)+consultation worksheet (IBT-CW), N=19; IBT+Skype (IBT-S), N=22; no-training control (delayed training [DT]), N=20CBT skillsCognitive Therapy Scale (CTS)IBT-S>IBT-CW (g=.71); IBT-S>DT (g=1.31); IBT-CW=DT (g=.62)
Substance use disorder
 Larson et al., 2013 (30)RCT: pre, 8-week post, 3-month FUOLT, phoneN=127, 66% FAddiction counselorWeb training (Web course), N=62; manual training (control), N=65Adequate adherence to CBT deliveryA low pass or greater on at least 1 of 3 core CBT skills—generic skills, client-centered motivational stance, and CBT-specific skillsCounselors passing on 3 core skill outcomespost: Web course=controlc
 Sholomskas and Carroll, 2006 (31)RCT: pre, 3-week postCD-ROMN=28, 64% F. American Indian, 4%; black, 24%; white, 72%dSocial worker, primary clinician, psychiatric nurse/mental health assistant, occupational therapist, and otherManual plus computer-based training (CD-ROM+manual), N=12; manual only (manual), N=13Adherence to and skill in Twelve-Step FacilitationVideotaped role-plays rated on the Yale Adherence Competence ScaleRole-playspost: CD-ROM+manual>manual (g=.95). Skillpost: CD-ROM+manual>manual (g=.90)
 Sholomskas et al., 2005 (7)Non-RCT: pre, 4-week post, 3-month FUOLT, phone supervisionN=78, 54% F. AA, 27%; Hispanic, 8%; Caucasian, 61%; other, 4%Clinician treating a predominantly substance-using populationManual, N=27; manual+Web-based training (manual+Web), N=24; manual+didactic training (seminar and supervision) (manual+seminar+supervision), N=27Ability to demonstrate key CBT intervention strategiesStructured role-plays rated on the Yale Adherence Competence ScaleAdherencepost: manual+seminar+supervision>manual+Web (g=.38); manual+Web>manual (g=.47); manual+seminar+supervision>manual (g=.86). Skillpost: manual+seminar+supervision>manual+Web (g=.46); manual+Web>manual (g=.38); manual+seminar+supervision>manual (g=.83)
 Weingardt et al., 2006 (28)RCT: pre, 60-minute postOLTN=166, 55% F. AA, 21%; Asian/Pacific Islander, 6%; white, 55%; Latino, 12%; other, 4%; missing, 2%. Substance abuse counselorsWeb-based training (WBT), N=52; face-to-face workshop (FTF), N=55; delayed-training control (control), N=59Knowledge about the Coping with Craving content17 multiple-choice questionsWBT>control; FTF>control; WBT=FTFc
 Weingardt et al., 2009 (29)RCT: pre, 1-month postOLTN=147, 62% F. AA, 20%; Caucasian, 67%; Hispanic, 8%; other, 6%; Asian/Pacific Islander, .7%Substance use disorder counselorStrict adherence to OLT course (high fidelity), N=73; flexible adherence to OLT course (low fidelity), N=74Knowledge73 multiple-choice questions on CBT knowledgeHigh fidelity=low fidelity (g=.12)
Substance use disorder and suicidality
 Dimeff et al., 2009 (6)RCT: pre, 2 days post (ILT)/45 days post (OLT/manual), 30-day and 90-day FU. N=150, 70% F. White, 81%; Asian American, 7%; Hispanic/Latino, 3%; AA, 3%; Native American, 2%; other, 5%Psychiatrist, psychologist, chemical dependency counselor, master’s in social work, master’s level counselor, bachelor’s level counselor, and otherWritten treatment manual (manual), N=49; DBT skills OLT (OLT), N=54; instructor-led training (ILT), N=47KnowledgeDBT skills knowledge and application (KT)KTpost: OLT>manual (g=.51); OLT>ILT (g=.34); ILT=manual (g=.20). KTFU: OLT>manual (g=.51); OLT>ILT (g=.41); ILT=manual (g=.12)
 Dimeff et al., 2011 (32)RCT: pre; 2.5 hours post; and 2, 7, 11, and 15 weeks FUCD-ROMN=132, 74% F. Caucasian, 83%; Hispanic/ Latino, 9%; multiracial, 3%; Native American, 2%; AA, 1%; Asian American, 2%; Middle Eastern, 1%Psychiatric nurse, psychologist, psychiatric nurse practitioner, chemical dependency counselor, master’s in social work (M.S.W.), mental health counselor/ therapist, mental health counselor/technician, and otherDBT treatment manual (manual), N=43; DBT e-Learning course (e-DBT), N=47; control e-Learning course (e-control), N=42Knowledge of DBT distress tolerance skillsDBT Distress Tolerance Skills Knowledge and Application Test (KT)KTpost: e-DBT>e-control (g=3.4); manual>e-control (g=2.78); e-DBT=manual (g=.24). KTFU: e-DBT>e-control (g=1.86); manual>e-control (g=1.48); e-DBT>manual (g=.35)
 Dimeff et al., 2015 (33)RCT: pre, 2 days post (ILT), 30 days post (OLT/manual), 60-day and 90-day FUOLTN=200, 76% F. Caucasian, 79%; AA, 4%; Asian American, 6%; Hispanic, 4%; Native American, 2%; and other, 5%Psychologist, psychiatrist, psychiatric nurse practitioner, chemical dependency counselor, social worker (M.S.W.), mental health counselor (M.A./M.S./MFT), mental health counselor (B.A./B.S.), student, and otherOLT, N=66; instructor-led training (ILT), N=67; treatment manual (TM), N=67Satisfaction, self-efficacySatisfaction measure, adapted Behavioral Anticipation and Confidence Questionnaire (BAQ)Satisfaction measurepost: ILT>OLT (g=.62); ILT>TM (g=.86); OLT=TM (g=.21). BAQpost: ILT>OLT (g=.78); ILT>TM (g=.94); BAQFU: ILT>OLT (g=.87); ILT>TM (g=.82)
Posttraumatic stress disorder
Ruzek et al., 2014 (34)RCT: pre, 1-month postOLT, phone consultationN=168, 70% F. white, 74%; AA, 11%; non-white, Hispanic, or other, 18%VHA mental health clinician with master’s degrees or doctoral-level training in mental health or related disciplinesWeb training (Web), N=57; Web training+consultation (Web+consult), N=55; training as usual (control), N=56Intervention skills acquisitionStandardized patient evaluation of motivation enhancement and behavioral task assignmentMotivation enhancement: Web>control (g=.70); Web+consult>control (g=1.43); Web+consult>Web (g=.67). Behavioral task assignment: Web>control (g=.34); Web+consult>control (g=.64)
Bipolar disorder
 Stein et al., 2015 (35)RCT: 180 days post, 365 days and >365 days FUOLT, phone supervisionN=36. Social worker, licensed professional counselor, clinical psychologist, and nurseInternet-supported e-learning (e-learning), N=16; in-person training (IPT), N=20Extent to which clinicians used interpersonal and social rhythm therapy (IPSRT) techniquesPsychotherapy Practice Scale adapted for IPSRT (PPS-IPSRT) completed by patientsPPS-IPSRTpost: e-learning=IPTc. PPS-IPSRTFU: e-learning=IPTc
Eating disorder
 Cooper et al., 2017 (36)Pre, 20-week post, 6-month FUOLT, e-mail, or phone supportN=156, 93.3% FMainly clinical psychologists and social workersOLT alone (independent training), N=75; OLT+telephone support (supported training), N=81Competence22 items addressing trainee knowledge and understanding of CBT-E and its implementation (i.e., applied knowledge)Competencepost: independent=supportedc. CompetenceFU: independent=supportedc
Autism
 Granpeesheh et al., 2010 (37)RCT: pre, 16-hour (in person) or 10-hour (e-learning) postComputer softwareN=88Entry-level behavioral therapiste-learning group, N=33; in-person training group (standard), N=55Knowledge/competenceWritten examination consisting of 32 long- and short-answer questionsStandard>e-learningc
Motivational interviewing
 Mullin et al., 2016 (38)Non-RCT: pre, 5-month postOLTN=34Psychologist, clinical social worker, medical student, family medicine resident, nurse practitioner, primary care/OB-GYN, physician, research staff, and otherOLT CITMI, N=14; in-person training, N=30Motivational interviewing skillsMotivational interviewing treatment integrity codeOnline training=in-person trainingc
General CBT skills
 German et al., 2018 (40)Pre, end of assessment (post1), and competency assessment point (post2)OLTN=362, 78% FCommunity cliniciansIn person, expert-led (IPEL), N=214; Web-based, trained-peer (WBTP), N=148CompetencyCognitive Therapy Rating Scale (CTRS)CTRSpost1: IPEL=WBTP (g=.09); CTRSpost2: IPEL=WBTP (g=.17)
 Rakovshik et al., 2013 (39)RCT: pre, 1-month postOLTN=63, 91% FMaster’s level student of neuropathology and psychopathology or clinical psychology and psychotherapyImmediate access to Internet-based CBT training (immediate), N=31; Internet-based CBT training after 1-month wait (delayed), N=32CompetenceRatings of performance in the Objective Structured Clinical Exam (OSCE), a 20-minute role-play of a CBT assessment, and assessment of the quality of participants’ formulation of the OSCE “patient”CBT assessment and formulation skills: immediate>delayed (g=.90)

aAbbreviations: AA, African American; BA, behavioral activation; CBT, cognitive-behavioral therapy; CITMI, Certificate of Intensive Training in Motivational Interviewing; DBT, dialectical behavior therapy; F, female; FU, follow-up; IBT, Internet-based training; MFT, marriage and family therapist; ILT, instructor-led training; OLT, online training; RCT, randomized controlled trial; RN/ARNP, registered nurse/advanced registered nurse practitioner.

bBetween-group effect sizes for the primary outcomes of each study were estimated when possible by using Hedges’ g.

cStudy did not provide enough data to allow for calculation of effect size.

dDemographic information available for 25 participants.


Methodological Quality

Each study was assessed for risk of bias by two reviewers using the NHLBI’s Quality Assessment Tool for Controlled Intervention Studies (18). Eighteen studies were rated poor (3, 7, 20–22, 24, 26, 27, 29–34, 36–38, 40), five were rated fair (6, 23, 25, 28, 35), and one was rated good (39) (Table 2). According to the NHLBI guidelines, good-quality studies include strict adherence to most NHLBI criteria. Poor-quality studies have one or more “fatal flaws” indicating high risk of bias, such as high overall or differential dropout rates and absence of intent-to-treat or other suitable analyses. Fair-quality studies have limitations such as use of unreliable or invalid measures, dissimilar groups at baseline, and low adherence to intervention protocols.

TABLE 2. Results of risk-of-bias assessment among studies in which technology was used to train clinicians in evidence-based treatmenta

StudyQ1Q2Q3Q4Q5Q6Q7Q8Q9Q10Q11Q12Q13Q14Rating
Bennett-Levy et al., 2012 (20)YesNRNRNoNAYesYesNoYesYesNoNoYesNoPoor
Chu et al., 2017 (3)YesNRNRNoNAYesYesYesNoNRNoNoYesNoPoor
Cooper et al., 2017 (36)YesNoNoNoNANRNoYesYesNRYesNRYesNoPoor
Dimeff et al., 2009 (6)YesYesYesNoNRYesYesYesYesNRNoNRYesYesFair
Dimeff et al., 2011 (32)YesYesYesNoYesNoYesNoNRNRNoYesYesYesPoor
Dimeff et al., 2015 (33)YesYesYesNoNRYesYesNoNRNRNoYesYesNoPoor
Ehrenreich-May et al., 2016 (22)YesYesYesNoNRYesNoYesNRNRNoYesYesYesPoor
Gega et al., 2007 (24)YesYesYesNoYesYesYesYesNRNRNoYesYesNoPoor
German et al., 2018 (40)NoNANANoNRYesYesNoNoNRYesNRYesNAPoor
Granpeesheh et al., 2010 (37)YesYesNoNoYesNRYesNRNRNRNoNRYesNoPoor
Harned et al., 2011 (25)YesYesYesNoNAYesYesYesNRNRNoYesYesYesFair
Harned et al., 2014 (26)YesYesNoNoYesYesNoNoYesNRNoYesYesYesPoor
Hubley et al., 2015 (21)YesYesYesNoNAYesYesYesNRNRYesNRNoNoPoor
Larson et al., 2013 (30)YesNoNRNoNRNRCDCDYesNRNoNoYesNoPoor
McDonough and Marks, 2002 (23)YesYesYesNoNAYesYesYesNRNRNoNRYesYesFair
Mullin et al., 2016 (38)NoNANANAYesNoYesYesYesNRYesNRYesNAPoor
Rakovshik et al., 2013 (39)YesYesYesNoYesYesYesYesNRYesYesYesYesYesGood
Rakovshik et al., 2016 (27)YesYesYesNoYesNRNoYesNRNRYesYesYesYesPoor
Ruzek et al., 2014 (34)YesYesYesNoYesYesYesYesNoNoYesNoYesNoPoor
Sholomskas and Carroll, 2006 (31)YesNRNRNoYesYesYesNRNRNRYesNRYesNoPoor
Sholomskas et al., 2005 (7)NoNoNANoNRYesYesNoNoNRYesNRYesNoPoor
Stein et al., 2015 (35)YesNoNRNoYesNRYesYesYesNRNoNRYesYesFair
Weingardt et al., 2006 (28)YesNRNRNoNAYesYesYesNRNRNoNRYesYesFair
Weingardt et al., 2009 (29)YesNRNRNoNAYesNoYesNRNRNoNRYesNoPoor

aRisk of bias was assessed with the National Heart, Lung, and Blood Institute Criteria for Controlled Studies. Each criterion is assessed by the following questions, and an overall rating of quality is determined: Q1, Was the study described as randomized, a randomized trial, a randomized clinical trial, or a randomized controlled trial?; Q2, Was the method of randomization adequate (i.e., use of randomly generated assignment)?; Q3, Was the treatment allocation concealed (so that assignments could not be predicted)?; Q4, Were study participants and providers blinded to treatment group assignment?; Q5, Were the people assessing the outcomes blinded to the participants' group assignments?; Q6, Were the groups similar at baseline on important characteristics that could affect outcomes (e.g., demographic characteristics, risk factors, comorbid conditions)?; Q7, Was the overall dropout rate from the study at endpoint 20% or lower compared with the number allocated to treatment?; Q8, Was the differential dropout rate (between treatment groups) at endpoint 15 percentage points or lower?; Q9, Was there high adherence to the intervention protocols for each treatment group?; Q10, Were other interventions avoided, or were there similarities between the groups in receipt of other interventions (e.g., similar background treatments)?; Q11, Were outcomes assessed by using valid and reliable measures, and were they implemented consistently across all study participants?; Q12, Did the authors report that the sample size was sufficiently large to be able to detect a difference in the main outcome between groups with at least 80% power?; Q13, Were outcomes reported or subgroups analyzed prespecified (i.e., identified before analyses were conducted)?; Q14, After randomization, were all participants analyzed in the group to which they were originally assigned, i.e., was an intention-to-treat analysis used? CD, cannot determine; NA, not applicable; NR, not reported.


Training Programs

Studies on depression and anxiety.

Two studies examined using technology to train clinicians in treating depression and anxiety. Both were rated as having poor quality. Bennett-Levy and colleagues (20) used PRAXIS CBT for Common Mental Health Problems, a 12-week, 30-module online training (OLT) program consisting of 60-minute modules for training clinicians in rural and remote areas of Australia in cognitive-behavioral therapy (CBT) for depression and anxiety disorders. Clinicians were randomly assigned to PRAXIS CBT alone (independent training) or PRAXIS CBT plus 15 minutes of supervision by telephone or Skype (supported training). There were no significant group differences in CBT knowledge scores at postassessment and at follow-up. Completion rates were significantly higher among those in supported training (96%) compared with independent training (76%).

Similarly, Hubley and colleagues (21) used an OLT program to train clinicians in behavioral activation (BA) principles and treatment strategies. Participants were randomly assigned to receive either BA OLT or a placebo titled “DBT Validation Strategies” (control OLT). BA OLT consisted of 81 screens organized into six modules on BA principles and took 90 to 120 minutes to complete. Control OLT participants received dialectical behavior therapy (DBT) “validation strategies,” which instructed them on how to validate a client in therapy and were comparable to BA OLT in quality, length, and design elements. BA OLT participants scored significantly higher than control OLT participants on a BA knowledge test at postassessment and follow-up, with large ESs. BA OLT participants rated the training course as both relevant and usable.

Studies on anxiety-related disorders.

Seven studies examined use of technology to train clinicians in treating anxiety-related disorders. Five studies were rated as having poor quality, and two were rated as fair. Chu and colleagues (3) compared the effectiveness of various low-cost extended support methods following initial participation in “Evidence-Based Treatment for Anxiety Problems: Cognitive Behavioral Strategies,” a 6.5-hour online workshop on CBT for anxious youths. Following the workshop, clinicians were randomly assigned to 10 weeks of streaming content with an expert (i.e., weekly video of an expert providing supervision to trainees), peer consultation (i.e., 1-hour weekly peer-led groups to discuss their current caseload), or weekly review of a one- to two-page fact sheet. No significant group differences were observed on a CBT knowledge test. Notably, scores on the knowledge test and self-reported beliefs about knowledge and skill decreased from pre- to postassessment. There were also no significant group differences in satisfaction ratings.

Ehrenreich-May and colleagues (22) examined the effectiveness of a 12-week online program to train clinicians in CBT for adolescent panic disorder on the basis of the Mastery of Anxiety and Panic for Adolescents treatment manual (TM) (MAP-A) (41). Clinicians were randomly assigned to receive either the MAP-A manual (TXT); the manual plus MAP-A OLT (TXT+OLT); or the manual plus OLT and a learning community (LC) (TXT+OLT+LC), which included weekly group conference calls and online discussions facilitated via Twitter. There were no significant differences between groups in knowledge scores at post- and follow-up assessment. TXT participants were significantly less satisfied with their training than their counterparts.

Two studies examined the use of FearFighter, a nine-session, self-help online program for panic disorder and specific phobias, as a clinician training tool. McDonough and Marks (23) compared computer-assisted instruction with face-to-face teaching in training third-year medical students in exposure therapy (ET). All participants first received a 20-minute lecture on CBT before being randomly assigned to computer-assisted instruction or a face-to-face tutorial. Those in the computer-assisted group used a shortened computer version of FearFighter for 90 minutes. Participants in the tutorial group received a 90-minute tutorial in ET. There were no significant group differences at postassessment. Tutorial-group participants reported significantly higher satisfaction ratings than participants in the computer-assisted group. Gega and colleagues (24) randomly assigned mental health nursing students to 1 hour of training with either FearFighter or a lecture. After participants completed postassessments, they crossed over to the opposite group and completed an additional hour of training and postassessments. There were no significant group differences in knowledge scores at either postassessment point. Additionally, there were no group differences in satisfaction ratings.

Two studies examined the effectiveness of Foundations of Exposure Therapy (FET), a 10-hour online program to train clinicians in ET for anxiety disorders. Harned and colleagues (25), testing an early version of the program, randomly assigned clinicians to ET OLT alone, ET OLT plus motivational interviewing (MI) (ET OLT+MI), or placebo OLT titled “DBT Validation Strategies” (control OLT). Participants in the ET OLT+MI group received one to two brief phone calls based in MI to reduce ambivalence about adopting ET. The active training groups had significantly higher knowledge scores than the control OLT group at posttraining and at 1-week follow-up, with large ESs, but they did not differ significantly from each other. Participants in both active conditions rated their training comparably, and they found training significantly more acceptable than did control OLT participants.

Harned and colleagues (26) further elaborated on these results by randomly assigning a larger group of participants to either FET OLT alone, FET OLT plus motivational enhancement (ME) (FET OLT+ME), or FET OLT+ME plus a Web-based LC (FET OLT+ME+LC). Those assigned to FET OLT+ME received an additional motivational enhancement intervention to address attitudinal barriers to using ET. ME was a two-phase intervention that included watching a 5-minute video and having simulated conversation with a virtual ET consultant following completion of FET OLT. Participants assigned to FET OLT+ME+LC were allowed to join a Web-based LC that provided support during and after FET OLT+ME training. It consisted of eight 1-hour meetings held via an online conferencing platform over 12 weeks. Participants in the FET OLT+ME+LC group had significantly higher knowledge scores than their counterparts at postassessment and at follow-up, with large ESs. There was no significant difference between the FET OLT and OLT+ME groups. There were also no significant group differences in satisfaction with the FET program.

Finally, Rakovshik and colleagues (27) compared Internet-based training in CBT for anxiety disorders with and without supervision among a sample of Russian and Ukrainian clinicians. This training consists of 20 hours of online presentations from the Oxford Cognitive Therapy Center that are completed across 3 months. Participants randomly assigned to the Internet-based training plus consultation worksheets group completed one translated version of Padesky’s consultation worksheet per month during training. Those assigned to Internet-based training plus Skype supervision completed one consultation worksheet and received three 30-minute individual supervision sessions per month. Participants in the nontraining control group received no training during a 3-month wait period. Participants with Skype supervision had significantly higher CBT competence scores at posttraining than consultation worksheet participants and participants in the control group, who did not differ significantly from each other, with medium and large ESs, respectively.

Studies on substance use disorders.

Five studies examined using technology to train clinicians in treating substance use disorders. Four studies were of poor quality, and one was of fair quality. Four studies examined use of traditional versus technology-based formats of the National Institute on Drug Abuse (NIDA) manual for training clinicians in CBT for substance use disorders (42). Sholomskas and colleagues (7) conducted a nonrandomized trial comparing three training methods. Participants in the manual condition (manual) spent 20 hours studying the NIDA manual. Participants in the manual plus Web condition (Web) had access to the NIDA manual and spent 20 hours working with an interactive online program based on the manual that included multiple-choice questions and virtual role-plays. Last, participants in the manual plus seminar and supervision condition (seminar+supervision) had access to the NIDA manual and attended a 3-day didactic seminar to review the manual. Additionally, participants in seminar+supervision practiced CBT skills with their patients over the next 3 months and submitted audiotaped CBT sessions for review by supervisors. Seminar+supervision group participants had significantly greater improvement than their counterparts on objective ratings of skills and adherence, whereas Web group participants had significantly greater improvement than participants in the manual group. ESs ranged from small to large.

Weingardt et al. (28) also compared three conditions for training in treatment of substance use disorders. Participants randomly assigned to the Web-based training (WBT) group completed a 60-minute online version of the Coping With Craving module from the NIDA manual. Those randomly assigned to the face-to-face (FTF) workshop group completed a 60-minute, expert-led workshop presenting the same content provided to the WBT group. Participants randomly assigned to the delayed-training control group watched an unrelated video for 60 minutes. At postassessment, participants in both active training groups had significantly higher knowledge scores than those in the delayed-training group, although the differences were small, with no significant differences between the active training groups.

Building on this literature, Weingardt and colleagues (29) conducted an RCT comparing two Web-based training models, each using an eight-module OLT course based on the NIDA manual. The training models varied in the adherence required and flexibility allowed. The high-fidelity group covered eight modules in a month, was instructor led and didactic, and was followed by structured group supervision. The low-fidelity group allowed participants to cover topics at random, was peer led and interactive, and provided supervision with a flexible agenda. There were no significant group differences in CBT knowledge scores at postassessment.

Larson and colleagues (30) examined the effectiveness of “Technology to Enhance Addiction Counselor Helping” (TEACH), an eight-module Web course based on the NIDA manual that is designed to increase clinicians’ use of CBT skills. Participants were randomly assigned to either TEACH-CBT or a manual-based training group and participated in monthly supervision phone calls. Participants in the manual-based group received the NIDA TM covering the same content as TEACH-CBT. No significant group differences were found in adequate adherence to CBT delivery at postassessment.

Finally, Sholomskas and Carroll (31) assessed the efficacy of two methods of training clinicians to implement the Twelve-Step Facilitation (TSF) manual, which approximates the approach of the 12 steps of Alcoholics Anonymous. Participants who were randomly assigned to the manual group received the TSF manual (43). Those randomly assigned to the CD-ROM+manual group received the manual and a seven-module computer program based on the manual that included role-plays, vignettes, and other interactive tasks to promote learning. Participants in the manual group were asked to spend 10 hours reading the manual, whereas those in the CD-ROM+manual group were required to spend 10 hours working with the computer program over 3 weeks. CD-ROM+manual group participants showed significantly greater gains than manual group participants in their ability to demonstrate TSF skills at postassessment, with large ESs. Both groups reported a moderately high level of satisfaction with the manual and spent comparable time reading it. However, CD-ROM+manual group participants spent an average of 9.3 additional hours working with the computer program.

Studies on substance use and suicidality.

Three studies examined use of technology to train clinicians in treating substance abuse problems or suicidality with DBT. One study received a quality rating of fair, whereas two received poor ratings. Dimeff and colleagues (6) compared the efficacy of three methods of training clinicians to treat suicidal and substance-dependent clients with the DBT Skills Training Manual (44). Participants randomly assigned to the manual group received a copy of the DBT skills manual and a study guide. Participants randomly assigned to the OLT group were asked to use a five-module OLT program based on the manual for 20 hours. The modules covered mindfulness, distress tolerance, emotion regulation, interpersonal effectiveness skills, and skills coaching. Finally, participants randomly assigned to an instructor-led training (ILT) group attended a 2-day, expert-led workshop and were given the PowerPoint slides used during training. OLT participants reported significantly greater rates of change in knowledge than those in the ILT and manual groups at postassessment and follow-up, with small and medium ESs, respectively. OLT and ILT participants reported greater satisfaction with the learning objectives and with practical knowledge gained than those in the manual group. No significant group differences were found in adherence to the skills taught.

Dimeff and colleagues (32) evaluated the efficacy of three methods of training clinicians in DBT distress tolerance skills. Participants were randomly assigned to either a manual-alone condition (manual), in which they received the distress tolerance module of the DBT skills manual; an e-learning course, delivered on CD-ROM, that covered the same content as the manual (e-DBT); or a placebo e-learning course (“Care of the Client With Borderline Personality Disorder”) (e-control), a simulation of treatment for a client with borderline personality disorder in an inpatient setting. Manual and e-DBT participants had significantly higher knowledge scores than those in the e-control group at postassessment, which took place immediately after the respective training, and at 15-week follow-up, with large ESs. The e-DBT group significantly outperformed the manual group at 15-week follow-up but not at postassessment, with a large ES. The manual and e-DBT conditions were also rated as significantly more acceptable than the e-control condition at postassessment and follow-up. Finally, participants in e-DBT spent significantly more time with the course material than their counterparts.

Dimeff and colleagues (33) built upon that study by testing the efficacy of three methods of training clinicians in DBT chain analysis and validation strategies. Participants randomly assigned to OLT completed online courses in DBT chain analysis and DBT validation strategies for 8 and 4 hours, respectively. Participants randomly assigned to ILT attended a 2-day, 12-hour workshop. Those in the TM group received a 133-page manual covering DBT chain analysis and a 59-page manual on DBT validation strategies along with a study guide. OLT participants had significantly higher knowledge scores than their counterparts at postassessment and follow-up assessment, with large ESs. ILT participants rated their training as significantly more satisfactory than did OLT and TM participants at postassessment, with medium and large ESs, respectively. It is noteworthy that a significantly higher percentage of participants dropped out of the OLT group (34%) than the ILT (5.5%) and TM (6.5%) groups.

Studies on other disorders.

The remaining seven studies trained clinicians in treating a variety of mental health problems, such as posttraumatic stress disorder (PTSD), bipolar disorder, and autism, as well as in general CBT skills. Five studies were rated as poor, one as fair, and one as good.

Ruzek and colleagues (34) tested the effectiveness of a three-module, Web-based program for training Veterans Health Administration clinicians in treating veterans with PTSD. The program incorporated elements from many CBT treatment protocols for PTSD and related disorders and focused on ME, goal setting, and behavior task assignment. Participants were randomly assigned to either Web-based training (Web), Web-based training plus consultation (Web+consult), or a no-training control group (control). Web+consult group participants received up to six weekly telephone-based, small-group consultation sessions, each lasting approximately 45 to 60 minutes. Compared with the control group, participants in the active training groups experienced significantly greater improvement in skills acquisition scores for the ME and behavioral task assignment modules at postassessment, with medium to large ESs. No significant group differences were found for the goal-setting module. Additionally, at postassessment, Web+consult group participants showed significantly greater skill acquisition than Web group participants on the ME module, with a medium ES.

Stein and colleagues (35) examined the effectiveness of a 12-hour online program to train clinicians in interpersonal and social rhythm therapy (IPSRT) for bipolar disorder. Participants were randomly assigned to either OLT (e-learning) with hour-long telephone supervision once a month or a 2-day, 12-hour, in-person workshop with weekly local supervision. Those in e-learning joined an implementation team, which participated in a learning collaborative focusing on quality improvement, implementation, and skills assessment. However, there were no significant group differences in the use of IPSRT techniques at any assessment point.

Cooper and colleagues (36) compared the effectiveness of two modes of Web-centered training in increasing clinicians’ competence in using enhanced CBT (CBT-E) (9, 45) for eating disorders. Web-centered CBT-E training consists of an 18-module online course that includes an expert description of how to implement CBT-E as well as handouts, learning exercises, video recordings of role-plays, and tests of knowledge with feedback. Participants were randomly assigned to either an independent training group, in which they received the online course alone, or a supported training group, in which they received the online course and up to twelve 30-minute telephone calls from research assistants over the 20-week training period. Calls were designed to be supportive and encourage program completion. No significant group differences were found on measures of competence at postassessment or follow-up. There were also no significant group differences in training completion.

Granpeesheh and colleagues (37) evaluated an e-learning tool designed to train clinicians in academic knowledge of applied behavior analysis (ABA) treatment for children with autism. Participants randomly assigned to the e-learning group had access to a 10-hour, self-paced computer program that included topics ranging from an introduction to autism and ABA to antecedent-based and consequence-based interventions. They also attended a 2-hour discussion with an in-person trainer following program completion. Participants in the in-person training (standard) group received 16 hours of training over 2 days covering similar content through PowerPoint presentations, role-plays, and discussions. Standard group participants had significantly higher knowledge scores than the e-learning group at postassessment.

Rather than focusing on a specific disorder, Mullin and colleagues (38) trained clinicians in MI to help facilitate behavior change in their patients. A small group of clinicians chose to receive 22 hours of MI training through the “Certificate of Intensive Training in Motivational Interviewing” course. Spread over 3 to 5 months, the training was provided through an online or in-person workshop and was followed by 2 hours of individual MI practice and feedback. The course content for both workshops was grounded in the eight tasks of learning MI, as described by Miller and Moyers (46). No significant group differences in MI skills were found at postassessment.

The final two studies compared OLT for general CBT skills with other modes of training. Rakovshik and colleagues (39) randomly assigned fifth-year students from master’s-level clinical psychology programs in Russia to either a 3-hour Internet-based CBT training program (immediate) spread over a month or a delayed-training (DT) control group, in which participants received access to the same training program after a 1-month wait. The immediate training provided instruction in CBT theory, assessment, and formulation and included videos of didactic lectures and role-plays and simultaneous display of associated PowerPoint presentations with Russian subtitles. The immediate training group scored significantly higher than the DT group on measures of CBT competence at postassessment, with a large ES. No significant group differences in satisfaction ratings emerged.

German and colleagues (40) compared expert-led training to Web-based training in the use of general CBT skills. A cohort of community mental health clinicians received in-person, expert-led (IPEL) training, which consisted of a 22-hour, in-person CBT workshop, followed by weekly, 2-hour group consultations with experts for 6 months. The consultations focused on applying CBT, including review of audio-recorded sessions. The next cohort participated in Web-based, trained-peer (WBTP) training. The Web-based training was based on the in-person core curriculum and added videotaped role-plays, on-screen activities, and quizzes to improve engagement. The Web-based training was followed by peer consultation with the initial cohort and regular consultations with an instructor. No significant group differences were found between the two cohorts in CBT competency at postassessment; however, participants in Web-based training were less likely than participants in in-person training to complete the course.

Studies on supported training.

Of the 24 included studies, 14 paired technology-based training with support in the form of supervision or engagement interventions. Supervision was aimed at promoting learning and use of therapy skills and generally included answering questions, review or discussion of training session content, and case feedback by experienced supervisors or clinicians. Nine studies examined the effect of supervision provided either individually or in small groups through face-to-face or technological modalities such as Twitter, telephone, and video calling platforms, such as Skype. The results were mixed, with some studies finding that supervision had no effect on primary outcomes (35, 37, 40). Other studies, however, reported improvements in CBT competence (27), skills acquisition (34), skills competence (7), and program completion rates (20) among those who received supervision compared with those who did not.

Additionally, five studies paired technology-based training with engagement interventions, which were mainly supportive in nature and were generally led by peers or research assistants. Although some studies did not find that the engagement interventions had a significant effect on primary outcomes (30, 36), others found some benefit. For example, Harned and colleagues (25) found that the addition of brief MI-based phone calls significantly improved clinicians’ attitudes toward ET. Notably, when ME was provided through a computerized intervention rather than individually, these results did not hold. Indeed, Harned and colleagues (26) found that clinical attitudes significantly improved only when ME was provided in conjunction with an LC.

Discussion

Effectively disseminating EBTs to the mental health workforce is a significant challenge in the field. This systematic review aimed to provide a better understanding of how technology has been used to train clinicians in EBTs through a comprehensive summary of the literature. After a thorough literature search, we found 24 articles that met the inclusion criteria. These were subsequently categorized by the content area in which training was provided and were independently coded and assessed for risk of bias by two reviewers.

It is noteworthy that of the 24 studies reviewed, only one met criteria for good quality, which points to the limitations and challenges inherent in this field of research. Furthermore, it should be noted that the quality and interactivity of e-learning interventions vary widely, and shortcomings in these areas may have affected some of the individual study findings. As such, all interpretations should be made in the context of these limitations.

Clinicians were trained in some form of CBT in all of the studies reviewed, with the exception of studies by Sholomskas and Carroll (31), Stein and colleagues (35), and Mullin and colleagues (38), in which clinicians were trained in Twelve-Step Facilitation, IPSRT, and MI, respectively. Nineteen of the 24 studies used OLT, whereas five used computer software or CD-ROMs. Anxiety-related disorders were the focus of more studies than any other disorder (N=7), followed by substance use disorders (N=5). Nine studies also examined the addition of supervision, which included use of the Internet, social media, and video conferencing. Despite the proliferation of freely available mobile applications for smartphones, no app for training clinicians in EBTs was identified in our database search.

Ten studies compared technology-based training with technology-based training plus support or an attention control. Although important, these comparisons do not further our understanding of whether OLT is as effective as traditional training methods (i.e., in-person or manual-based training). Of the seven studies that compared technology-based training with in-person training, six found no significant difference between the modalities in gains in therapy knowledge and skills at postassessment.

Two studies compared OLT with manual-based training and concluded that participants in both conditions made similar gains in knowledge (32) and adherence scores at postassessment (30). Two studies examined the combination of technology and manual-based training, with one study finding no significant difference between OLT alone and OLT plus a TM (22). The other study found that a CD-ROM plus a TM was superior to a manual alone (7).

Finally, three studies compared OLT with both manual-based and in-person training. Arguably, such studies allow us to draw the most definitive conclusions regarding how technology-based training fares in comparison with traditional training. However, these studies made heterogeneous comparisons and had mixed results. For example, two studies found face-to-face training to be superior to manual-based training and OLT in improving participants’ scores on primary outcome measures (7, 33). One study found OLT to be more effective than the manual and face-to-face conditions (6). Replication of such studies is of utmost importance to establish the effectiveness of technology-based training.

With the exception of three studies, technology-based training was judged to be as effective as, or more effective than, manual-based or in-person training. Across a majority of studies, participants receiving technology-based training improved their knowledge, skills, and competence and were more satisfied with their training than comparison groups. This result is consistent with previous systematic reviews that have found that Web-based training methods have a positive effect on training outcomes of mental health professionals (13, 47, 48).

This review also included studies examining the impact of supported training in the form of supervision or engagement interventions. Many studies compared technology-based training alone with technology-based training plus support (20, 22, 25, 27, 36). Others compared two types of in-person consultation (e.g., expert-led versus peer-led consultation) (40) or two forms of technology-based support (e.g., computerized ME and ME plus a Web-based LC) (26). Overall, findings regarding the utility of supervision and engagement interventions were mixed. This outcome may be due partly to the distinctive comparisons made, variation in dosage (e.g., 30 minutes versus 2 hours), frequency (e.g., weekly versus every 6 weeks), and duration (e.g., 12 weeks versus 20 weeks or 12 months) of support and differences in who provided the support (e.g., experts versus peers). Ongoing support may improve clinician knowledge and connection with peers and trainers (49). However, the heterogeneity in the included studies makes it difficult to draw clear conclusions on the effect of support for technology-based training.

Limitations of the Literature

Because only one study met criteria for a rating of good quality, the findings need to be interpreted in the context of the studies’ limitations, of which selection, information, and measurement bias were most notable. A majority of studies used convenience sampling to recruit participants and were conducted in the United States with predominantly white, female samples. In addition, a disproportionate number had very small sample sizes and were statistically underpowered, which further limits our ability to discern meaningful group differences and draw definitive conclusions. Future research may mitigate such issues by using larger, more representative samples and employing systematic sampling techniques. Three studies did not use random assignment, and six failed to report the method of randomization. Three other studies used inadequate randomization methods, such as failing to use a randomly generated assignment.
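To illustrate the kind of randomization procedure the weaker studies lacked, the sketch below implements a simple permuted-block assignment in Python. It is a hypothetical illustration, not code from any reviewed study, and the condition labels are examples only.

```python
import random

def block_randomize(participant_ids, conditions, seed=None):
    """Assign participants to conditions in shuffled blocks so that
    group sizes remain balanced (a simple permuted-block design)."""
    rng = random.Random(seed)
    assignments = {}
    block = []
    for pid in participant_ids:
        if not block:  # start a new block: one slot per condition
            block = list(conditions)
            rng.shuffle(block)
        assignments[pid] = block.pop()
    return assignments

# Example: 12 participants assigned to three training conditions
groups = block_randomize(range(12), ["OLT", "ILT", "manual"], seed=42)
```

Because every block contains each condition exactly once, group sizes can never drift apart by more than the block size, unlike ad hoc alternation or self-selection schemes.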

Studies were also limited by their data collection approach. Most studies used participant self-reports to assess primary outcomes, such as knowledge and skill acquisition. Previous studies have found that clinicians tend to be rated as more skillful when the ratings are based on self-report rather than on behavioral observations (5, 50). Although using behavioral observations can be expensive and time-consuming, such measures generate more objective and accurate results. Future studies may benefit from using objective strategies to assess outcomes, such as session recordings and role-plays assessed by blinded experts on reliable and valid scales. Studies also differed in intensity of training, including the number of training hours and treatment fidelity required across conditions. This variability makes drawing conclusions from the observed results challenging because alternative explanations for the findings cannot be ruled out.

Thirteen studies obtained satisfaction ratings from participants. Overall, most studies found that participants assigned to technology-based training groups were as satisfied or more satisfied with training compared with those assigned to manual-based or in-person training. However, only six studies reported on program completion rates, three of which found significantly lower completion rates among OLT groups (20, 33, 40). Program completion can have a significant impact on training outcomes, such as knowledge and skills acquisition. Future research should collect user experience data to ensure that programs are acceptable to participants, which will increase the likelihood of participant program completion.

Limitations of the Study

This systematic review had several limitations. First, given that the field has only recently begun to examine specific technology for training clinicians, there is a lack of consistency in how training methods are described. Therefore, some studies that did not match our search terms may have been unintentionally omitted. Second, our decision to include only studies with a comparison group may have restricted the findings of the review. Third, although we calculated ESs to quantify the magnitude of between-group differences, we were unable to conduct a meta-analysis because information was missing in some of the included studies. Finally, only studies in the published literature were included in this review. We addressed this concern by contacting the authors of included studies to inquire about other related research; none reported unpublished studies with null findings.
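As a concrete illustration of the between-group ES calculations referenced above (following Hedges and Olkin [16]), the sketch below computes Hedges’ g from group summary statistics. The input values are hypothetical and are not drawn from any included study.

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample correction
    of Hedges and Olkin (Hedges' g)."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    d = (mean1 - mean2) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
    return d * correction

# Hypothetical posttraining knowledge scores for two groups of 20
g = hedges_g(mean1=82.0, mean2=74.0, sd1=10.0, sd2=10.0, n1=20, n2=20)
```

By Cohen’s (17) benchmarks, values near 0.2, 0.5, and 0.8 are conventionally read as small, medium, and large effects; the hypothetical example above yields a g of roughly 0.78.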

Despite its limitations, this systematic review provides a novel examination of how technology has been used to train clinicians. Although previous systematic reviews have examined Web-based training methods for clinicians, earlier efforts did not assess risk of bias or determine interrater reliability (13). Both methods reduce subjectivity and allow for more objective interpretation of the findings reported in this synthesis of studies.
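Interrater reliability for coding decisions of this kind is often quantified with Cohen’s kappa, which corrects raw agreement for chance. The sketch below is a minimal, hypothetical illustration; the ratings shown are invented and are not the reviewers’ actual codes.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance from each rater's marginal distribution
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings from two independent reviewers
rater_1 = ["good", "fair", "poor", "poor", "fair", "poor"]
rater_2 = ["good", "fair", "poor", "fair", "fair", "poor"]
kappa = cohens_kappa(rater_1, rater_2)
```

Kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance; in the invented example above, the two raters agree on five of six codes, giving a kappa of roughly 0.74.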

Conclusions

Overall, our findings suggest that technology-based training is a promising avenue for training larger numbers of clinicians in EBTs. Providing face-to-face instruction can be expensive and time-consuming. Most of the technology-based training interventions identified in this review were self-paced, thereby affording clinicians more flexibility and independence. Finally, technology-based training can help disseminate information in a standardized manner so that all trainees receive the same quality of instruction. Future research is needed to establish the long-term effects of technology-based training on clinician skills and knowledge as well as on patient outcomes. Future research should also include economic analyses to assess whether technology-based training is a cost-effective option for training clinicians.

Department of Psychology, Montclair State University, Montclair, New Jersey.
Send correspondence to Dr. Reyes-Portillo ().

The authors report no financial relationships with commercial interests.

References

1 Beidas RS, Edmunds JM, Marcus SC, et al.: Training and consultation to promote implementation of an empirically supported treatment: a randomized trial. Psychiatr Serv 2012; 63:660–665

2 Kobak KA, Mundt JC, Kennard B: Integrating technology into cognitive behavior therapy for adolescent depression: a pilot study. Ann Gen Psychiatry 2015; 14:37

3 Chu BC, Carpenter AL, Wyszynski CM, et al.: Scalable options for extended skill building following didactic training in cognitive-behavioral therapy for anxious youth: a pilot randomized trial. J Clin Child Adolesc Psychol 2017; 46:401–410

4 Herschell AD, Kolko DJ, Baumann BL, et al.: The role of therapist training in the implementation of psychosocial treatments: a review and critique with recommendations. Clin Psychol Rev 2010; 30:448–466

5 Miller WR, Yahne CE, Moyers TB, et al.: A randomized trial of methods to help clinicians learn motivational interviewing. J Consult Clin Psychol 2004; 72:1050–1062

6 Dimeff LA, Koerner K, Woodcock EA, et al.: Which training method works best? A randomized controlled trial comparing three methods of training clinicians in dialectical behavior therapy skills. Behav Res Ther 2009; 47:921–930

7 Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, et al.: We don’t train in vain: a dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. J Consult Clin Psychol 2005; 73:106–115

8 Stewart RE, Stirman SW, Chambless DL: A qualitative investigation of practicing psychologists’ attitudes toward research-informed practice: implications for dissemination strategies. Prof Psychol Res Pr 2012; 43:100–109

9 Fairburn CG, Allen E, Bailey-Straebler S, et al.: Scaling up psychological treatments: a countrywide test of the online training of therapists. J Med Internet Res 2017; 19:e214

10 Fairburn CG, Cooper Z: Therapist competence, therapy quality, and therapist training. Behav Res Ther 2011; 49:373–378

11 McMillen JC, Hawley KM, Proctor EK: Mental health clinicians’ participation in Web-based training for an evidence-supported intervention: signs of encouragement and trouble ahead. Adm Policy Ment Health Ment Health Serv Res 2016; 43:592–603

12 Calder R, Ainscough T, Kimergård A, et al.: Online training for substance misuse workers: a systematic review. Drugs 2017; 24:430–442

13 Jackson CB, Quetsch LB, Brabson LA, et al.: Web-based training methods for behavioral health providers: a systematic review. Adm Policy Ment Health Ment Health Serv Res 2018; 45:587–610

14 Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0. Edited by Higgins J, Green S. London, Cochrane Collaboration, 2011

15 Moher D, Shamseer L, Clarke M, et al.: Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015; 4:1

16 Hedges LV, Olkin I: Statistical Methods for Meta-Analysis. Waltham, MA, Academic Press, 1985

17 Cohen J: Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Hillsdale, NJ, Erlbaum, 1988

18 Quality Assessment Tool for Controlled Intervention Studies. Bethesda, MD, National Heart, Lung, and Blood Institute, 2014

19 Landis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics 1977; 33:159–174

20 Bennett-Levy J, Hawkins R, Perry H, et al.: Online cognitive behavioural therapy training for therapists: outcomes, acceptability, and impact of support. Aust Psychol 2012; 47:174–182

21 Hubley S, Woodcock EA, Dimeff LA, et al.: Disseminating behavioural activation for depression via online training: preliminary steps. Behav Cogn Psychother 2015; 43:224–238

22 Ehrenreich-May J, Dimeff LA, Woodcock EA, et al.: Enhancing online training in an evidence-based treatment for adolescent panic disorder: a randomized controlled trial. Evid Based Pract Child Adolesc Ment Health 2016; 1:241–258

23 McDonough M, Marks IM: Teaching medical students exposure therapy for phobia/panic—randomized, controlled comparison of face-to-face tutorial in small groups vs solo computer instruction. Med Educ 2002; 36:412–417

24 Gega L, Norman IJ, Marks IM: Computer-aided vs tutor-delivered teaching of exposure therapy for phobia/panic: randomized controlled trial with pre-registration nursing students. Int J Nurs Stud 2007; 44:397–405

25 Harned MS, Dimeff LA, Woodcock EA, et al.: Overcoming barriers to disseminating exposure therapies for anxiety disorders: a pilot randomized controlled trial of training methods. J Anxiety Disord 2011; 25:155–163

26 Harned MS, Dimeff LA, Woodcock EA, et al.: Exposing clinicians to exposure: a randomized controlled dissemination trial of exposure therapy for anxiety disorders. Behav Ther 2014; 45:731–744

27 Rakovshik SG, McManus F, Vazquez-Montes M, et al.: Is supervision necessary? Examining the effects of Internet-based CBT training with and without supervision. J Consult Clin Psychol 2016; 84:191–199

28 Weingardt KR, Villafranca SW, Levin C: Technology-based training in cognitive behavioral therapy for substance abuse counselors. Subst Abus 2006; 27:19–25

29 Weingardt KR, Cucciare MA, Bellotti C, et al.: A randomized trial comparing two models of Web-based training in cognitive-behavioral therapy for substance abuse counselors. J Subst Abuse Treat 2009; 37:219–227

30 Larson MJ, Amodeo M, Locastro JS, et al.: Randomized trial of Web-based training to promote counselor use of cognitive-behavioral therapy skills in client sessions. Subst Abus 2013; 34:179–187

31 Sholomskas DE, Carroll KM: One small step for manuals: computer-assisted training in twelve-step facilitation. J Stud Alcohol 2006; 67:939–945

32 Dimeff LA, Woodcock EA, Harned MS, et al.: Can dialectical behavior therapy be learned in highly structured learning environments? Results from a randomized controlled dissemination trial. Behav Ther 2011; 42:263–275

33 Dimeff LA, Harned MS, Woodcock EA, et al.: Investigating bang for your training buck: a randomized controlled trial comparing three methods of training clinicians in two core strategies of dialectical behavior therapy. Behav Ther 2015; 46:283–295

34 Ruzek JI, Rosen RC, Garvert DW, et al.: Online self-administered training of PTSD treatment providers in cognitive-behavioral intervention skills: results of a randomized controlled trial. J Trauma Stress 2014; 27:703–711

35 Stein BD, Celedonia KL, Swartz HA, et al.: Implementing a Web-based intervention to train community clinicians in an evidence-based psychotherapy: a pilot study. Psychiatr Serv 2015; 66:988–991

36 Cooper Z, Bailey-Straebler S, Morgan KE, et al.: Using the Internet to train therapists: randomized comparison of two scalable methods. J Med Internet Res 2017; 19:e355

37 Granpeesheh D, Tarbox J, Dixon DR, et al.: Evaluation of an eLearning tool for training behavioral therapists in academic knowledge of applied behavior analysis. Res Autism Spectr Disord 2010; 4:11–17

38 Mullin DJ, Saver B, Savageau JA, et al.: Evaluation of online and in-person motivational interviewing training for healthcare providers. Fam Syst Health 2016; 34:357–366

39 Rakovshik SG, McManus F, Westbrook D, et al.: Randomized trial comparing Internet-based training in cognitive behavioural therapy theory, assessment and formulation to delayed-training control. Behav Res Ther 2013; 51:231–239

40 German RE, Adler A, Frankel SA, et al.: Testing a Web-based, trained-peer model to build capacity for evidence-based practices in community mental health systems. Psychiatr Serv 2018; 69:286–292

41 Pincus DB, Ehrenreich JT, Mattis SG: Mastery of Anxiety and Panic for Adolescents: Riding the Wave, Therapist Guide. New York, Oxford University Press, 2008

42 Carroll KM: A Cognitive-Behavioral Approach: Treating Cocaine Addiction. Rockville, MD, US Department of Health and Human Services, 1998

43 Nowinski J, Baker S, Carroll KM: Twelve-Step Facilitation Therapy Manual: A Clinical Research Guide for Therapists Treating Individuals With Alcohol Abuse and Dependence. Rockville, MD, National Institute on Alcohol Abuse and Alcoholism, 1999

44 Linehan M: Skills Training Manual for Treating Borderline Personality Disorder. New York, Guilford, 1993

45 Fairburn CG: Cognitive Behavior Therapy and Eating Disorders. New York, Guilford, 2008

46 Miller WR, Moyers TB: Eight stages in learning motivational interviewing. Subst Abus 2006; 5:3–17

47 Cook DA, Levinson AJ, Garside S, et al.: Internet-based learning in the health professions: a meta-analysis. JAMA 2008; 300:1181–1196

48 Roh KH, Park H-A: A meta-analysis on the effectiveness of computer-based education in nursing. Healthc Inform Res 2010; 16:149–157

49 Nadeem E, Gleacher A, Beidas RS: Consultation as an implementation strategy for evidence-based practices across multiple contexts: unpacking the black box. Adm Policy Ment Health Ment Health Serv Res 2013; 40:439–450

50 Miller WR, Mount KA: A small study of training in motivational interviewing: does one workshop change clinician and client behavior? Behav Cogn Psychother 2001; 29:457–471