Published Online: https://doi.org/10.1176/appi.ps.20220140

Abstract

Objective:

The complex practice of measurement-based care (MBC) for mental health conditions has proven challenging to implement. This study aimed to evaluate an intensive strategy to implement MBC in U.S. Department of Veterans Affairs (VA) Primary Care Mental Health Integration clinics.

Methods:

Ten paired sites were randomly assigned to receive national MBC resources alone or with an intensive implementation strategy (external facilitation plus quality improvement teams) between May 2018 and June 2020. The intervention occurred over 12–18 months; two site pairs completed participation before the COVID-19 pandemic. Using the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework, the authors conducted qualitative interviews and used administrative data to evaluate the implementation, adoption, reach, and effectiveness of MBC.

Results:

All sites improved during the study, suggesting the effectiveness of the VA’s national MBC initiative. Sites with facilitation improved more than comparison sites in implementation, adoption, and reach of MBC. The effectiveness of MBC (i.e., clinician responsiveness to high patient-reported outcome measure [PROM] scores) was demonstrated at all sites both before and after facilitation. After the COVID-19 pandemic began, facilitation sites maintained or improved on their implementation gains, whereas comparison sites uniformly reported decreased emphasis on MBC.

Conclusions:

Implementation facilitation resulted in greater gains in outcomes of interest and helped sites retain focus on MBC implementation. Regardless of study condition, clinicians were responsive to elevated PROM scores, but MBC had a larger impact on care at facilitation sites because of increased uptake. Multiple technological and contextual challenges remain, but MBC holds promise for improving routine mental health care.

HIGHLIGHTS

  • The effectiveness of measurement-based care (MBC), assessed as clinician responsiveness to patient-reported outcome measure scores, was shown in a naturalistic setting.

  • Implementation facilitation resulted in increased adoption and reach of MBC, indicating that the impact of MBC was greater at sites with this implementation support.

  • Implementation facilitation helped sites maintain MBC implementation when they faced challenges due to the COVID-19 pandemic.

Measurement-based care (MBC) is an approach to mental health care in which patient-reported outcome data are routinely collected and used to guide treatment; it is associated with improved treatment outcomes and guideline concordance (1–3) as well as increased patient communication and engagement in care (4, 5). Furthermore, MBC has been shown to improve the cost-effectiveness and efficiency of care in both pharmacotherapy (1, 3, 6) and psychotherapy (2, 7, 8). As a result, multiple organizations have promoted MBC as a worthy goal for implementation (7, 9, 10). Despite its many benefits and widespread enthusiasm for its implementation, MBC is underused (11–14) and has been challenging to implement (9, 15). Multiple barriers to MBC have been documented, including lack of informatics tools, lack of training on its use, lack of incentives (such as performance metrics), providers’ perceptions of burden, insufficient time in session, and lack of alignment with patient goals (7, 11, 13–19).

In addition to the barriers to MBC implementation experienced across all mental health care settings, clinicians integrated into primary care use brief appointments to maintain availability of warm handoffs from primary care providers (20, 21) and therefore face heightened in-session time constraints. These constraints may make MBC especially challenging to conduct in integrated primary care settings, such as the U.S. Department of Veterans Affairs’ (VA’s) Primary Care Mental Health Integration (PCMHI) program. VA expects that all PCMHI programs will include both embedded mental health providers and collaborative care management (22), the latter being inherently measurement based (23, 24). However, although MBC was encouraged in PCMHI training events, no clear expectation for its use emerged until 2017, when VA began requiring national PCMHI competency training for all PCMHI staff (21). At the end of a training period, all providers and care managers must demonstrate their competency, including by explaining MBC and using results from patient-reported outcome measures (PROMs) with patients in a 30-minute appointment. For many PCMHI staff, this training was the first time they had considered using PROMs with patients. For example, a 2015 study reported that only 23% of 8,000 PCMHI patient records showed evidence of use of a PROM (25); clear linkage of PROM use to treatment decisions was found in only 8.5% of charts reviewed (26). Thus, MBC remains a relatively new and potentially challenging practice for many PCMHI providers, and implementation support may be needed to overcome barriers and achieve full implementation of this complex, novel practice.

To support the uptake of MBC in all mental health care settings, VA has rolled out a phased national MBC mental health initiative since 2015 (9). VA defines MBC as consisting of three essential components: collect, share, and act. Collect involves use of standardized self-report measures administered on a repeated basis to track treatment progress. Share involves sharing data with patients and other providers. Act is defined as using data to engage veterans in shared decision making to individualize goals, collaboratively develop treatment plans, assess progress over time, and adjust treatment as appropriate. The first phase, which ended in 2017, consisted of identifying volunteer champion sites and supporting them through provision of a national dashboard, education materials, didactics, and minimal coaching. Champion sites were encouraged to implement MBC by using four PROMs—the Patient Health Questionnaire–9 (PHQ-9) (27), the Generalized Anxiety Disorder–7 (GAD-7) scale (28), the PTSD Checklist for DSM–5 (PCL-5) (29), and the Brief Addiction Monitor (BAM) (30)—at the start of and repeatedly throughout an episode of care. PROM results were expected to be recorded in the electronic medical record (EMR). All other details of MBC implementation were left to the sites’ discretion. The second phase of the initiative began in 2018 and focused on spreading MBC implementation resources through a community of practice. The third phase began in 2020 as the initiative formed field-based work groups for various clinical settings (e.g., substance use disorder and posttraumatic stress disorder treatments) that developed recommendations for national MBC policies specific to their setting. Although early results were promising (9), VA’s implementation efforts were further challenged by the onset of the COVID-19 pandemic. In response to the pandemic, most of the VA mental health workforce moved to provision of telephone- or video-based care by April 2020, requiring significant adaptations to all health care areas, including the novel and complex practice of MBC.

Implementation facilitation is an interactive process of problem solving and support (31) and has been widely utilized, especially in primary care settings, to address contextual challenges and promote successful implementation of complex evidence-based practices and programs (32, 33). Implementation facilitators help stakeholders identify and address implementation barriers and leverage site strengths to maximize the potential for successful implementation. We conducted a large project that sought to understand whether and how an intensive facilitation strategy can effectively support implementation of MBC in VA PCMHI care (34). In this study, we aimed to compare the effectiveness of facilitated support, consisting of an implementation facilitation strategy and resources from the national MBC initiative, with the effectiveness of receiving the national MBC initiative resources alone. We focused our evaluation of the facilitation strategy on implementation outcomes by using the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework (35, 36). This article focuses on the effects of implementation facilitation versus receiving national MBC initiative resources alone on four of the RE-AIM domains: implementation, adoption, reach, and effectiveness.

Methods

This pragmatic, randomized, multisite study was approved by the VA Central Institutional Review Board as a combination of research and quality improvement (QI). Paired sites were randomly assigned to receive facilitation support plus national MBC initiative resources or MBC initiative resources alone. Site-level activities to support implementation of MBC were considered QI, whereas centralized data collection activities measuring implementation outcomes constituted research. Site staff who participated in study interviews provided informed consent. A detailed description of all planned methods has been previously published (34).

Site Recruitment and Randomization

Ten sites were recruited. Potential sites were identified from the MBC initiative’s phase 1 champion sites, as well as other sites with well-implemented PCMHI programs and interest in improving MBC implementation. Potential sites were grouped by key similarities, such as hospital and PCMHI program size. Within each group, the first two sites to obtain all required permissions were paired and randomly assigned to the facilitation or the comparison condition. As sites began the project, site leaders selected the specific PCMHI clinic to participate. As a result, facilitation sites consisted of three VA medical center (VAMC) clinics and two community-based outpatient clinics (CBOCs); comparison sites consisted of four VAMC clinics and one CBOC. Facilitation sites served an average of 17,872 unique patients (range 15,663–22,705) in the previous fiscal year, and comparison sites an average of 15,339 (range 8,131–21,101). At recruitment, the mean percentage of PCMHI visits with PROMs was 27.7% for facilitation sites and 29.9% for comparison sites. Two site pairs completed all study participation before the COVID-19 pandemic; the remaining three pairs experienced the onset of the pandemic during the study intervention.

National MBC Initiative Resources

The study intervention took place between May 2018 and June 2020, so national MBC initiative resources from phases 1 and 2 (described above) were available to all sites. Comparison sites were free to use these resources, QI teams, or other implementation support strategies.

Implementation Facilitation Strategy

The implementation strategy consisted of facilitation from external MBC experts combined with a local QI team at each facilitation site. QI teams included PCMHI leadership, primary care and mental health care leadership (or their designees), and other stakeholders selected by PCMHI site leadership. Facilitation was provided by two highly experienced external facilitators (EFs), one per site, who had expertise in implementation science and PCMHI models of care. Three subject matter experts (SMEs) provided consultation on MBC to the EFs and QI teams as needed. Over 12–18 months, EFs and SMEs met regularly with QI teams and supported their efforts to increase MBC use. EFs provided a mean±SD of 65.7±11.3 hours per site, and SMEs provided 8.4±4.2 hours per site.

EFs applied the facilitation strategy, tailoring it to each site’s needs and resources across three phases: preparation, design, and implementation (Figure 1). During the preparation phase, EFs engaged PCMHI leadership, conducted needs assessments, and helped identify QI team members. The design phase started with a QI team meeting, during which the EF introduced the study, assessed knowledge of and perceptions about the evidence, and led initial planning discussions. Throughout this phase, EFs and SMEs focused on providing training to the QI team on MBC practice and systems change and on helping the QI team assess current MBC practice and develop a customized implementation plan. When this plan was completed, the site began the implementation phase, during which the QI team, with EF and SME support, educated and mentored clinicians and encouraged them to adopt MBC. Throughout this approximately 6-month phase, EFs and SMEs also helped the QI team monitor implementation progress, troubleshoot challenges, use national MBC initiative resources, and adjust local MBC practices to overcome problems and enhance the sustainability of MBC.

FIGURE 1. Phases of measurement-based care facilitation and associated data collection activitiesa

aQI, quality improvement; T1, time 1 (administrative data were captured in the 6 months before the preparation phase); T2, time 2 (administrative data were captured for 6 months, beginning 4 months before the end of the implementation phase).

Evaluation

Using concurrent mixed methods to comprehensively describe the complexity of practice changes (37), we evaluated the implementation facilitation strategy with the RE-AIM (35, 36) domains of implementation (MBC processes that the sites implemented or changed), adoption (uptake of MBC by PCMHI providers), reach (proportion of PCMHI veterans who received MBC), and effectiveness (effect of MBC on care). Although we based our definition of MBC practice on the collect, share, and act components of the VA’s MBC initiative, our definitions of the implementation of these components were more specific. Table 1 provides definitions of the RE-AIM domains and evaluation measures and describes data source specifications for this study. Data collection was organized around the phases of implementation described above (Figure 1). Within each site pair, the timing of data collection at both sites was based on the facilitation site’s phase of facilitation.

TABLE 1. Evaluation measures and definitions based on RE-AIM domainsa

Implementation (fidelity of measurement-based care [MBC] components as implemented in routine care)

 Collect
  Data source and study measures: Qualitative interviews: patient-reported outcome measures (PROMs) are administered, collected, and entered into the medical record.
  Definition and specification: Any activity related to administration and entry of PROMs into the electronic medical record in a manner that would allow graphing and sharing with other team members; entry of data into note text was not sufficient.

 Share
  Data source and study measures: Qualitative interviews: description of how providers share data with the patient, including talking about the results in relation to the patient’s life experience and application of results to the care plan.
  Definition and specification: Discussion of the rationale for MBC, PROM outcomes and change, or lack of change, with patients or with other care team members.

 Act, in patient care
  Data source and study measures: Qualitative interviews: description of how providers use individual patient data to determine treatment options (referrals to higher level of care, treatment in PCMHI or primary care, or self-management) or to adjust ongoing treatment.
  Definition and specification: Use of data to determine treatment types (pharmacotherapy vs. psychotherapy or other treatment option) or treatment levels (brief treatment in PCMHI vs. referral to more intensive treatment options such as general mental health outpatient treatment, specialty mental health care, or inpatient care) or to alter ongoing care in PCMHI or primary care; act could also include a shared decision with the veteran not to engage in active treatment, such as in a watchful waiting approach, or to terminate treatment in response to low or improved PROM scores.

 Act, at program level
  Data source and study measures: Qualitative interviews: description of how aggregate PROM data are used for program evaluation, determinations of treatments that should be offered by a clinic or program, or other program-level considerations; includes sharing aggregate data with leaders and other stakeholders.
  Definition and specification: Although not required by the VA national MBC initiative, “act” was also defined at the program level as including use of aggregate data for quality improvement activities, for demonstrating programmatic effectiveness to others, or for marketing.

Adoption (absolute number or proportion of staff using MBC)
  Data source and study measures: Administrative data: within the population of PCMHI providers who were present at T1 and T2, the number who increased their rate of visits with PROM administration. Qualitative interviews: description of MBC programmatic expectations and uptake across PCMHI providers in a program.
  Definition and specification: Providers with on average ≥1 PCMHI visit per day were included. For providers not practicing in the PCMHI program for the entire period, visits were counted between their first and last visit within the period; for each provider, the proportion of PCMHI visits during which a PROM was recorded on the same day was calculated; for providers who appeared in both baseline and implementation periods, the proportion of providers with increased rates of visits with PROM administration was calculated for each site.

Reach (absolute number or proportion of targeted patients receiving MBC)
  Data source and study measures: Administrative data: percentage of patients seen in PCMHI with at least two visits with PROMs.
  Definition and specification: Cohort included patients with ≥2 PCMHI visits during the 6-month observation period, with ≥1 visit during the past 30 days; PROMs included the PHQ-9, GAD-7, PCL-5, and ISI.

Effectiveness (clinical impact of MBC as implemented in routine care settings)
  Data source and study measures: Administrative data: the association of high, low, or no PHQ-9 score with changes in antidepressant medication or referrals to specialty mental health care during the evaluation period.
  Definition and specification: Patients were included if they had ≥2 PCMHI visits, at least one of which was associated with PROM administration, and a depression diagnosis during the observation period; PHQ-9 scores ≥11 or a positive screen for suicidal ideation were categorized as high, and scores <11 with no suicidal ideation as low; changes in medication included a start, stop, or dose change within 7 days of a PCMHI visit during the observation period; specialty mental health care referrals were identified as those occurring within 30 days of a PCMHI visit without a mental health specialty clinic visit in the previous 6 months.

aGAD-7, Generalized Anxiety Disorder–7; ISI, Insomnia Severity Index; PCL-5, PTSD Checklist for DSM–5; PCMHI, U.S. Department of Veterans Affairs (VA) Primary Care Mental Health Integration program; PHQ-9, Patient Health Questionnaire–9; RE-AIM, Reach, Effectiveness, Adoption, Implementation, Maintenance; T1, time 1 (data were captured for the 6 months before the preparation phase); T2, time 2 (data were captured for 6 months, beginning 4 months before the end of the implementation phase).
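To make the administrative-data specifications in Table 1 concrete, the following is a minimal sketch of the adoption and reach computations, assuming a hypothetical visit-level extract with columns provider_id, patient_id, visit_date, and prom_same_day. These names are illustrative rather than the actual CDW schema, and the logic is our reading of the table, not the authors’ code.

```python
import pandas as pd

def adoption_rates(visits: pd.DataFrame) -> pd.Series:
    """Per-provider proportion of PCMHI visits with a same-day PROM,
    restricted to providers averaging >=1 visit per day (per Table 1)."""
    span = visits.groupby("provider_id")["visit_date"].agg(["min", "max", "count"])
    days = (span["max"] - span["min"]).dt.days.clip(lower=1)
    active = span.index[span["count"] / days >= 1]
    eligible = visits[visits["provider_id"].isin(active)]
    return eligible.groupby("provider_id")["prom_same_day"].mean()

def reach(visits: pd.DataFrame, period_end: pd.Timestamp) -> float:
    """Share of the PCMHI cohort (>=2 visits in the 6-month window,
    >=1 visit in the past 30 days) with PROMs at two or more visits."""
    last_visit = visits.groupby("patient_id")["visit_date"].max()
    n_visits = visits.groupby("patient_id").size()
    cohort = n_visits[(n_visits >= 2) &
                      (last_visit >= period_end - pd.Timedelta(days=30))].index
    prom_counts = visits[visits["prom_same_day"]].groupby("patient_id").size()
    return float((prom_counts.reindex(cohort, fill_value=0) >= 2).mean())
```

Under this reading, site-level adoption would be the share of providers whose rate from a T2 extract exceeds their rate from a T1 extract, comparing two calls to adoption_rates over the two 6-month windows.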


Qualitative Data Collection and Analysis

Qualitative interviews (N=14) with PCMHI leaders for each site (as well as MBC champions at two sites) were designed to capture detailed descriptions of MBC practices in each of the collect, share, and act components (i.e., the RE-AIM domain of implementation) and perceptions about the adoption of those practices (i.e., adoption domain). Time 1 (T1) interviews were conducted during the preparation phase, before initiation of QI team activities. Time 2 (T2) interviews were conducted after the completion of the implementation phase. Interviewees were asked to describe their own views on MBC and MBC practice across their teams and, at T2, to describe changes that had occurred in MBC implementation. All interviews were approximately 1 hour long and were recorded.

We conducted a directed content analysis (38) of verbatim transcripts of these interviews. The analysis team consisted of two experienced qualitative researchers (L.O.W., M.J.R.) and a trained research assistant. First, we developed a priori codes on the basis of the interview guide. The team met regularly to discuss and resolve differences in application of codes and to refine the codes and definitions. We extracted coded material into templated documents organized by codes and subcodes and then summarized them for each site. We entered code summaries for each data collection period into a composite matrix, created site summaries, and compared pre- and postimplementation summaries to examine changes in MBC practice over the study period. We then aggregated our composite matrix into facilitation and comparison sites and further compared practices and changes in practice on the basis of sites’ assigned study conditions.

Clinical Administrative Data Collection and Analysis

Administrative data were captured retrospectively from the VA EMR stored in the VA Corporate Data Warehouse (CDW) (39). For each analysis, study cohorts were designed to include patients or providers who were actively engaged in PCMHI care during each observation period. CDW data captured the visit type (PCMHI or specialty mental health care), diagnoses, and provider for each mental health visit; all PROM administrations (type, score, and date); and prescription data (Table 1).

To determine which PROMs would be included in the study, we identified instruments that were administered on the same day as a PCMHI visit and were brief, repeatable measures used in at least 2% of all PCMHI visits. The resultant measures were the PHQ-9, the GAD-7, the PCL-5, and the Insomnia Severity Index (ISI) (40). Although use of the BAM is encouraged by the MBC initiative, it was not used frequently enough for inclusion.
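A hedged sketch of this inclusion rule follows, where proms (patient_id, prom_type, admin_date) and visits (patient_id, visit_date) are hypothetical frames standing in for the actual CDW tables.

```python
import pandas as pd

# Keep only PROM administrations that fall on the same day as a PCMHI visit,
# counting each instrument at most once per visit.
same_day = proms.merge(
    visits, left_on=["patient_id", "admin_date"],
    right_on=["patient_id", "visit_date"],
).drop_duplicates(["patient_id", "visit_date", "prom_type"])

# Fraction of all PCMHI visits at which each instrument was administered.
usage = same_day.groupby("prom_type").size() / len(visits)
included = usage[usage >= 0.02].index.tolist()  # expected: PHQ-9, GAD-7, PCL-5, ISI
```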

CDW data were captured in two 6-month data pulls. The T1 data pull captured data for the 6 months before the start of the preparation phase. The T2 data pull captured 6 months of data, beginning 4 months before the end of the implementation phase.

Descriptive statistics were used to evaluate changes in site-level reach and adoption. For reach, paired t tests were applied to compare facilitation sites and comparison sites in terms of the proportion of patients with two or more PCMHI visits in which PROMs were used. For MBC effectiveness, we modeled the odds of change in depression treatment by using a generalized mixed-model regression, applying a logit link function, and nesting study site within study pair. Treatment condition, time, and the interaction between treatment condition and time were included as predictors in the model. PHQ-9 administration and PHQ-9 severity history were tested as moderators and main effects. Regression analyses were performed with PROC GLIMMIX (SAS, version 8.2).
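The authors ran these models in SAS with PROC GLIMMIX; as a rough, non-authoritative analogue, the sketch below uses scipy for the paired t test and a statsmodels mixed-effects logistic model. The frame and column names (reach_change, patients, tx_change, condition, time, high_phq9, site) are assumptions for illustration, not the study’s actual variables.

```python
from scipy import stats
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Reach: paired t test on site-level change in reach across the five
# facilitation-comparison pairs (df = 5 - 1 = 4).
t_stat, p_value = stats.ttest_rel(
    reach_change["facilitation"], reach_change["comparison"]
)

# Effectiveness: odds of a depression-treatment change modeled with condition,
# time, their interaction, and PHQ-9 severity; a random intercept for site
# approximates site nested within pair.
model = BinomialBayesMixedGLM.from_formula(
    "tx_change ~ condition * time + high_phq9",
    vc_formulas={"site": "0 + C(site)"},
    data=patients,
)
fit = model.fit_vb()  # variational Bayes; GLIMMIX instead uses (pseudo)likelihood
print(fit.summary())
```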

Results

Implementation: Qualitative Findings

Implementation findings are described by MBC component and are summarized in Table 2.

TABLE 2. Summary of qualitative findings regarding MBC implementation and adoption over time, by study condition

Measure | Facilitation sites, T1 (N=5) | Facilitation sites, T2 (N=5) | Comparison sites, T1 (N=5) | Comparison sites, T2 (N=5)
Implementation
Collect
 Paper and pencil | 4 | 3 | 5 | 4
 Tablet computers | 1 | 2 | 0 | 1
 Frequency expectationa
  No specification | 1 | 1 | 4 | 3
  Some specification | 3 | 2 | 0 | 2
  Every visit | 1 | 2 | 1 | 0
  Continued expectations during COVID-19b | — | 3 | — | 1
Share
 Unknown or variable | 5 | 1 | 5 | 2
 Increased sharing | — | 4 | — | 3
 Increased discussion of sharing at case conferences | — | 5 | — | 0
Act
 Use MBC with patients | 5 | 5c | 5 | 5
 Use of algorithms | 0 | 1 | 0 | 0
 Programmatic use of aggregate PROM data | 2d | 4e | 1f | 1f
 Programmatic use continued in COVID-19b | — | 3 | — | 1
Adoption
 MBC is standard of careg | 4 | 5 | 1 | 2
 Enthusiasm for MBC variesh | 2 | 1 | 3 | 3
 MBC importance to providers increased | — | 3 | — | 2
 Decreased importance during COVID-19b | — | 0 | — | 3

aNo specification: use of measurement-based care (MBC) was encouraged but not required or was expected but without any specific frequency expectations. Some specification: expectation to collect patient-reported outcome measures (PROMs) at some frequency fewer than at every visit (i.e., expectations included collection at first appointment only, first and last appointment and at least once midtreatment, first and every other follow-up appointment, or 50% of all visits).

bSix of 10 sites participated in the project during the COVID-19 pandemic (facilitation sites, N=3; comparison sites, N=3).

cAlthough all sites continued to report use of MBC with patients, one site reported increased use, and one site reported increased discipline using PROMs.

dOne site reviewed aggregate data to offer additional group therapy, and one site used a dashboard to discuss frequency of PROM collection at meetings.

eAll four sites used dashboards to track administration frequency and give feedback to clinicians. Moreover, one site continued to use data for group therapy recruitment, and one site reviewed referrals by provider to present targeted education regarding the range of services available in their Primary Care Mental Health Integration program.

fThe site used dashboard data to evaluate the frequency of PROM administration.

gMBC was strongly encouraged at all sites.

hGenerally high levels of enthusiasm were reported by all site leaders at both T1 (time 1; interviews were conducted during the preparation phase) and T2 (time 2; interviews were conducted immediately after the implementation phase).


Collect.

At T1, at all sites except for one facilitation site that used tablet computers, the predominant mode of PROM administration was on paper. Paper administration meant that, most often, entry of scores into the EMR required clinicians to perform a separate step beyond documenting progress notes. Strategies for accomplishing this data entry varied among clinicians, regardless of the site’s assigned study condition. By T2, one additional facilitation site and one comparison site had successfully implemented collection through tablet computers that entered data automatically. With the COVID-19–related transition to virtual care, all sites found workarounds for administration; typically, clinicians either read PROM questions to patients or used fillable PDF files.

At T1, expectations for the frequency of PROM administration varied widely across sites, irrespective of study condition. One facilitation site and one comparison site expected PROM administration at every visit. By T2, one additional facilitation site had increased the expectation to collect at every visit, and the other facilitation sites did not change. One comparison site increased its expectation to collect at 50% of visits, one decreased its expectation, and the others remained unchanged. At all three facilitation sites that were active during the COVID-19 pandemic, expectations did not change when clinicians moved to virtual care. In contrast, two of three active comparison sites expressly decreased their reported emphasis on MBC, including data collection.

Share.

At T1, PCMHI leaders either did not know how often clinicians were sharing data with patients or reported significant variability across clinicians. By T2, most sites, regardless of study condition, reported that sharing, or clinician motivation to share, had increased. All facilitation sites and no comparison sites reported increased discussion at team meetings about sharing data with patients.

Act.

The ways in which data were used in patient care were similar across sites and did not appear to vary by study condition or over time. Clinicians used data to inform treatment decisions, determine level of care (i.e., PCMHI vs. specialty referral), and track progress. Data were also used as a springboard for patient discussions about lack of progress, ambivalence about treatment, or termination of therapy. PCMHI leaders also occasionally highlighted the importance of MBC in helping clinicians keep their sessions focused and improve the efficiency of care. At T1, no sites reported use of algorithms to guide care, and by T2, only one facilitation site had created an algorithm for depression care.

Although a national PROM dashboard and care management software capabilities were available, use of aggregate PROMs for program evaluation, QI, or marketing was limited at T1 across all sites. One facilitation site was reviewing data to identify patients for recruitment to group treatment, and one facilitation site was using dashboard data to discuss the frequency of PROM collection at team meetings. By T2, national dashboards were being used by four facilitation sites, but only one comparison site, to track the frequency of PROM administration and to give feedback to clinicians. Notably, after the onset of COVID-19, PCMHI leaders at two of three comparison sites commented that use of programmatic data had stopped, whereas none of the three facilitation sites changed their use of such data.

Adoption: Quantitative and Qualitative Findings

Quantitative data indicated that overall, from T1 to T2, the total number of providers who increased their rates of PROM administration was greater at facilitation (N=15) than comparison (N=10) sites (Figure 2). Because of the small sample, no further quantitative analysis was possible. However, four of the five facilitation sites showed some increased adoption, whereas three of the five comparison sites showed an increase. Of the six sites still engaged in the study during the pandemic, two facilitation sites and one comparison site increased adoption. Qualitative findings (Table 2) indicated generally high enthusiasm for MBC across sites and showed that more facilitation sites than comparison sites perceived adoption to be high at T1. Notably, the importance of MBC to providers remained unchanged after the onset of the COVID-19 pandemic at all facilitation sites, whereas its importance decreased at all comparison sites that actively participated in the study during the pandemic.

FIGURE 2. Providers with or without increased rates of PROM administration at time 2, by study sitea

aPre–COVID-19 sites completed the study before the onset of the COVID-19 pandemic. COVID-19 sites experienced the onset of the pandemic during the study intervention. Data were captured for 6 months, beginning 4 months before the end of the implementation phase and compared with data captured in the 6 months before the preparation phase for the same providers. C, comparison site; F, facilitation site; PROM, patient-reported outcome measure.

Reach: Quantitative Findings

The change in the number of veterans with at least two visits during which PROMs were collected was significantly greater at facilitation than comparison sites (change in number=24.8, standard error=4.4, 95% CI=12.7–35.9, t=5.69, df=4, p=0.005). Because the sample was small, site-level quantitative analysis was not possible. However, as seen in Figure 3, from T1 to T2, three facilitation sites maintained (defined as being within ±10 percentage points) and two facilitation sites increased the proportion of patients with at least two visits with PROMs collected. Over the same period, two comparison sites maintained and three sites decreased the reach of MBC. At the six sites that were still active after the onset of the pandemic, one facilitation site increased and two maintained reach of MBC, whereas one comparison site maintained and two decreased the reach of MBC to the PCMHI population.
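As an arithmetic gloss (ours, not the authors’): with five site pairs, the paired t test has df = 5 − 1 = 4, and the reported estimate and standard error reproduce the test statistic up to rounding,

```latex
t = \frac{\bar{d}}{\mathrm{SE}(\bar{d})} \approx \frac{24.8}{4.4} \approx 5.6,
\qquad \mathit{df} = 5 - 1 = 4,
```

consistent with the reported t=5.69.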

FIGURE 3. Percentage of patients with at least two PCMHI visits with PROMs collected, by study sitea

aPre–COVID-19 sites completed the study before the onset of the COVID-19 pandemic. COVID-19 sites experienced the onset of the pandemic during the study intervention. C, comparison site; F, facilitation site; PCMHI, U.S. Department of Veterans Affairs Primary Care Mental Health Integration program; PROM, patient-reported outcome measure; T1, time 1 (data were captured in the 6 months before the preparation phase); T2, time 2 (data were captured for 6 months, beginning 4 months before the end of the implementation phase).

Effectiveness: Quantitative Findings

Antidepressant change.

In the T1 data, 104 (35%) of 301 and 205 (52%) of 397 patients who had a PCMHI visit with a PHQ-9 score had a change to their prescription of antidepressant medication at facilitation and comparison sites, respectively. In the T2 data, 81 (32%) of 254 and 175 (41%) of 429 such antidepressant changes occurred at facilitation and comparison sites, respectively. We observed a significant main effect of PHQ-9 score severity (F=20.05, df=1 and 9, p=0.002), such that the odds of experiencing a change in antidepressant prescription increased by a factor of 1.9 (95% CI=1.4–2.6) when a patient received at least one high PHQ-9 score (≥11) or expressed any suicidal ideation, compared with patients having PHQ-9 scores <11. No significant interaction effects were found for study condition, and no main effects were found for study condition or time.
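One way to read this odds ratio, using the logistic model described in Methods (the symbols here are ours, not the authors’):

```latex
\log\frac{P(\text{change})}{1-P(\text{change})}
  = \beta_0 + \beta_1\,\mathbb{1}[\text{PHQ-9} \ge 11 \text{ or suicidal ideation}] + \cdots,
\qquad e^{\hat{\beta}_1} = 1.9\ (95\%\ \mathrm{CI}=1.4\text{--}2.6).
```

That is, a patient with at least one high PHQ-9 score or any suicidal ideation had roughly twice the odds of an antidepressant change as a patient whose scores stayed below 11 without suicidal ideation.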

Specialty mental health referral.

In the T1 data, 54 (17%) of 311 and 40 (9%) of 432 patients who had a PCMHI visit with a PHQ-9 score had a referral to specialty care at facilitation and comparison sites, respectively. In the T2 data, 53 (20%) of 263 and 47 (10%) of 453 such referrals were made to specialty care at facilitation and comparison sites, respectively. As with the medication changes, we also noted a significant effect of PHQ-9 severity (F=8.4, df=1 and 9, p=0.02), such that the odds of referral to more intensive care increased by a factor of 1.8 (95% CI=1.1–3.0) when a patient received at least one high PHQ-9 score (≥11) or expressed any suicidal ideation, compared with patients having PHQ-9 scores <11. We found no significant interaction effects for study condition and no main effects for study condition or time.

Integration of Qualitative and Quantitative Results

Overall, the qualitative data examining MBC implementation revealed more frequent improvements in the collect and share components of MBC at facilitation sites. These findings were underscored by quantitative data, which showed that more providers increased their use of MBC (adoption) and a significantly greater number of veterans received MBC in facilitation than in comparison sites (reach). Qualitative (implementation) and quantitative (effectiveness) findings were also aligned for the act component; when data collection occurred, clinicians used the collected data in similar ways across the study conditions and without significant changes over time.

Discussion

Taken together, our findings provide a snapshot of the highly variable successes and challenges experienced by 10 VA sites as they attempted to implement MBC with or without facilitation. Although they showed variability, all sites improved MBC implementation. Overall, our findings suggest that facilitation helped sites to implement or improve aspects of MBC (implementation), and associated improvements were seen in adoption by providers and reach to more patients. The implementation intervention had no direct effect on the effectiveness of MBC. However, when PROMs were collected, clinicians responded to elevated PROM scores (indicating significant symptoms of depression and the need for initiation of, or change in, treatment). Because significantly more PROMs were collected at facilitation sites than at comparison sites, the impact of MBC on the quality of depression care was greater at those sites.

Our findings replicate and extend what other studies have reported. Consistent with the literature from within and outside VA, we found numerous challenges in implementing MBC in mental health care (7, 9, 41–43). In VA, increased use of PROMs has been associated with VA’s national MBC initiative (9). We observed increases in use of PROMs at some sites that used national MBC initiative resources alone, but gains were significantly greater at sites that received facilitation support. Throughout the literature, definitions of MBC have been inconsistent, and implementation studies have infrequently measured clinicians’ sharing of data with patients (9, 43, 44). Our project advances the field by considering more aspects of MBC, including the sharing and use of data. Through a qualitative approach, we could capture PCMHI leaders’ perceptions of how the clinicians on their teams discussed MBC with patients. Finally, the effectiveness of MBC has rarely been demonstrated in naturalistic settings. Importantly, the findings of our study—obtained outside of randomized controlled trials—show that clinicians are responsive to PROM scores.

Our mixed-methods study was well equipped for the naturalistic experiment that occurred when six of our 10 sites participated during the COVID-19 pandemic between the baseline and implementation observation periods. Qualitative findings showed that facilitation sites overwhelmingly “stayed the course” of MBC implementation despite unprecedented changes to care delivery. As a result, adoption and reach at facilitation sites grew or were maintained, despite COVID-19–related challenges, whereas the opposite was seen at comparison sites. These results help us understand the importance of implementation facilitation for sustaining focus on a practice innovation despite a massive upheaval in care provision. Nearly every large practice-change effort is challenged by competing initiatives, although rarely ones so drastic. Thus, this finding is widely applicable.

Despite leaders’ and clinicians’ strong desire to implement MBC, multiple barriers remain that apply across settings. Time and simple technological tools to collect and enter PROMs were significant challenges across sites. Even sites that had successfully implemented tablet-based collection were affected after care shifted to the virtual environment. Technological solutions that automate data entry and are independent of the physical presence of patients, such as the text- and e-mail–based PROM collection recently implemented in VA, are essential. Community settings require similar tools but may struggle in the absence of integrated medical record systems. A second lingering challenge that applies to both VA and the community is the essentially private nature of sharing data with patients within the confines of therapy sessions. In VA, providers are trained to share PROMs with patients, yet measurement of sharing activity remains challenging. PCMHI leaders at facilitation sites tried to overcome this barrier by modeling and discussing sharing at team meetings, an approach that would be applicable across settings.

Study limitations included our use of PCMHI leaders as informants about the perspectives of all PCMHI providers on their teams; these reports may have been biased by the leaders’ own, typically very favorable, views of MBC. However, this methodological choice limited the burden on busy clinicians. Generalizability of our findings may be limited by our use of sites that could be considered relatively early adopters of MBC; implementation of MBC is likely to be more challenging at less motivated sites. Our quantitative data were limited by the nature of clinical administrative data, especially at the majority of sites, where paper-and-pencil administration may have made it harder to enter PROM results into the EMR; our findings therefore likely underestimated PROM collection at those sites. Our quantitative analyses for reach and adoption were limited by the small sample, which did not allow study of the sizable variations in team structure and roles. Finally, sustainment of MBC implementation was not addressed in this article.

Conclusions

External facilitation was associated with increased implementation of MBC and helped sites overcome even the extraordinary challenges of the COVID-19 pandemic. The results of our naturalistic study also indicate that MBC is effective even outside of randomized controlled trials. Although continued systemwide and more intensive support remains essential to overcome barriers and provide new technological solutions, once MBC is implemented, it is well positioned to improve mental health care.

Office of Mental Health and Suicide Prevention, U.S. Department of Veterans Affairs (VA) Center for Integrated Healthcare, Washington, D.C. (Wray, Tauriello); Division of Geriatrics and Palliative Care, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo (Wray); Veterans Integrated Service Network (VISN) 4 Mental Illness Research, Education, and Clinical Center (MIRECC), Corporal Michael J. Crescenz VA Medical Center, Philadelphia (Oslin, Leong); Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia (Oslin); Tigermed-BDM, Somerset, New Jersey (Leong); VA Behavioral Health Quality Enhancement Research Initiative (QUERI), Central Arkansas Veterans Healthcare System, North Little Rock (Pitcock, Drummond, Ritchie); Department of Psychology, College of Arts and Sciences, University at Buffalo, Buffalo (Tauriello); Department of Psychiatry, University of Arkansas for Medical Sciences, Little Rock (Drummond, Ritchie).
Send correspondence to Dr. Wray ().

This study was supported by a grant from the VA QUERI (QUE 15-289), the VA Center for Integrated Healthcare, and the VISN 4 MIRECC.

Dr. Leong has performed contractual work for AbbVie Pharmaceuticals. The other authors report no financial relationships with commercial interests.

The funding bodies had no involvement in the design of the study; the collection, analysis, and interpretation of data; or in writing the manuscript. The views expressed in this article are those of the authors and do not represent the views of the VA or the U.S. government.

References

1. Guo T, Xiang YT, Xiao L, et al.: Measurement-based care versus standard care for major depression: a randomized controlled trial with blind raters. Am J Psychiatry 2015; 172:1004–1013

2. Lambert MJ, Whipple JL, Hawkins EJ, et al.: Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clin Psychol Sci Pract 2003; 10:288–301

3. Trivedi MH, Daly EJ: Measurement-based care for refractory depression: a clinical decision support model for clinical research and practice. Drug Alcohol Depend 2007; 88:S61–S71

4. Eisen SV, Dickey B, Sederer LI: A self-report symptom and problem rating scale to increase inpatients’ involvement in treatment. Psychiatr Serv 2000; 51:349–353

5. Carlier IVE, Meuldijk D, Van Vliet IM, et al.: Routine outcome monitoring and feedback on physical or mental health status: evidence and theory. J Eval Clin Pract 2012; 18:104–110

6. Adli M, Rush AJ, Moller HJ, et al.: Algorithms for optimizing the treatment of depression: making the right decision at the right time. Pharmacopsychiatry 2003; 36:222–229

7. Fortney JC, Unützer J, Wrenn G, et al.: A tipping point for measurement-based care. Psychiatr Serv 2017; 68:179–188

8. Kendrick T, El-Gohary M, Stuart B, et al.: Routine use of patient reported outcome measures (PROMs) for improving treatment of common mental health disorders in adults. Cochrane Database Syst Rev 2016; 7:CD011119

9. Resnick SG, Hoff RA: Observations from the national implementation of Measurement Based Care in Mental Health in the Department of Veterans Affairs. Psychol Serv 2020; 17:238–246

10. Revised Outcome Measures Standard for Behavioral Health Care (Standard CTS 03.01.09). Washington, DC, The Joint Commission, 2018

11. Garland AF, Kruse M, Aarons GA: Clinicians and outcome measurement: what’s the use? J Behav Health Serv Res 2003; 30:393–405

12. Gilbody SM, House AO, Sheldon TA: Psychiatrists in the UK do not use outcomes measures: national survey. Br J Psychiatry 2002; 180:101–103

13. Jensen-Doss A, Haimes EMB, Smith AM, et al.: Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Adm Policy Ment Health 2018; 45:48–61

14. Lewis CC, Boyd M, Puspitasari A, et al.: Implementing measurement-based care in behavioral health: a review. JAMA Psychiatry 2019; 76:324–335

15. Mellor-Clark J, Cross S, Macdonald J, et al.: Leading horses to water: lessons from a decade of helping psychological therapy services use routine outcome measurement to improve practice. Adm Policy Ment Health 2016; 43:279–285

16. Jensen-Doss A, Hawley KM: Understanding barriers to evidence-based assessment: clinician attitudes toward standardized assessment tools. J Clin Child Adolesc Psychol 2010; 39:885–896

17. Oslin DW, Hoff R, Mignogna J, et al.: Provider attitudes and experience with measurement-based mental health care in the VA implementation project. Psychiatr Serv 2019; 70:135–138

18. Hatfield DR, Ogles BM: Why some clinicians use outcome measures and others do not. Adm Policy Ment Health 2007; 34:283–291

19. Bickman L: A measurement feedback system (MFS) is necessary to improve mental health outcomes. J Am Acad Child Adolesc Psychiatry 2008; 47:1114–1119

20. Dollar KM, Kearney LK, Pomerantz AS, et al.: Achieving same-day access in integrated primary care. Fam Syst Health 2018; 36:32–44

21. Kearney LK, Dollar KM, Beehler GP, et al.: Creation and implementation of a national interprofessional integrated primary care competency training program: preliminary findings and lessons learned. Train Educ Prof Psychol 2019; 14:219–227

22. Kearney LK, Post EP, Pomerantz AS, et al.: Applying the interprofessional patient aligned care team in the Department of Veterans Affairs: transforming primary care. Am Psychol 2014; 69:399–408

23. Rubenstein LV, Chaney EF, Ober S, et al.: Using evidence-based quality improvement methods for translating depression collaborative care research into practice. Fam Syst Health 2010; 28:91–113

24. Oslin DW, Sayers S, Ross J, et al.: Disease management for depression and at-risk drinking via telephone in an older population of veterans. Psychosom Med 2003; 65:931–937

25. Beehler GP, King PR, Vair CL, et al.: Measurement of common mental health conditions in VHA co-located, collaborative care. J Clin Psychol Med Settings 2016; 23:378–388

26. King PR, Beehler GP, Vair CL, et al.: Identifying measurement-based care practices of VHA co-located collaborative care providers. Prof Psychol Res Pract 2017; 48:236

27. Kroenke K, Spitzer RL, Williams JB: The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001; 16:606–613

28. Spitzer RL, Kroenke K, Williams JBW, et al.: A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med 2006; 166:1092–1097

29. PTSD: National Center for PTSD. Washington, DC, US Department of Veterans Affairs, 2013. https://www.ptsd.va.gov. Accessed Nov 29, 2022

30. Cacciola JS, Alterman AI, Dephilippis D, et al.: Development and initial evaluation of the Brief Addiction Monitor (BAM). J Subst Abuse Treat 2013; 44:256–263

31. Stetler CB, Legro MW, Rycroft-Malone J, et al.: Role of “external facilitation” in implementation of research findings: a qualitative evaluation of facilitation experiences in the Veterans Health Administration. Implement Sci 2006; 1:23

32. Kirchner JE, Ritchie MJ, Pitcock JA, et al.: Outcomes of a partnered facilitation strategy to implement primary care-mental health. J Gen Intern Med 2014; 29:904–912

33. Wang A, Pollack T, Kadziel LA, et al.: Impact of practice facilitation in primary care on chronic disease care processes and outcomes: a systematic review. J Gen Intern Med 2018; 33:1968–1977

34. Wray LO, Ritchie MJ, Oslin DW, et al.: Enhancing implementation of measurement-based mental health care in primary care: a mixed-methods randomized effectiveness evaluation of implementation facilitation. BMC Health Serv Res 2018; 18:753

35. Glasgow RE, Vogt TM, Boles SM: Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 1999; 89:1322–1327

36. Holtrop JS, Rabin BA, Glasgow RE: Qualitative approaches to use of the RE-AIM framework: rationale and methods. BMC Health Serv Res 2018; 18:177–210

37. Creswell JW, Zhang W: The application of mixed methods designs to trauma research. J Trauma Stress 2009; 22:612–621

38. Hsieh HF, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res 2005; 15:1277–1288

39. VA Informatics and Computing Infrastructure (VINCI). Washington, DC, US Department of Veterans Affairs, 2008

40. Bastien CH, Vallières A, Morin CM, et al.: Validation of the Insomnia Severity Index as an outcome measure for insomnia research. Sleep Med 2001; 2:297–307

41. Brooks Holliday S, Hepner KA, Farmer CM, et al.: A qualitative evaluation of Veterans Health Administration’s implementation of measurement-based care in behavioral health. Psychol Serv 2020; 17:271–281

42. Scott K, Lewis CC: Using measurement-based care to enhance any treatment. Cogn Behav Pract 2015; 22:49–59

43. Lewis CC, Scott K, Marti CN, et al.: Implementing measurement-based care (iMBC) for depression in community mental health: a dynamic cluster randomized trial study protocol. Implement Sci 2015; 10:127

44. Peterson K, Anderson J, Bourne D: Evidence Brief: Use of Patient Reported Outcome Measures for Measurement Based Care in Mental Health Decision-Making. Washington, DC, US Department of Veterans Affairs, 2018