
Evaluating the Implementation of Integrated Mental Health Care: A Systematic Review to Guide the Development of Quality Measures

Published Online: https://doi.org/10.1176/appi.ps.201600464

Abstract

Objective:

Although the effectiveness of integrated mental health care has been demonstrated, its implementation in real-world settings is highly variable, may not conform to evidence-based practice, and has rarely been evaluated. Quality indicators can guide improvements in integrated care implementation. However, the literature on indicators for this purpose is limited. This article reports findings from a systematic review of existing measures by which to evaluate integrated care models in primary care settings.

Methods:

Bibliographic databases and gray literature sources, including academic conference proceedings, were searched to July 2014. Measures used or proposed to evaluate integrated care implementation or outcomes were extracted and critically appraised. A qualitative synthesis was conducted to generate a panel of unique measures and to group these measures into broad domains and specific dimensions of integrated care program performance.

Results:

From 172 literature sources, 1,255 measures were extracted, which were distilled into 148 unique measures. Existing literature frequently reports integrated care program effectiveness vis-à-vis evidence-based care processes and individual clinical outcomes, as well as efficiency (cost-effectiveness) and client satisfaction. No measures of safety of care and few measures of equitability, accessibility, or timeliness of care were located, despite the known benefits of integrated care in several of these areas.

Conclusions:

To realize the potential for quality measurement to improve integrated care implementation, future measures will need to incorporate domains of quality that are presently unaddressed; microprocesses of care that influence effectiveness, sustainability, and transferability of models of care; and client and health care provider perspectives on meaningful measures of quality.

Mental and substance use disorders are the leading cause of years lived with disability worldwide (22.9% of all nonfatal disease burden) (1). Detection and treatment of mental illness are low or delayed, hampered by shortages of trained psychiatrists and other mental health providers, inefficient use of existing health human and financial resources, and stigma leading to avoidance of mental health care (2–4).

Integrated care models, in which trained mental health specialists support primary care providers to deliver evidence-based mental health care, disease management, and client education, have emerged as one solution to these problems. Numerous systematic reviews and meta-analyses have demonstrated that integrated care improves access to mental health care, clinical outcomes, and cost-effectiveness of care (4–8). For one model, known as collaborative care and derived from Wagner’s chronic care model, evidence from numerous randomized controlled trials is particularly robust (8–11). Some authors have suggested that integrated care models may be the most promising approaches to achieving population impact by reducing the burden of mental illness globally (2,4,12,13).

However, implementation of integrated care models in real-world primary care settings is variable, may not conform to evidence-based practice, and has rarely been evaluated (14). The most rigorously studied models have been patchily implemented, owing in part to organizational, financial, and attitudinal barriers (15–18). In addition, other models of integrated care have been adopted without being thoroughly tested (14,19,20). This has led to a lack of clarity regarding the key characteristics of effective integrated care.

In turn, poor or incomplete implementation of integrated care contributes to poor integration of general medical and mental health care, inappropriate variation in clinical care, delayed follow-up after treatment initiation, treatment drop-out, and insufficient improvement in symptoms (15–18). High-profile efforts to scale up the collaborative care model have failed to demonstrate improved clinical outcomes, which is consistent with known difficulties transferring complex interventions across diverse contexts (21–25). Unfortunately, in these cases, important structures (that is, the conditions under which health care is provided) and care processes that may contribute to outcomes have not been consistently measured and, indeed, have not been well articulated. To meet population mental health needs, it is vital that we identify and close gaps in implementation of integrated care in primary care.

Quality measurement can illuminate gaps in the translation of evidence into practice and identify potential targets for quality improvement (26–28). However, there is scant literature on quality frameworks and indicators by which to evaluate integrated care (20,29–33). Researchers have attempted to define dimensions of high-quality mental health care delivered in primary care settings, but consideration of integrated care practice and evidence has been limited (29). Other efforts that have focused on integrated care have been limited by use of generic frameworks applicable to health care in general, a small number of measures focused on care processes for single diseases, exclusive focus on client experience and outcomes (foregoing measures of provider, system, or financial outcomes), and emphasis on the chronic care model without regard to identifying its critical components and how they can be transferred across contexts (32,34,35). One recent study queried major U.S. databases of quality measures, seeking those that could be applicable to integrated care (36). Although such measures may more easily gain acceptance, particularly for performance measurement when funding may be at stake, they may not be the most important, comprehensive, or balanced set of measures to evaluate and improve integrated care implementation.

We are developing a quality framework, indicators, and a measurement strategy for integrated care in primary care settings. One definition of quality indicator is “a population-based measure that enables users to quantify the quality of a specific aspect of care by comparing it to evidence-based criteria. Indicators require defining both those patients whose care meets the indicator criteria (the numerator) and those who are eligible for the indicator, or the population of focus (the denominator)” (37). This framework is guided by Donabedian’s (generic) quality framework, which states that the organization and structure of the health care delivery system shape health care processes, which in turn affect outcomes (26,28). Furthermore, the measures are informed by a seminal report from the Institute of Medicine (IoM) on quality of care, which identified the following six aims for health care: safety, effectiveness, patient centeredness, timeliness, efficiency, and equitability (38). The development of a coherent strategy for assessing implementation of integrated care will pave the way for quality improvement and translational research and will ultimately help eliminate the quality chasm (38).
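To make the numerator/denominator logic of this definition concrete, the minimal Python sketch below computes an indicator rate for a purely hypothetical process measure (follow-up contact within 30 days of treatment initiation); the field names and the 30-day window are illustrative assumptions, not measures drawn from the review.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Episode:
    """One client's treatment episode (hypothetical fields for illustration)."""
    initiated_treatment: bool               # eligible for the indicator (denominator)
    days_to_first_followup: Optional[int]   # None if no follow-up occurred

def followup_indicator_rate(episodes: list[Episode], window_days: int = 30) -> float:
    """Quality indicator = numerator / denominator.

    Denominator: episodes eligible for the indicator (treatment initiated).
    Numerator:   eligible episodes meeting the criterion (follow-up within the window).
    """
    denominator = [e for e in episodes if e.initiated_treatment]
    numerator = [
        e for e in denominator
        if e.days_to_first_followup is not None
        and e.days_to_first_followup <= window_days
    ]
    if not denominator:
        raise ValueError("No eligible episodes: denominator is empty.")
    return len(numerator) / len(denominator)

# Hypothetical data: 2 of 3 eligible episodes meet the criterion -> 0.67
episodes = [
    Episode(True, 14),
    Episode(True, 45),
    Episode(True, 7),
    Episode(False, None),  # not eligible, excluded from the denominator
]
print(f"Indicator rate: {followup_indicator_rate(episodes):.2f}")
```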

As a first step toward developing a quality framework and indicators, we conducted a systematic literature review in which we sought to catalog and critically appraise existing quality measures that have been used to evaluate integrated care models implemented in primary care settings. Our specific research questions were as follows: What quality measures to evaluate integrated care delivery exist in the peer-reviewed and gray literature? How have they been implemented? What do they collectively suggest are key characteristics of integrated care program functioning?

Quality indicators that outline an evidence-based standard of care to which all integrated care programs should adhere, and that detail how attainment of that standard should be measured, remain very limited. We therefore sought to examine broadly the ways in which integrated care implementation can be measured (27).

Methods

This study was conducted from May 2014 to July 2016. Institutional review board approval was not required. We initially planned a scoping review to comprehensively inventory relevant measures of integrated care implementation (39,40). We developed a study protocol to guide the review process (PROSPERO registration number CRD42016038387) (41). After data extraction and critical appraisal of the measures and prior to data analysis, we modified our analytic strategy. In light of the extensive and heterogeneous scope of available data, we conducted a qualitative synthesis that forms the beginning of a logic model regarding integrated care program functioning.

Study Eligibility

Our systematic review included published and gray literature meeting all of the following criteria: described mental health care provided in a primary care setting; described an integrated mental health care model, for example, consultation-liaison or collaborative care models; and described any measures that were or could be used to assess the implementation or outcomes of integrated care. Primary care settings were defined as the first point of contact and the locus of responsibility for health care delivered to a population of clients over time (42). Integrated care models were defined with reference to the typology and parameters described by the Agency for Healthcare Research and Quality (AHRQ) (7,30). All study designs were included along with reports that did not present original research.

Search Strategy and Screening

In collaboration with a librarian, we developed the search strategy, which another librarian independently reviewed (43). We identified literature published in English and indexed in the electronic databases MEDLINE, EMBASE, PsycINFO, CINAHL, and PubMed prior to July 3, 2014, using subject headings and key words encompassing integrated care AND primary care AND mental health care AND quality measurement. We retrieved gray literature through Google searches by using the same terms and through Web sites of relevant organizations and academic conferences. Finally, we searched the reference lists of all included sources. Two research team members (AI and a staff member) independently screened abstracts and then full texts of selected articles for inclusion. [A sample search string, a list of Web-based gray literature sources, and a PRISMA diagram are included in Online Supplement 1.] Conflicts were resolved by team consensus. Interrater reliability was assessed with the kappa statistic (44).
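The interrater reliability statistic mentioned above can be illustrated with a minimal sketch: the Python example below computes Cohen's kappa for two reviewers' include/exclude decisions on ten hypothetical abstracts. The decisions shown are invented for illustration and are not study data.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("Raters must provide the same, nonzero number of decisions.")
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    if expected == 1:  # both raters used a single category; agreement is perfect by construction
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for 10 abstracts
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude",
              "exclude", "include", "exclude", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "include", "include",
              "exclude", "include", "exclude", "exclude", "exclude"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # -> 0.78
```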

Data Collection and Critical Appraisal

We extracted all measures that evaluated the structure, process, or outcomes (28) of integrated care in primary care settings. Using a standard data collection form, we extracted study characteristics, population, setting, measures and their classification by Donabedian and IoM domain, and details of measures used (for example, data source, measurement method, and scales). For each source, one research team member (AI or a staff member) extracted data and a second reviewer (NS or AGR) validated the data collected. We organized the citations and data using DistillerSR software.

Given our goal of constructing a quality framework and indicators, we critically appraised each measure found in the literature from the perspective of characteristics of good indicators, using Stelfox and Straus’ (45,46) scale, which is based on instruments from AHRQ and RAND and which includes the following specific dimensions: targets important improvements, precisely defined and specified, reliable, valid, appropriate risk adjustment (for outcome measures only), reasonable costs for data collection effort, and results can be easily interpreted [see Online Supplement 1 for details]. Following Stelfox and Straus’ recommended method, we assigned each measure a score between 1 and 9 for each dimension, where a score from 1 to 3 is considered disagreement on that dimension, 4 to 6 is considered neutral, and 7 to 9 is considered agreement on that dimension. A low score overall for a specific dimension would denote low quality. We indicated “unknown” where there was insufficient information. In addition, each measure was assigned an overall score between 1 and 9: a score from 1 to 3 suggested that the measure is unnecessary, 4 to 6 suggested that the measure could be supplemental, and 7 to 9 suggested that the measure is necessary. We conducted targeted searches of primary literature when necessary (for example, to ascertain the reliability or validity of a measure).
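As a minimal sketch of the scoring rubric described above, the Python snippet below maps 1–9 appraisal scores to the disagree/neutral/agree bands and the unnecessary/supplemental/necessary overall categories. The dimension names are abbreviated from the list above, and the scores shown are invented for illustration.

```python
from typing import Optional

def band(score: Optional[int]) -> str:
    """Map a 1-9 appraisal score to its band; None means insufficient information."""
    if score is None:
        return "unknown"
    if not 1 <= score <= 9:
        raise ValueError("Scores must be between 1 and 9.")
    if score <= 3:
        return "disagree"
    if score <= 6:
        return "neutral"
    return "agree"

def overall_category(score: Optional[int]) -> str:
    """Overall 1-9 score: 1-3 unnecessary, 4-6 supplemental, 7-9 necessary."""
    mapping = {"disagree": "unnecessary", "neutral": "supplemental",
               "agree": "necessary", "unknown": "unknown"}
    return mapping[band(score)]

# Hypothetical appraisal of one found measure (scores are illustrative only)
appraisal = {
    "targets important improvements": 8,
    "precisely defined and specified": 5,
    "reliable": None,   # insufficient information reported
    "valid": 7,
    "reasonable data collection cost": 4,
    "easily interpreted": 8,
}
for dimension, score in appraisal.items():
    print(f"{dimension}: {band(score)}")
print("overall:", overall_category(7))  # -> necessary
```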

Data Analysis

Qualitative systematic reviews may vary along a spectrum whereby analysis may be primarily integrative (summarizing data into accepted, well-defined categories) or primarily interpretive (inductive development of concepts and theory) (47,48). Our qualitative synthesis was primarily integrative and data driven and was conducted in two stages. First, we conducted a content analysis to group the diverse measures of integrated care implementation that we found into unique measures. For example, we summarized these three found measures—clients are offered choices of treatment modalities, clients receive copies of their records, and care considers health literacy—as a measure of clients’ engagement in their own care (that is, active participation in care and treatment planning). This stage was primarily led by one author (AI), in regular consultation with the lead author (NS). We used descriptive statistics to summarize the types and quality of indicators found.

Second, through a thematic analysis and using the constant comparative method, we inductively grouped the unique measures into broad domains and specific dimensions of integrated care program performance (49). For example, the measure of clients’ engagement in their own care informed the development of several themes, including client centeredness and the chronic care model (subtheme: informed, activated client). The lead author (NS) primarily conducted the thematic analysis stage in regular consultation with the research team. Thus the content analysis captured the frequency with which particular indicators appeared in the literature, whereas the thematic analysis explored the ability of different themes to describe integrated care programs and their functioning (48).

Results

We identified 3,761 literature sources, of which 197 met inclusion criteria. We achieved substantial agreement for abstract screening (kappa scores, 75%–98%) and for full-text screening (kappa scores, 90%–99%). Included literature was heavily weighted toward disease-specific studies, especially randomized controlled trials, published in the United States (Table 1). For sources that were literature reviews (N=24), we extracted data from the primary studies only, and for one source there was insufficient data. From the remaining 172 sources, we extracted 1,255 implementation and outcome measures, which we grouped into 148 unique measures. [A spreadsheet available in Online Supplement 2 presents the 148 unique measures and validated scales used to measure them, when available. Full details about the 1,255 found measures and bibliographic references are available from the authors.]

TABLE 1. Characteristics of 172 articles included in the literature review

Characteristic                                N      %
Publication status
  Published                                 133     77
  Unpublished (gray)                         39     23
Where published
  United Statesa                            121     70
  Canada                                     25     15
  United Kingdom                             12      7
  Australia                                   7      4
  Europe                                      6      4
  New Zealand                                 1      1
Study type
  Randomized controlled trial                72     42
  Cluster-randomized controlled trial         3      2
  Nonrandomized controlled trial              5      3
  Prospective cohort design                   8      5
  Retrospective cohort design                 4      2
  Case-control                                3      2
  Cross-sectional study                       9      5
  Before-after study without comparison      19     11
  Comparative, randomized study               2      1
  Qualitative                                 6      4
  Not a research study                       38     22
  Other                                       3      2
Study population restricted by diagnosis    109     63
Diagnosesb
  Depression                                 85     78
  Anxiety                                     3      3
  Dementia                                    2      2
  Multiple                                    8      7
  Other                                      11     10

aOne study from Puerto Rico

bDiagnoses were reported in 109 articles.

Literature frequently reported on the evaluation of individual clinical outcomes, such as depression symptom severity, health status, and level of functioning; cost-effectiveness, such as the incremental cost of reducing depression symptoms or increasing quality-adjusted life years; and evidence-based care processes, such as appropriateness and adequacy of pharmacotherapy for a specific illness condition. Therefore, a very strong emphasis on measuring clinical effectiveness was evident, along with some emphasis on efficiency (IoM domains); emphasis on process and outcome measures was roughly equal (Donabedian framework) (Tables 2 and 3).

TABLE 2. Characteristics of 148 unique measures distilled from measures identified in the literature review

Characteristic                                N
Donabedian framework
  Structural                                 21
  Process                                    59
  Outcome                                    68
Institute of Medicine frameworka
  Effective                                  97
  Efficient                                  36
  Timely                                      7
  Patient centered                           25
  Safe                                        0
  Equitable                                  11
Other: culture of health care                13
Ever implemented
  Yes                                       122
  No                                         26

aThe breakdown by Institute of Medicine (IoM) domain exceeds 148 because 41 unique measures were assigned a primary and secondary IoM domain.

TABLE 3. Domains of 148 unique measures distilled from measures identified in the literature review

Domain                             Structure   Process   Outcome
Effective                                 16        46        33
Efficient                                  2         1        18
Timely                                     0         4         0
Patient centered                           0         5        13
Safe                                       0         0         0
Equitable                                  0         1         2
Other: culture of health care              3         2         2

Apart from patient-reported outcome measures of level of functioning and quality of life, which we categorized as effectiveness measures, client centeredness was represented through unidimensional scales of satisfaction with care, economic impact on clients (for example, direct and indirect costs of care, financial or housing status, and employability), and, rarely, clients’ engagement in their care or in program design or quality improvement.

We did not locate any measures of patient safety and found few measures of equitability or of accessibility or timeliness of care (for example, measures addressing vulnerable populations, stigma, and wait times). However, we identified several measures that did not fit with the IoM domains but rather reflected provider experience and the culture of health care delivery, such as health care provider buy-in and engagement in integrated care delivery, confidence in providing care within the integrated care model, and satisfaction with services. Most measures have been implemented, although some, predominantly from the gray literature, were recommended but not implemented.

With respect to critical appraisal, the quality of the found measures was highly variable. We appraised the original 1,255 found measures because, although they were aggregated into 148 unique measures, each unique measure may encompass examples that were defined and measured slightly differently, leading to variable quality scores within a single unique measure. Furthermore, we focused on measures that were actually implemented, which were more thoroughly described. Of the 841 implemented measures, 30 (4%) were assigned critical appraisal scores of 1 to 3, 404 (48%) were assigned scores of 4 to 6, 385 (46%) were assigned scores of 7 to 9, and 22 (3%) were not assigned an overall quality score because of missing or unreported data in the original citation. Generally, the highest-quality measures were those that evaluated individual outcomes of effectiveness by using validated measurement scales (for example, of psychiatric symptoms, physical symptoms, level of functioning, and quality of life). For other measures, common limitations included imprecise specification, lack of evidence of reliability or validity, lack of risk adjustment (for outcome measures), and high burden of measurement.

The thematic analysis of the quality measures yielded broad domains and specific dimensions of integrated care program functioning and impact that may be important to measure (Figure 1).

FIGURE 1. Visualization of thematic analysis, with domains of existing implementation and outcome measures for integrated care

Discussion

In this study, we comprehensively reviewed and analyzed existing measures by which to evaluate the implementation of integrated care programs. We identified key elements of integrated care and specific examples of quality measures that can be used to inform program development and evaluation, quality improvement, and the design of future research studies. Our thematic analysis and visualization invite users to consider the comprehensiveness and complementarity of measures that they may use.

This study had several strengths. First, we considered the broad range of integrated care models that have been implemented in primary care settings in English-speaking countries globally. Second, we included indicators regardless of implementation status and whether they were applied in trials or real-world settings. Third, we employed a rigorous search strategy to exhaustively locate the aforementioned types of indicators. Thus our study provides a unique and far-reaching summary of existing and proposed quality measures of integrated care.

In contrast with Goldman and colleagues’ (36) search of the National Quality Forum and National Quality Measures Clearinghouse databases, our approach captured measures that have been used or proposed to be used specifically to measure integrated care implementation or outcomes from a wide variety of sources, rather than measures that could be repurposed toward this goal. Similarly, our scope incorporated measures from client and family, provider, program, and population and system perspectives, expanding beyond those currently included in the AHRQ Atlas of Integrated Behavioral Health Care Quality Measures (34). However, the drawback of our comprehensive sweep was the inclusion of lower-quality measures (for example, measures that were ill defined or infeasible) that were identified during the critical appraisal process.

Several limitations of our study should also be considered. The concepts explored in our study—primary care, integrated care, and quality of care—are all somewhat complex; our choices in defining the boundaries and search terms of these concepts influenced the literature and measures we located. Our analysis was driven by found implementation and outcome measures in the literature and aimed to summarize these measures into accepted categories. Even though our analysis explored emerging themes and areas of overlap, our approach was not theory building and does not address causality. Other authors of systematic reviews have also identified a knowledge gap regarding the active ingredients of integrated care interventions, and different methods may be needed to delineate these (for example, realist reviews and innovative trial designs) (8). Researchers seeking to test hypotheses regarding the necessary structures and processes to achieve intended outcomes of integrated care can refer to the visualization (Figure 1) and database [Online Supplement 2] to inform theory development and investigation.

The body of literature we reviewed yielded a multitude of measures and captured diverse perspectives on integrated care program implementation and outcomes, and at least some measures were of high quality. Clinical and administrative leaders could use measures from our database to understand how their integrated care program is functioning and to monitor the results of quality improvement and program development efforts. For a comprehensive program evaluation, we recommend using multiple measures capturing diverse perspectives (as depicted in Figure 1) and several different IoM and Donabedian domains of quality. For a focused quality improvement project, a small number of measures could suffice, with one to two process measures, a balancing measure, and, importantly, an outcome measure. However, although certain types of indicators are frequently represented in the literature, frequency should not be interpreted as confirmation that they are the optimal or only targets for measurement. Indeed, some measures may be prevalent because they are easier to implement (looking under the proverbial lamppost) or because of funders’ emphasis on cost control. For example, many measures are disease specific, which may not translate well into real-world implementation of integrated care programs targeting multiple (and often comorbid) conditions. Furthermore, caution is warranted regarding the potential unintended consequences of focusing on the domains of effectiveness and efficiency at the expense of other domains. Aspects of quality that are underrepresented in the existing literature (for example, equity, accessibility and timeliness, and client centeredness of care) are vital aims of integrated care; some successes have been demonstrated, and further evaluation is merited (5054).
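As a concrete, purely hypothetical illustration of such a focused measure set, the sketch below bundles two process measures, a balancing measure, and an outcome measure for an imagined depression-focused quality improvement project; the specific measures named are invented examples, not recommendations drawn from the review database.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    donabedian: str   # "structure", "process", or "outcome"
    role: str         # "process", "balancing", or "outcome" within the QI project

# Hypothetical measure bundle: one to two process measures, a balancing measure,
# and an outcome measure, as recommended above for a focused QI project.
qi_measure_set = [
    Measure("Clients with follow-up contact within 4 weeks of treatment initiation",
            donabedian="process", role="process"),
    Measure("Clients with symptom severity re-measured at 3 months",
            donabedian="process", role="process"),
    Measure("Primary care provider satisfaction with the integrated care model",
            donabedian="outcome", role="balancing"),
    Measure("Clients achieving a 50% reduction in depression symptom score",
            donabedian="outcome", role="outcome"),
]

for m in qi_measure_set:
    print(f"[{m.role:>9}] {m.name}")
```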

Some components of integrated care functioning that are key to implementation—such as scaling up and sustainability in real-world settings—may be overlooked. Many of the indicators we found were implemented in randomized controlled trials of efficacy, whereas local contextual factors and specific unmeasured processes embedded within trials are often underrecognized and underreported (2123,25). For example, it is possible that intricate processes of communication, collaboration, and coordination that make integrated care work and that require role changes and different types of working relationships may be missed. Our proposed new quality domain—the “culture of health care”—highlights and begins to address this gap. This concept is also reflected in other overarching quality frameworks, such as the quadruple aim, which extends the Institute for Healthcare Improvement’s triple aim of improving patient experience and population health and reducing health care costs by incorporating provider experience (55). Notably, such factors may not only be ignored but may in fact be subverted when performance measures are used for external accountability and funding, as other authors have proposed for integrated care (32,36,56).

These concerns speak to the importance of exploring the experiences and perspectives of clinicians and clients in the field. Qualitative studies may yield further insights into the aspects of care that matter to clients, as well as the indicators relevant to implementation success or failure that may not be measured at present. Consultation with experts and key stakeholders could also help identify areas for further indicator development, as well as prioritize from among the many available avenues for measurement those that are postulated to be most influential on overall quality of care. As next steps, we plan to interview health care providers for their perspectives on implementing integrated care in real-world contexts and clients for their perspectives on experience and outcome measures that matter to them. Finally, we will engage diverse experts and stakeholders in forming a consensus on a quality framework for integrated care to guide the future development of relevant, actionable, high-quality measures.

Conclusions

Meaningful and valid quality measurement has been identified as key to promoting uptake of robustly evidenced models that improve outcomes and to encouraging more cost-effective care (32). To realize the potential for quality measures to guide and effect improvements in integrated care implementation, we will need to broaden the scope of measures to incorporate key domains of quality that largely remain unaddressed (for example, equity, access, timeliness, and safety); microprocesses of care that influence effectiveness, sustainability, and transferability of integrated care models; and client perspectives on important domains and dimensions of quality. We concur with other authors regarding the importance of measuring care processes that affect outcomes (highlighting the need to test these hypothesized relationships), improving information systems, developing health care provider capacity, and engaging clients in health care system design and improvement (33). Finally, measures will need to be developed and tested for feasibility, applicability, and ability to drive improvements in real-world settings.

Dr. Sunderji is with the Mental Health and Addictions Service, St. Michael’s Hospital, and with the Department of Psychiatry, University of Toronto, Toronto; Ms. Ion is with the Mental Health Research Department at St. Michael’s Hospital and the School of Social Work at McMaster University; Dr. Ghavam-Rassoul is with the Department of Family and Community Medicine at St. Michael’s Hospital and the University of Toronto and the Dalla Lana School of Public Health at the University of Toronto; Dr. Abate is with the Department of Psychiatry at the University of Toronto.
Send correspondence to Dr. Sunderji (e-mail: ).

This research was funded by the Ontario Ministry of Health and Long Term Care through the Alternate Funding Program Innovation Fund.

The opinions, results and conclusions reported in this article are those of the authors and are independent from the funding source. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.

The authors report no financial relationships with commercial interests.

The authors are grateful to Sydney Dy, M.D., M.Sc., and Elizabeth Lin, Ph.D., for feedback on drafts of the article. The authors also thank Anjana Aery for contributions to the literature searching and data collection.

References

1 Whiteford HA, Degenhardt L, Rehm J, et al.: Global burden of disease attributable to mental and substance use disorders: findings from the Global Burden of Disease Study 2010. Lancet 382:1575–1586, 2013

2 Saxena S, Thornicroft G, Knapp M, et al.: Resources for mental health: scarcity, inequity, and inefficiency. Lancet 370:878–889, 2007

3 Human Resources and Training in Mental Health. Geneva, World Health Organization, 2005. http://www.who.int/mental_health/policy/Training_in_Mental_Health.pdf

4 WHO/Wonca Joint Report: Integrating Mental Health Into Primary Care—A Global Perspective. Geneva, World Health Organization, 2008. http://www.who.int/mental_health/policy/services/mentalhealthintoprimarycare/en/

5 Archer J, Bower P, Gilbody S, et al.: Collaborative care for depression and anxiety problems. Cochrane Database of Systematic Reviews 10:CD006525, 2012

6 Butler M, Kane RL, McAlpine D, et al.: Integration of Mental Health/Substance Abuse and Primary Care. Evidence Report/Technology Assessment 173. Rockville, MD, Agency for Healthcare Research and Quality, 2008. https://www.ahrq.gov/downloads/pub/evidence/pdf/mhsapc/mhsapc.pdf

7 Gilbody S, Bower P, Fletcher J, et al.: Collaborative care for depression: a cumulative meta-analysis and review of longer-term outcomes. Archives of Internal Medicine 166:2314–2321, 2006

8 Woltmann E, Grogan-Kaylor A, Perron B, et al.: Comparative effectiveness of collaborative chronic care models for mental health conditions across primary, specialty, and behavioral health care settings: systematic review and meta-analysis. American Journal of Psychiatry 169:790–804, 2012

9 Katon WJ, Lin EHB, Von Korff M, et al.: Collaborative care for patients with depression and chronic illnesses. New England Journal of Medicine 363:2611–2620, 2010

10 Unützer J, Katon W, Callahan CM, et al.: Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA 288:2836–2845, 2002

11 Katon W, Von Korff M, Lin E, et al.: Collaborative management to achieve treatment guidelines: impact on depression in primary care. JAMA 273:1026–1031, 1995

12 Chisholm D, Sanderson K, Ayuso-Mateos JL, et al.: Reducing the global burden of depression: population-level analysis of intervention cost-effectiveness in 14 world regions. British Journal of Psychiatry 184:393–403, 2004

13 Lund C, Tomlinson M, Patel V: Integration of mental health into primary care in low- and middle-income countries: the PRIME mental healthcare plans. British Journal of Psychiatry 208(suppl 56):s1–s3, 2016

14 Mulvale G, Danner U, Pasic D: Advancing community-based collaborative mental health care through interdisciplinary Family Health Teams in Ontario. Canadian Journal of Community Mental Health 27:55–73, 2008

15 Bauer AM, Azzone V, Goldman HH, et al.: Implementation of collaborative depression management at community-based primary care clinics: an evaluation. Psychiatric Services 62:1047–1053, 2011

16 Kilbourne AM, Schulberg HC, Post EP, et al.: Translating evidence-based depression management services to community-based primary care practices. Milbank Quarterly 82:631–659, 2004

17 Chaney EF, Rubenstein LV, Liu C-F, et al.: Implementing collaborative care for depression treatment in primary care: a cluster randomized evaluation of a quality improvement practice redesign. Implementation Science 6:121, 2011

18 Knowles SE, Chew-Graham C, Coupe N, et al.: Better together? A naturalistic qualitative study of inter-professional working in collaborative care for co-morbid depression and physical health problems. Implementation Science 8:110, 2013

19 Jeffries V, Slaunwhite A, Wallace N, et al.: Collaborative Care for Mental Health and Substance Use Issues in Primary Health Care: Overview of Reviews and Narrative Summaries. Ottawa, Mental Health Commission of Canada, 2013. http://www.mentalhealthcommission.ca/sites/default/files/PrimaryCare_Overview_Reviews_Narrative_Summaries_ENG_0.pdf

20 Huskamp HA, Iglehart JK: Mental health and substance-use reforms: milestones reached, challenges ahead. New England Journal of Medicine 375:688–695, 2016

21 Solberg LI, Crain AL, Maciosek MV, et al.: A stepped-wedge evaluation of an initiative to spread the collaborative care model for depression in primary care. Annals of Family Medicine 13:412–420, 2015

22 Tomoaia-Cotisel A, Scammon DL, Waitzman NJ, et al.: Context matters: the experience of 14 research teams in systematically reporting contextual factors important for practice change. Annals of Family Medicine 11(suppl 1):S115–S123, 2013

23 Wells M, Williams B, Treweek S, et al.: Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials 13:95, 2012

24 Bayliss EA, Bonds DE, Boyd CM, et al.: Understanding the context of health for persons with multiple chronic conditions: moving from what is the matter to what matters. Annals of Family Medicine 12:260–269, 2014

25 Ling T, Brereton L, Conklin A, et al.: Barriers and facilitators to integrating care: experiences from the English Integrated Care Pilots. International Journal of Integrated Care 12:e129, 2012

26 McDonald KM, Sundaram V, Bravata DM, et al.: Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Technical Reviews no 9.7. Rockville, MD, Agency for Healthcare Research and Quality, 2007. http://www.ncbi.nlm.nih.gov/books/NBK44015/

27 The Good Indicators Guide: Understanding How to Use and Choose Indicators. London, National Health Service, Institute for Innovation and Improvement, 2008. http://www.apho.org.uk/resource/item.aspx?RID=44584

28 Donabedian A: An Introduction to Quality Assurance in Health Care. New York, Oxford University Press, 2003

29 Continuous Enhancement of Quality Measurement in Primary Mental Health Care: Closing the Implementation Loop: A Primary Health Care Transition Fund National Envelope Project. Vancouver, British Columbia, CEQM Project, 2006. http://www.ceqm-acmq.ca/ceqm/

30 Miller BF, Kessler R, Peek CJ, et al.: A Framework for Collaborative Care Metrics: A National Agenda for Research; in Collaborative Care. Rockville, MD, Agency for Healthcare Research and Quality, 2011. http://www.ahrq.gov/research/findings/final-reports/collaborativecare/collab2.html

31 Durbin A, Durbin J, Hensel JM, et al.: Barriers and enablers to integrating mental health into primary care: a policy analysis. Journal of Behavioral Health Services and Research 43:127–139, 2016

32 Goldman ML, Spaeth-Rublee B, Pincus HA: Quality indicators for physical and behavioral health care integration. JAMA 314:769–770, 2015

33 Pincus HA, Scholle SH, Spaeth-Rublee B, et al.: Quality measures for mental health and substance use: gaps, opportunities, and challenges. Health Affairs 35:1000–1008, 2016

34 Korsen N, Narayanan V, Mercincavage L, et al.: Atlas of Integrated Behavioral Health Care Quality Measures. Rockville, MD, Agency for Healthcare Research and Quality, 2013. https://integrationacademy.ahrq.gov/resources/ibhc-measures-atlas

35 McCusker J, Yaffe M, Sussman T, et al.: Developing an evaluation framework for consumer-centred collaborative care of depression using input from stakeholders. Canadian Journal of Psychiatry 58:160–168, 2013

36 Goldman ML, Spaeth-Rublee B, Nowels AD, et al.: Quality measures at the interface of behavioral health and primary care. Current Psychiatry Reports 18:39, 2016

37 Varieties of Measures in NQMC. Rockville, MD, Agency for Healthcare Research and Quality, National Quality Measures Clearinghouse, 2016. http://www.qualitymeasures.ahrq.gov/tutorial/varieties.aspx

38 Institute of Medicine Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders: Improving the Quality of Health Care for Mental and Substance-Use Conditions. Washington, DC, National Academies Press, 2006. http://www.ncbi.nlm.nih.gov/books/NBK19830

39 Arksey H, O’Malley L: Scoping studies: towards a methodological framework. International Journal of Social Research Methodology 8:19–32, 2005

40 Levac D, Colquhoun H, O’Brien KK: Scoping studies: advancing the methodology. Implementation Science 5:69, 2010

41 Sunderji N, Ghavam-Rassoul A, Ion A, et al.: Evaluating the implementation of collaborative mental health care: a systematic review of quality measures. York, United Kingdom, University of York, Centre for Reviews and Dissemination, 2016. http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016038387

42 Starfield BH: Primary Care: Balancing Health Needs, Services, and Technology. New York, Oxford University Press, 1998

43 Sampson M, McGowan J, Cogo E, et al.: An evidence-based practice guideline for the peer review of electronic search strategies. Journal of Clinical Epidemiology 62:944–952, 2009

44 Norman GR, Streiner DL: Biostatistics: The Bare Essentials, 3rd ed. Hamilton, Ontario, BC Decker Inc, 2008

45 Stelfox HT, Straus SE: Measuring quality of care: considering measurement frameworks and needs assessment to guide quality indicator development. Journal of Clinical Epidemiology 66:1320–1327, 2013

46 Stelfox HT, Straus SE: Measuring quality of care: considering conceptual approaches to quality indicator development and evaluation. Journal of Clinical Epidemiology 66:1328–1337, 2013

47 Noblit GW: Meta-Ethnography: Synthesizing Qualitative Studies. Newbury Park, CA, Sage, 1988

48 Dixon-Woods M, Agarwal S, Jones D, et al.: Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy 10:45–53, 2005

49 Strauss AL, Corbin JM: Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA, Sage, 1990

50 Wells K, Sherbourne C, Schoenbaum M, et al.: Five-year impact of quality improvement for depression: results of a group-level randomized controlled trial. Archives of General Psychiatry 61:378–386, 2004

51 Miranda J, Azocar F, Organista KC, et al.: Treatment of depression among impoverished primary care patients from ethnic minority groups. Psychiatric Services 54:219–225, 2003

52 Gum AM, Areán PA, Hunkeler E, et al.: Depression treatment preferences in older primary care patients. Gerontologist 46:14–22, 2006

53 Dwight-Johnson M, Unutzer J, Sherbourne C, et al.: Can quality improvement programs for depression in primary care address patient preferences for treatment? Medical Care 39:934–944, 2001

54 Dwight-Johnson M, Lagomasino IT, Hay J, et al.: Effectiveness of collaborative care in addressing depression treatment preferences among low-income Latinos. Psychiatric Services 61:1112–1118, 2010

55 Bodenheimer T, Sinsky C: From triple to quadruple aim: care of the patient requires care of the provider. Annals of Family Medicine 12:573–576, 2014

56 Freeman T: Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Services Management Research 15:126–137, 2002