Abstract

Objective

This study by the International Initiative for Mental Health Leadership Clinical Leaders Project sought to describe ongoing or soon-to-be-established national-level mental health quality measurement programs in 12 participating countries, in order to understand the nature and structure of these programs.

Methods

A survey was distributed to representatives from the participating countries (Australia, Canada, England, Germany, Ireland, Japan, the Netherlands, New Zealand, Norway, Scotland, Taiwan, and the United States). Data included descriptions of qualifying programs and the organizations responsible for them, quality indicators used, entities assessed, sources and means of the programs’ data collection, the level at which data are reported, and how the data are used. Participants were asked to identify which quality domains and subdomains were represented by indicators in each program. Results were analyzed with descriptive statistics.

Results

Thirty-eight programs were identified. Most programs were administered by governmental organizations, focused on hospital care, and used encounter or utilization databases as sources of information. Programs used different methods to identify indicators. Program data were used for various purposes. A wide range of quality domains was represented in the reported programs, although the greatest commonality was seen in domains associated with high-acuity care, with fewer programs assessing recovery-related domains.

Conclusions

This study found wide variation among established quality assessment programs, which may reflect a focus on local priorities. The goal of this project is to work toward establishing an international framework for mental health quality assessment and thus a means to compare key measures of performance across countries.

International surveys conducted by the World Health Organization (WHO) demonstrate high one-year and lifetime prevalence of mental illness, with interquartile ranges of 9%–17% and 12%–47%, respectively (1–3). This burden has led the WHO to recommend that countries establish well-formulated mental health policy agendas that include the measurement of quality of mental health services (4). The Institute of Medicine (IOM) has also highlighted the importance of quality measurement infrastructure and accountability in improving mental health service quality (5). However, the assessment of mental health service quality faces multiple obstacles, including the lack of a sufficient evidence base, the lack of sufficient data in existing resources to determine measures of quality, the lack of government prioritization of quality assessment, and the lack of a sufficient quality assessment infrastructure for efficient access to meaningful data (6).

Efforts have been made to establish cross-national or international frameworks for mental health quality assessment to promote standardization of quality measurement and to assist comparative benchmarking. The National Institute for Health and Welfare in Finland (STAKES), coordinating with the European Commission Health Monitoring Program, developed a set of 32 mental health care quality indicators for a European Community comprehensive health monitoring system in the domains of demographic and socioeconomic factors, health status, determinants of health, and health systems (7). The Organization for Economic Co-operation and Development (OECD) Health Care Quality Indicators project used an expert panel to identify 12 key mental health quality indicators for use in international benchmarking across four domains—treatment, continuity of care, coordination of care, and patient outcomes (8). Although not specifically focused on quality assessment, the WHO Assessment Instrument for Mental Health Systems captured structure measures of quality in 42 low- and middle-income countries (9,10). Pincus and colleagues (11) have proposed a mental health quality measurement framework consisting of ten key quality indicators grouped in the IOM quality domains of safety, effectiveness, patient centeredness, timeliness, efficiency, and equity.

The International Initiative for Mental Health Leadership (IIMHL) has also sought to evaluate mental health quality assessment on an international scale. The IIMHL was established initially by mental health leaders in nine developed countries in order to exchange information about effective leadership, management, and operational practices in mental health services delivery and to collaborate in the development of best practices for mental health services. Twelve participating IIMHL countries initiated the Clinical Leaders Project to develop an international framework of mental health care quality measures in order to compare system performance across countries and inform initiatives to transform mental health systems. Prior work by this group includes general descriptions of mental health quality assessment in five IIMHL countries, identification of mental health quality assessment programs in the United States, and an examination of various mental health quality initiatives identified through published articles, government reports, and other gray literature (12–19). This study sought to further investigate ongoing or soon-to-be-established quality measurement programs in the countries represented in the IIMHL Clinical Leaders Project, including programs without previously published information or data. Using a survey, we evaluated the structure of these programs, the development of measures, the collection and reporting of data, and the domains of quality assessed by quality indicators.

Methods

Participants

Respondents were representatives from the 12 participating countries (Australia, Canada, England, Germany, Ireland, Japan, the Netherlands, New Zealand, Norway, Scotland, Taiwan, and the United States). Respondents were asked to obtain additional information about established or soon-to-be-implemented mental health quality assessment programs in their country from individuals affiliated with their mental health systems, including government mental health care leaders, quality assessment project leaders, key academicians or researchers, and leaders of nongovernmental health care organizations. A “point person” was identified for each country to coordinate responses from identified sources. There were no inclusion or exclusion criteria for participants. All respondents were fluent in English.

Survey instrument

A survey to assess quality assessment programs was developed and distributed to participants. [A copy of the survey is available online as a data supplement to this article.] The general structure of the survey was based partly on surveys used in evaluating mental health programs in the U.S. Veterans Health Administration (20). Participants were asked to identify mental health quality and performance assessment programs that met all of the following criteria: the programs were designed not only to provide descriptive information about the state of the system but also to present results that measure the quality or performance of the mental health system; the purpose of the programs is to measure quality or performance of the mental health system, not to guide clinical decisions; the programs provide measurements in an ongoing and organized method, rather than being one-time initiatives or assessments; and the programs are intended for quality measurement and use on a national level, rather than on a state, provincial, or local level. However, given variability in mental health care systems in the participating countries, if national-level measures were not generally available, participants were asked to provide information about major state, provincial, or local-level programs instead.

The survey asked participants to provide a description of each program being reported and the administering organization. Participants were then asked to provide information about the derivation of utilized measures, the health care entities being assessed, the sources and means of data collection, the level at which data were reported, and the use of the data.

Participants were also provided with a list of 15 domains of quality assessment, including several specific subdomains within each domain. These domains and subdomains were adapted from the National Inventory of Mental Health Quality Measures established by the Center for Quality Assessment and Improvement in Mental Health, as well as from IIMHL participant recommendations (21). Participants were asked to identify which quality domains and subdomains were represented by the indicators in each program and were asked to provide specific examples of representative indicators, with the option to provide additional domains or subdomains. When participants provided examples of indicators that did not reflect the aforementioned criteria for quality assessment, the identified domain or subdomain was excluded from the analysis.

Study procedure and data analysis

Participants were contacted via e-mail between September 2009 and March 2010 to complete the survey online, via SurveyMonkey.com, or by annotating a Microsoft Word document and returning it. Participants were asked to submit a separate “template” for each reported quality assessment program. Surveys were provided in English only, and the point persons in each country were asked to provide their responses in English. After responses were collected, participants were contacted via e-mail and conference calls, as needed, to clarify responses. Survey results were analyzed in Microsoft Excel with descriptive summary statistics.
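As a minimal sketch of this descriptive tabulation, the code below illustrates how the counts and percentages reported in Tables 1–3 can be derived from per-program survey responses. It is written in Python purely for illustration (the actual analysis was performed in Microsoft Excel), and the program, country, and entity names shown are hypothetical placeholders, not survey data: for each assessed entity, it counts the programs and countries reporting it and expresses each count as a percentage of the 38 programs and 12 countries.

from collections import defaultdict

N_PROGRAMS = 38   # qualifying programs identified by the survey
N_COUNTRIES = 12  # participating countries

# One record per reported program: (program name, country, entities it assesses).
# These example records are hypothetical placeholders, not survey data.
responses = [
    ("Program A", "Country 1", {"Hospital (inpatient care)", "Emergency care"}),
    ("Program B", "Country 1", {"Hospital (inpatient care)"}),
    ("Program C", "Country 2", {"Outpatient mental health care clinics"}),
]

programs_per_entity = defaultdict(set)
countries_per_entity = defaultdict(set)
for program, country, entities in responses:
    for entity in entities:
        programs_per_entity[entity].add(program)
        countries_per_entity[entity].add(country)

for entity in sorted(programs_per_entity):
    n_prog = len(programs_per_entity[entity])
    n_ctry = len(countries_per_entity[entity])
    print(f"{entity}: {n_prog} programs ({100 * n_prog / N_PROGRAMS:.0f}%), "
          f"{n_ctry} countries ({100 * n_ctry / N_COUNTRIES:.0f}%)")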

Results

A total of 38 qualifying programs were identified in the 12 participating countries. [A list of the programs is available online as a data supplement to this article.] Seventy-nine percent of programs (N=30) were active and provided data for collection, and 16% of programs (N=6) were being planned for future implementation but were at least in the data-collection phase. Sixty-six percent of programs (N=25) were administered by governmental organizations, and 29% (N=11) were administered by nongovernmental organizations, including independent regulators, accrediting or licensing organizations, health insurance providers, research institutes, and professional organizations. The mental health care entities most commonly assessed were hospitals and outpatient mental health care clinics, with fewer programs or countries measuring private health care plans or insurers (Table 1).

Table 1 Mental health service quality and care provider entities assessed in 12 participating countries

Entity: programs, N (% of 38); countries, N (% of 12)
Hospital (inpatient care): 29 (76%); 11 (92%)
Local, state, or provincial mental health care systems or programs: 22 (58%); 9 (75%)
Outpatient mental health care clinics: 19 (50%); 10 (83%)
National mental health care system: 18 (47%); 7 (58%)
Individual psychiatrists or psychiatric groups: 13 (34%); 7 (58%)
Emergency care: 13 (34%); 7 (58%)
Early intervention: 13 (34%); 7 (58%)
Specific service or treatment programs (residential treatment or community teams): 12 (32%); 6 (50%)
Individual primary care physicians or primary care provider groups or clinics: 10 (26%); 6 (50%)
Other individual nonphysician mental health care providers: 9 (24%); 6 (50%)
Partial hospitalization program: 9 (24%); 5 (42%)
Other entities: 9 (24%); 6 (50%)
Crisis management: 8 (21%); 6 (50%)
Private health care plans or insurers: 5 (13%); 4 (33%)


Methods to identify and establish quality assessment indicators included literature reviews, forums or consultations with key stakeholders and experts, or adoption from preexisting quality measurement programs. Many programs had specific inclusion criteria for identified measures and included feasibility studies and field testing of determined measures prior to full implementation. In most cases, the organization responsible for the operation of the program was the organization involved in deriving or identifying the quality indicators used (such as the Council of Australian Governments’ National Action Plan on Mental Health and England’s Care Quality Commission’s Periodic Review).

The data collected for the reported programs most commonly came from mental health care utilization databases or registries or from client survey data, and few programs included data directly obtained from physician surveys or insurance claims (Table 2). Results were predominantly reported on a national or regional level (74%, 28 of 38) or on a clinic or organization level (50%, 19 of 38) and were less frequently reported on the level of consumer demographic cohort (34%, 13 of 38), diagnosis (21%, eight of 38), or individual provider (13%, five of 38). Seventy-three percent of programs (28 of 38) presented their data publicly, compared with 13% that did not (five of 38). Sixty-four percent of programs presenting data publicly (18 of 28) presented individualized quality or performance data about the participating care providers or organizations being measured.

Table 2 Sources of data collected by programs and participating countries measuring mental health care quality

Source: programs, N (% of 38); countries, N (% of 12)
Database or registry of mental health care utilization or encounters: 23 (61%); 10 (83%)
Client surveys or other direct consumer response: 16 (42%); 10 (83%)
Compilation of patient clinical information (database containing patients’ weights or Global Assessment of Functioning scores): 13 (34%); 6 (50%)
Chart reviews or abstractions: 11 (29%); 6 (50%)
Regional or national census, mortality, or other regional or national statistics: 8 (21%); 5 (42%)
Database or registry of nonclinical utilization or encounters (employment, housing): 5 (13%); 5 (42%)
Insurance claims: 5 (13%); 5 (42%)
Physicians or physician group surveys: 4 (11%); 3 (25%)


The data collected by the programs were used in various ways, often related to the structure and intent of the responsible organization. At the time of our survey, many programs were still in the process of collecting and generating data and had not become active. Programs variably used quality data to identify national or regional targets for quality improvement, to compare existing services with identified benchmarks, to create interventions to improve quality performance, and to track changes over time. In particular, the Department of Health in England and the Scottish Integrated Care Pathways for Mental Health used quality data both on the national level to drive countrywide health care policy and on the local level to drive quality improvement and redesign. Programs such as the National Psychiatric In-Patient Reporting System in Ireland and the Uniform Reporting System in the United States were used to compare regions within the respective country, whereas programs such as the Taiwan Quality Indicator Project were used to compare individual hospitals. Several programs were geared toward accreditation procedures for health care organizations, such as Scotland’s Electro-Convulsive Therapy Accreditation Network and the ORYX® Performance Measurement Initiative in the United States. Programs reporting data publicly were also used by stakeholders such as consumers, providers, and employers to allow for comparison of providers or health plans (such as the GGZ Transparency Steering Group Performance Indicators in the Netherlands, the National Quality Indicators in Norway, and the Healthcare Effectiveness Data and Information Set in the United States). Some programs, such as Germany’s Ambulatory Quality Indicators and Key Measures, Taiwan’s National Health Insurance Bureau quality measurement program, and the Physician Quality Reporting Initiative in the United States, were used to determine financial incentives for systems or providers based on quality benchmarks.

Table 3 presents data for the domains and subdomains of quality assessment measured by the indicators included in the reported programs. The presence of indicators within the listed domains was highly variable among the programs. No domain was assessed by every participating country. Subdomains measured by two-thirds or more of participating countries included symptom assessment domains (such as bipolar or depressive disorder, substance abuse, and suicide risk), efficiency and continuity measures (including duration of hospitalization, utilization of outpatient services, and inpatient readmission), safety and legal issues (involuntary hospitalization and use of seclusion or restraints), total population mental health care expenditures, and access to emergency care. The subdomain most commonly measured was duration of hospitalization, assessed by 42% of programs and 83% of participating countries. Domains measured by half or fewer countries were recovery, cultural competence, evidence-based pharmacotherapy, nonpharmacological somatic treatment, and substance abuse.

Table 3 Indicators covered by domains of mental health quality assessment in 12 participating countries

Domain and subdomain: programs, N (% of 38); countries, N (% of 12)
Symptom or diagnostic assessment
 Substance abuse: 15 (39%); 8 (67%)
 Suicide risk: 14 (37%); 8 (67%)
 Bipolar or depressive disorder: 13 (34%); 8 (67%)
 Schizophrenia or other psychotic illness: 9 (24%); 7 (58%)
 Other: 9 (24%); 5 (42%)
 Anxiety disorder: 7 (18%); 5 (42%)
 Total: 22 (58%); 10 (83%)
Evidence-based pharmacotherapy
 Selection of medications: 8 (21%); 5 (42%)
 Medication adherence: 7 (18%); 3 (25%)
 Polypharmacy: 7 (18%); 4 (33%)
 Adequate medication dosage: 4 (11%); 2 (17%)
 Occurrence of side effects: 3 (8%); 2 (17%)
 Monitoring: 2 (5%); 2 (17%)
 Medication reconciliation: 1 (3%); 1 (8%)
 Other: 1 (3%); 1 (8%)
 Total: 13 (34%); 5 (42%)
Evidence-based psychosocial interventions
 Assertive community treatment: 8 (21%); 6 (50%)
 Early intervention programs: 6 (16%); 5 (42%)
 Mental health screening: 6 (16%); 4 (33%)
 Psychotherapy: 5 (13%); 4 (33%)
 Case management: 5 (13%); 4 (33%)
 Employment support or assistance: 5 (13%); 3 (25%)
 Other: 5 (13%); 3 (25%)
 Integrated dual diagnosis treatment: 4 (11%); 4 (33%)
 Family psychoeducation: 4 (11%); 3 (25%)
 Total: 15 (39%); 8 (67%)
Somatic interventions (electroconvulsive therapy): 5 (13%); 4 (33%)
Substance use
 Engagement in care: 9 (24%); 5 (42%)
 Quantity or frequency of use: 6 (16%); 5 (42%)
 Other: 1 (3%); 1 (8%)
 Blood or urine monitoring: 0 (0%); 0 (0%)
 Total: 12 (32%); 6 (50%)
General medical care
 Preventive medical care or screening: 8 (21%); 6 (50%)
 Chronic illness medical care: 8 (21%); 6 (50%)
 Other: 1 (3%); 1 (8%)
 Total: 11 (29%); 7 (58%)
Continuity of care
 Inpatient readmission: 14 (37%); 9 (75%)
 Outpatient follow-up after inpatient discharge: 10 (26%); 6 (50%)
 Coordination with outpatient mental health: 10 (26%); 6 (50%)
 Coordination with primary care: 9 (24%); 5 (42%)
 Inpatient discharge planning: 8 (21%); 7 (58%)
 Coordination with substance abuse treatment: 8 (21%); 5 (42%)
 Other: 4 (11%); 3 (25%)
 Total: 26 (68%); 11 (92%)
Access measures
 Access to emergency mental health care: 9 (24%); 8 (67%)
 Access to and wait times for outpatient services: 8 (21%); 7 (58%)
 Access to primary care: 3 (8%); 3 (25%)
 Other: 3 (8%); 3 (25%)
 Access to and wait times for substance abuse treatment: 2 (5%); 2 (17%)
 Total: 15 (39%); 11 (92%)
Efficiency measures
 Duration of hospitalization: 16 (42%); 10 (83%)
 Utilization of outpatient services: 11 (29%); 9 (75%)
 Utilization of substance abuse treatment: 4 (11%); 4 (33%)
 Other: 1 (3%); 1 (8%)
 Total: 17 (45%); 10 (83%)
Patient safety
 Use of seclusion or restraints: 13 (34%); 9 (75%)
 Medication errors or adverse events: 6 (16%); 5 (42%)
 Other: 6 (16%); 5 (42%)
 Falls or injuries: 5 (13%); 5 (42%)
 Nonmedication adverse events: 4 (11%); 3 (25%)
 Total: 15 (39%); 11 (92%)
Forensic or legal issues
 Involuntary or compulsory hospitalization: 11 (29%); 8 (67%)
 Criminal justice encounters: 4 (11%); 3 (25%)
 Involuntary or compulsory community treatment: 3 (8%); 3 (25%)
 Other: 2 (5%); 2 (17%)
 Total: 14 (37%); 8 (67%)
Recovery measures
 Access to peer or consumer services: 7 (18%); 4 (33%)
 Shared decision making: 7 (18%); 5 (42%)
 Recovery: 4 (11%); 3 (25%)
 Other: 3 (8%); 3 (25%)
 Total: 10 (26%); 6 (50%)
Outcome assessment
 Functioning: 12 (32%); 6 (50%)
 Client or family satisfaction with care: 9 (24%); 6 (50%)
 Change in reported symptoms: 8 (21%); 6 (50%)
 General health status: 8 (21%); 5 (42%)
 Mortality: 7 (18%); 5 (42%)
 Employment or income: 6 (16%); 4 (33%)
 Client or family self-assessment: 6 (16%); 4 (33%)
 Housing: 5 (13%); 3 (25%)
 Other: 2 (5%); 2 (17%)
 Total: 24 (63%); 10 (83%)
Cultural or ethnic issues
 Racial or ethnic disparities in care: 6 (16%); 5 (42%)
 Training in cultural competency: 1 (3%); 1 (8%)
 Access to culturally specific care: 1 (3%); 1 (8%)
 Total: 7 (18%); 6 (50%)
Population-based resources
 Total expenditure for mental health services for the population: 11 (29%); 9 (75%)
 Mental health workforce (full-time equivalents) for the population: 8 (21%); 7 (58%)
 Other: 2 (5%); 2 (17%)
 Total: 12 (32%); 9 (75%)


Participants provided examples of subdomains not included in the survey prompts, such as adequate duration of antidepressant treatment, use of measurement-based care, self-injurious behavior, elopement, assessment of dementia, attention-deficit hyperactivity disorder or borderline personality disorder, use of supported housing or occupational therapy, access to crisis resolution or home health services, and screening for patient strengths and wellness.

Discussion

This study builds on prior work by the IIMHL Clinical Leaders Project and other groups (the OECD, STAKES, and WHO) in identifying mental health quality indicators and quality assessment programs and in helping to provide a framework to understand the state of international mental health quality assessment. In particular, this study provides a more in-depth and direct comparison of program-level features of quality assessment, such as methods of deriving quality indicators and uses of quality data. The findings of this study are consistent with prior studies, showing a wide variation among established quality assessment programs in indicator derivation, program administration, and utilization of generated data. This may indicate that programs are focused on local priorities rather than on building a consensus framework for mental health quality assessment. This variability may also be attributed to the significant variation in the organization of mental health care systems and quality assessment infrastructure in the participating countries; data collected on these topics will be reported separately. Further study is required to better understand how individual national priorities may be reflected in the choice of quality assessment methodologies, uses, or domains of focus.

Although prior studies by the IIMHL group have broadly demonstrated variety in the domains covered by quality measurement programs, this study extended beyond the prior work by providing an updated and quantified comparison of specific quality indicator subdomains measured by these programs. The domains and subdomains most commonly assessed by the identified programs in this study were largely related to high-acuity mental health care, such as involuntary hospitalization, inpatient readmission, access to emergency care, and use of seclusion and restraints. These areas are consistent with the finding that a high proportion of the identified programs measure hospital-based care and may reflect either a general international consensus regarding the critical domains of mental health quality measurement or a greater availability of data for indicators measured in these domains. The low proportion of recovery-oriented quality assessment measures, despite the high proportion of programs and countries collecting consumer-level data, is notable given the recent emphasis, including from the IOM, on recovery-oriented, patient-centered services (5). The impact of this imbalance in the quality domains represented requires further exploration, but it is concerning because of a potential lack of focus of national policies and resources on recovery-oriented, outpatient, and preventive services. These findings may also indicate the need for a more balanced range of mental health care quality indicators and provide further impetus for the development of a consensus framework for mental health quality assessment.

There are a number of important limitations to this study. The surveys and responses were generated in English only because we had insufficient resources to provide translation; respondents were responsible for translation from native languages. This may have placed a larger burden on respondents from non–English-speaking countries and influenced both the interpretation of the survey prompts and the responses generated. Furthermore, the study focused on only a subgroup of developed countries; further study is required to determine how these data relate to mental health systems in low- and middle-income countries. Some reported programs did not have distinct elements separating data collection, measurement development, and data reporting, which may be attributable to countries with more centralized mental health quality structures. The analysis included programs that have been implemented and programs that are still in the implementation phase. The yet-to-be-implemented programs were included in the analysis because they are still able to reflect national methods and priorities of mental health quality assessment, such as domains of interest, data sources, and uses of data. We did not assess the feasibility of full implementation of these programs, and therefore the results may be skewed away from truly feasible quality assessment. Also, the reported percentages are overrepresentative of countries without a unified mental health quality assessment infrastructure, with countries such as the United States and Ireland having multiple independent groups measuring quality data and thus more reported programs. In addition, participants from countries with decentralized systems may not have fully identified quality measurement programs in their country, partly due to the nature of their positions in the public mental health system, and therefore may have contributed to a skew in program-level reporting. The study was reliant on participants to report only programs and quality domains that met the established criteria for quality assessment, and independent verification of validity could not be reliably established. Additional important domains and subdomains of mental health quality assessment may exist that were not included in this survey and may require further exploration.

Both this study and prior IIMHL Clinical Leaders Project studies have provided mental health leaders the opportunity to exchange information and to obtain peer-to-peer consultations for the assessment and implementation of quality improvement initiatives. Ultimately, the goal of this project is to establish a consensus international framework for mental health quality assessment and thus a means to compare key measures of performance across countries. The second phase of this project involves examining all of the collected mental health quality indicators and creating a set of core quality measurement concepts based on their validity, importance, and feasibility. Furthermore, we intend to create an international network linking quality measurement groups or organizations within each country to help facilitate quality measurement framework development and assist implementation of the core measures.

Further steps in research and practice will be required to improve quality assessment internationally, including establishing tighter links between process and outcome measurements, increasing use of standardized assessments, expanding use of information technology, delineating benchmarks for comparison across settings, increasing investment in quality research, and integrating mental health quality assessment into the broader framework of health care quality (11). The development of a common framework of mental health quality assessment may be an instrumental first step in the process of refining mental health quality assessment. The results of this study will help provide data to better understand shared priorities for mental health quality assessment and may help identify barriers to development of a common framework.

Conclusions

This study sought to assess existing or soon-to-be implemented mental health quality assessment programs among participating members of the IIMHL Clinical Leaders Project. Thirty-eight qualifying programs from 12 participating countries were identified. Most of the programs assessed were active, were administered by governmental organizations, were focused on hospital care, and used encounter or utilization databases as sources of information. Different methods were used by the programs to identify and establish quality indicators. Program data were generally publicly reported and were used for various purposes by their respective organizations or countries. A wide range of quality domains was represented in the reported programs, although the greatest commonality was seen in domains associated with high-acuity mental health care, with fewer programs assessing domains related to recovery. These data will help future work intended to establish an international framework for mental health quality assessment.

Dr. Parameswaran is affiliated with the Robert Wood Johnson Foundation Clinical Scholars at the University of California, Los Angeles, 10940 Wilshire Blvd., Suite 710-18, Los Angeles, CA 90024 (e-mail: ). He is also with the Department of Clinical Psychiatry at the Veterans Affairs Greater Los Angeles Healthcare System. Ms. Spaeth-Rublee and Dr. Huynh are with the New York State Psychiatric Institute, New York City.
Dr. Pincus is with the Department of Psychiatry, Columbia University, New York City.

Acknowledgments and disclosures

Funding was provided by grant R25 MH086466 from the National Institute of Mental Health. Additional funding came from government and nongovernment organizations of the countries participating in the IIMHL project (Australia, Canada, England, Germany, Ireland, Japan, the Netherlands, New Zealand, Norway, Scotland, Taiwan, and the United States). The authors thank the members of the IIMHL for their assistance with this project.

Dr. Parameswaran is the recipient of an unrestricted educational grant from the American Psychiatric Institute for Research and Education and Janssen Pharmaceuticals. Dr. Pincus receives travel support as a consultant for Value Options and publication royalties from the American Psychiatric Press and from Lippincott, Williams and Wilkins. The other authors report no competing interests.

References

1 Demyttenaere K, Bruffaerts R, Posada-Villa J, et al.: Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys. JAMA 291:2581–2590, 2004

2 Kessler RC, Angermeyer M, Anthony JC, et al.: Lifetime prevalence and age-of-onset distributions of mental disorders in the World Health Organization’s World Mental Health Survey Initiative. World Psychiatry 6:168–176, 2007

3 The Global Burden of Disease: 2004 Update. Geneva, World Health Organization, 2008. Available at http://www.who.int/healthinfo/global_burden_disease/GBD_report_2004update_full.pdf

4 The World Health Report 2001—Mental Health: New Understanding, New Hope. Geneva, World Health Organization, 2001

5 Institute of Medicine: Improving the Quality of Health Care for Mental and Substance-Use Conditions. Washington, DC, National Academies Press, 2006

6 Kilbourne AM, Keyser D, Pincus HA: Challenges and opportunities in measuring the quality of mental health care. Canadian Journal of Psychiatry, Revue Canadienne de Psychiatrie 55:549–557, 2010

7 National Institute for Health and Welfare: Establishment of a Set of Mental Health Indicators for European Union. Helsinki, Finland, National Institute for Health and Welfare, 2002

8 Hermann RC, Mattke S, Somekh D, et al.: Quality indicators for international benchmarking of mental health care. International Journal for Quality in Health Care 18(suppl 1):31–38, 2006

9 Saxena S, Lora A, van Ommeren M, et al.: WHO’s Assessment Instrument for Mental Health Systems: collecting essential information for policy and service delivery. Psychiatric Services 58:816–821, 2007

10 Saxena S, Lora A, Morris J, et al.: Mental health services in 42 low- and middle-income countries: a WHO-AIMS cross-national analysis. Psychiatric Services 62:123–125, 2011

11 Pincus HA, Spaeth-Rublee B, Watkins KE: Analysis & commentary: the case for measuring quality in mental health and substance abuse care. Health Affairs 30:730–736, 2011

12 Pincus HA, Naber D: International efforts to measure and improve the quality of mental healthcare. Current Opinion in Psychiatry 22:609, 2009

13 Brown P, Pirkis J: Mental health quality and outcome measurement and improvement in Australia. Current Opinion in Psychiatry 22:610–618, 2009

14 Ito H: Quality and performance improvement for mental healthcare in Japan. Current Opinion in Psychiatry 22:619–622, 2009

15 Herbstman BJ, Pincus HA: Measuring mental healthcare quality in the United States: a review of initiatives. Current Opinion in Psychiatry 22:623–630, 2009

16 Gaebel W, Janssen B, Zielasek J: Mental health quality, outcome measurement, and improvement in Germany. Current Opinion in Psychiatry 22:636–642, 2009

17 Coia D, Glassborow R: Mental health quality and outcome measurement and improvement in Scotland. Current Opinion in Psychiatry 22:643–647, 2009

18 Spaeth-Rublee B, Pincus HA, Huynh PT: Measuring quality of mental health care: a review of initiatives and programs in selected countries. Canadian Journal of Psychiatry, Revue Canadienne de Psychiatrie 55:539–548, 2010

19 Ruud T: Mental health quality and outcome measurement and improvement in Norway. Current Opinion in Psychiatry 22:631–635, 2009

20 Watkins KE, Keyser DJ, Smith B, et al.: Transforming mental healthcare in the Veterans Health Administration: a model for measuring performance to improve access, quality, and outcomes. Journal of Healthcare Quality 32:33–42, 2010

21 Hermann RC: Improving Mental Healthcare: A Guide to Measurement-Based Quality Improvement. Washington, DC, American Psychiatric Publishing, 2005