Best Practices

Challenges in the Operationalization of Mental Health Quality Measures: An Assessment of Alternatives

Published Online: https://doi.org/10.1176/appi.ps.201600198

Abstract

Interest in measuring the quality of mental health services has increased, but challenges remain in moving from general standards of quality and best practices to specific, implementable quality measures. The International Initiative for Mental Health Leadership identified 656 mental health quality measures and then applied a modified Delphi approach to assess various available alternative quality measures. Panel members considered issues of data source, segmentation, and thresholds. Policy makers and organizations will need to make difficult choices about accountability, purpose, feasibility, and validity in order to operationalize quality measurement. Empirical data can help guide them in this process.

Beginning in 2001, the Institute of Medicine published a series of reports on the quality of health care, including several focused on the quality of care in mental health and substance abuse treatment, that identified deficits in quality and the need for developing a quality measurement infrastructure in mental health care systems (1,2). Recent health care reforms in the United States and other countries have accelerated the need for valid mental health quality measures that are feasible to implement.

The goal of quality measurement is to assess the ability of systems to provide structures, processes, and outcomes of health care to improve the health of the population served (3). Successful quality measurement requires a foundation of evidence that is systematically synthesized and translated into clear clinical guidelines. These guidelines must then be operationalized into reliably defined numerators and denominators that can be populated with feasibly accessible data.
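The translation from guideline to measure can be made concrete with a minimal sketch. All record fields, names, and values below are hypothetical and for illustration only; the point is that a measure is only computable once its numerator criterion and denominator eligibility rule are both precisely defined.

```python
from dataclasses import dataclass

# Hypothetical treatment-episode records; field names are illustrative,
# not drawn from any real data standard.
@dataclass
class Episode:
    patient_id: str
    diagnosis: str
    had_followup_within_7_days: bool

def followup_rate(episodes, diagnosis):
    """Quality measure = numerator (eligible episodes meeting the
    criterion) divided by denominator (all eligible episodes)."""
    denominator = [e for e in episodes if e.diagnosis == diagnosis]
    if not denominator:
        return None  # measure is undefined when no one is eligible
    numerator = [e for e in denominator if e.had_followup_within_7_days]
    return len(numerator) / len(denominator)

episodes = [
    Episode("a", "depression", True),
    Episode("b", "depression", False),
    Episode("c", "anxiety", True),
]
print(followup_rate(episodes, "depression"))  # 0.5
```

Note that every definitional choice (which diagnoses count, what time window applies) changes both the code and the resulting rate, which is exactly the operationalization problem the article describes.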

Developing and implementing quality measures remains a challenge (4), but not because of a shortage of available measures specific to mental health care. A review of available measures across the globe identified 656 mental health quality indicators that met the criteria of having a specifically defined numerator and denominator and an identified data source (5). In many cases, available indicators assessed the same general measurement concept but varied in their definition. These variations may have different resource requirements and data sources, may result in significantly different findings, and may make comparisons across systems of care difficult. Choosing among alternatives requires decisions about what is feasible versus what is ideal, about what purpose quality measures ultimately serve, about what entities will be held accountable for performance, and about what is supported by empirical evidence regarding reliability and validity of the measure.

The Mental Health Quality Indicator Project

The International Initiative for Mental Health Leadership (IIMHL) has sought to create an international framework for quality measurement. IIMHL is an international organization of mental health leaders seeking to exchange information and collaborate in developing best practices. The IIMHL Clinical Leaders Group has led a multiphase process to create and pilot such a framework. This work began with identifying available quality measures and measurement concepts through a review of existing research and “gray” literature and a survey of IIMHL Clinical Leaders Group members (6,7). The 656 identified indicators were collapsed into 36 broader concepts that were assessed by using a modified Delphi process to collaboratively identify a quality measurement framework; details are described in a previous publication (8). This process was similar to an effort by the Organization for Economic Co-operation and Development that used a Delphi panel of academic and administrative experts as part of the Health Care Quality Indicators project (9).

Quality Measure Alternatives and Operationalization: Delphi Process

The next step for the IIMHL Clinical Leaders Group was to attempt to move from the 36 broader concepts to specific, operationalizable quality measures. We returned to the original list of 656 indicators and identified one base indicator and a number of alternatives for each concept, organized around key operationalization issues. The Delphi panel members rated and discussed these alternatives to develop a consensus on these operationalization issues. Table 1 provides examples of some alternatives identified and highlights the decisions needed for successful operationalization.

TABLE 1. Selected consensus quality measure alternatives identified from rating scores by IIMHL Clinical Leaders Group Delphi panel membersa

Issue, measurement concept, and base indicator — Alternatives

Source of data
 Medication adherence: N of days with fill for antipsychotic (numerator) ÷ total days eligible for treatment of persons with schizophrenia over age 19 (denominator). Alternatives: medication was offered or prescribed (chart review); medication was filled (pharmacy database)b
 Wait times: N of days for all persons from date of referral to date of first mental health visit (numerator) ÷ total N of persons referred minus those without a visit (denominator). Alternatives: obtain from administrative datab; obtain from facility or patient survey
Frequency or time frame
 Medication monitoring: patients in denominator with 4 physician visits per year (numerator) ÷ patients with bipolar disorder receiving ≥1 prescription (denominator). Alternatives: 2 visits, 3 visits, or 4 visits a yearb
 Symptom reduction: patients in denominator who within 3 months of a new treatment episode have a documented reduction in score on a standardized assessment (numerator) ÷ patients with a new treatment episode and ≥2 standardized assessments with the same tool within 90 days of episode start (denominator). Alternatives: within 90 daysb; within 180 days
Denominator limitation
 Polypharmacy: patients in denominator with simultaneous prescriptions for ≥2 oral antipsychotics for ≥90 days during study period (numerator) ÷ all patients with schizophrenia prescribed ≥1 antipsychotic during study period (denominator). Alternatives: any diagnosisb; schizophrenia only
 Individualized care plan: total N of inpatients with an individual care plan constructed and regularly reviewed with patient (numerator) ÷ total N of inpatient separations or discharges (denominator). Alternatives: only inpatients; only outpatients; inpatients and outpatients (segmented and assessed separately)b
Segment by population characteristics
 Psychotherapy: persons in denominator receiving any psychotherapy during study period (numerator) ÷ persons with a mental health diagnosis treated in a specialty setting (denominator). Alternatives: segment by diagnosis (depression, anxiety, bipolar, schizophrenia)b; do not segment
 Criminal justice encounters: N of consumers with ≥1 arrest during fiscal year (numerator) ÷ total N of consumers receiving services during fiscal year (denominator). Alternatives: segment and assess adults and children separatelyb; do not segment
Segment by service characteristics
 Seclusion: total N of inpatients secluded (numerator) ÷ total N of inpatient discharges or separations (denominator). Alternatives: segment and assess seclusion and restraint episodes separatelyb; do not segment
 Injuries: total N of inpatients with significant injuries (numerator) ÷ total N of inpatients (denominator). Alternatives: segment by type of injury (falls or self-injury)b; do not segment

aIIMHL, International Initiative for Mental Health Leadership

bPreferred alternative


Among the issues weighed by panel members were decisions balancing data source with accountability for the quality measure. In some cases, choosing data sources that support more feasible implementation shifted accountability. For example, using filled prescriptions from pharmacy databases to assess medication adherence, instead of more labor-intensive chart reviews of prescribed or offered medications, shifts accountability away from providers and onto consumers. Similarly, use of administrative data to determine wait times can improve the feasibility of data collection, but it comes at the expense of more granular data (such as direct surveys) collected closer to the providers and consumers directly involved in and affected by the measure.
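The pharmacy-database version of the adherence measure in Table 1 can be sketched as a days-covered calculation. This is a minimal illustration, assuming hypothetical fill records of the form (fill date, days' supply); it is not the specification of any particular adherence measure, and real implementations must additionally handle eligibility gaps, overlapping fills carried forward, and multiple drugs.

```python
from datetime import date, timedelta

def days_covered(fills, period_start, period_end):
    """Count distinct days in the period covered by at least one fill.
    Each fill is (fill_date, days_supply); overlapping fills are not
    double-counted because covered days go into a set."""
    covered = set()
    for fill_date, days_supply in fills:
        for i in range(days_supply):
            d = fill_date + timedelta(days=i)
            if period_start <= d <= period_end:
                covered.add(d)
    return len(covered)

def adherence(fills, period_start, period_end):
    """Numerator: days with medication on hand.
    Denominator: total days eligible for treatment in the period."""
    eligible_days = (period_end - period_start).days + 1
    return days_covered(fills, period_start, period_end) / eligible_days

# Hypothetical example: two 30-day fills over a 90-day eligibility period
start, end = date(2016, 1, 1), date(2016, 3, 30)
fills = [(date(2016, 1, 1), 30), (date(2016, 2, 15), 30)]
print(round(adherence(fills, start, end), 2))  # 0.67
```

The same patient assessed by chart review (was medication offered or prescribed?) could score very differently, which is the accountability shift described above: the pharmacy-fill version measures consumer behavior, the chart-review version measures provider behavior.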

In many cases, panel members made choices about segmentation (that is, “splitting” rather than “lumping”) of measures, either by patient or service characteristics. Segmentation results in a greater administrative burden in terms of data collection and analysis, but the improved granularity can provide a better “fit for purpose” to drive quality improvement. For example, segmentation by type of injury on inpatient psychiatric units by separating the clinically distinct phenomena of falls and self-injury can drive more specific quality improvement interventions for each mechanism of injury. However, in cases in which more quantification was required, there was no evidence base on which to set specific thresholds (for example, four versus two medication visits a year). It is unlikely that valid evidence beyond clinical consensus will be forthcoming.
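The lumping-versus-splitting trade-off for the injury measure can be shown in a few lines. The counts and denominator below are hypothetical and purely illustrative.

```python
from collections import Counter

# Hypothetical inpatient injury events over a reporting period
injuries = ["fall", "self-injury", "fall", "fall"]
total_inpatients = 200

# Unsegmented ("lumped") rate per 100 inpatients
overall = len(injuries) / total_inpatients * 100

# Segmented ("split") rates, one per mechanism of injury
by_type = {t: n / total_inpatients * 100 for t, n in Counter(injuries).items()}

print(overall, by_type)  # 2.0 {'fall': 1.5, 'self-injury': 0.5}
```

The single lumped rate obscures that falls dominate the total; the segmented rates point a quality improvement team toward fall prevention specifically, at the cost of collecting and reporting injury type for every event.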

Conclusions

These efforts by the IIMHL Clinical Leaders Group are an important first step in operationalizing a consensus international set of quality measures and addressing key issues regarding data source, frequency, period of assessment, thresholds, segmentation, and breadth of the denominator. However, there are still large gaps in data on the operationalization of mental health quality measures. Empirical data are needed to determine which of the many alternatives can be successfully implemented, how the choice of specific alternatives affects provider and system behavior, and how choices affect health care provision and outcomes. It is unclear who will shepherd this empirical evaluation, given a high level of fragmentation of responsibility and variation in the organization of mental health quality measurement, both in the United States and internationally (7). Data system incompatibility will also pose a significant barrier to international cooperation on quality measure comparison and benchmarking (10). Further complications for mental health services include diversity in loci, funding, and methods of care and administrative separation between mental health and general medical care (1).

Nevertheless, empirical data alone will not provide all the answers needed. Policy makers and mental health leaders will need to make qualitative decisions on how to balance the ideal of evidence-based measures with strong validity against the feasibility of data access, the choice of people or systems to be held accountable for performance on the measure, and the ultimate purposes served by the measures. In particular, stakeholder input will be required to tailor the specific choice of measures to the intended use, whether it is targeted quality improvement, benchmarking within or across systems, accreditation or maintenance of standards, or public reporting to facilitate consumers’ health care choices. The work thus far by the IIMHL Clinical Leaders Group has included only clinical leaders of public mental health systems; input is needed from a broader array of providers, administrators, and especially consumers. However, this work can provide a window into the process of decision making that will be required for the successful operationalization of mental health quality measurement.

Dr. Iyer is with the VISN2 South Mental Illness Research, Education and Clinical Center, James J. Peters Department of Veterans Affairs Medical Center, and with the Department of Psychiatry, Icahn School of Medicine at Mount Sinai, both in New York City (e-mail: ). Ms. Spaeth-Rublee is with the New York State Psychiatric Institute, New York City. Dr. Pincus is with the Department of Psychiatry, Columbia University, and NewYork–Presbyterian Hospital, both in New York City. Marcela Horvitz-Lennon, M.D., M.P.H., is editor of this column.

Funding was received from the Center for Mental Health Services, Substance Abuse and Mental Health Services Administration. Additional funding came from government and nongovernment organizations of the countries participating in the IIMHL project (Australia, Canada, England, Germany, Ireland, Japan, the Netherlands, New Zealand, Norway, Scotland, Taiwan, and the United States). Funding for Dr. Iyer was provided by the Robert Wood Johnson Foundation Clinical Scholars program and the U.S. Department of Veterans Affairs (VA). This material is the result of work supported with resources and the use of facilities at the West Los Angeles VA Medical Center and the James J. Peters VA Medical Center.

The views expressed are those of the authors and do not necessarily reflect the position or policy of the VA or the U.S. government.

Dr. Iyer reports working as an independent consultant for Truven Health Analytics. Dr. Pincus reports serving as a consultant for the Bizzell Group. Ms. Spaeth-Rublee reports no financial relationships with commercial interests.

The authors thank the following members of IIMHL Clinical Leaders Group for their assistance: Peggy Brown, Alan Rosen, Ruth Vine, Tom Callaly, Peter McGeorge (Australia); David Goldbloom, Rohan Ganguli, Catherine Zahn, Paul Kurdyak (Canada); Hugh Griffiths, Susan O’Connor (England); Wolfgang Gaebel, Jürgen Zielasek (Germany); Martin Rogan, Ian Daly (Ireland); Hiroto Ito (Japan); Jan Tromp, Paul Spronken (Netherlands); Memo Musa, David Chaplow, Lyndy Matthews (New Zealand); Torleif Ruud (Norway); Moira Connolly (Scotland); Joseph J. Cheng (Taiwan); and Joe Parks, Ken Thompson, Kevin Hennessy, and Pete Delaney (United States).

References

1 Institute of Medicine: Improving the Quality of Health Care for Mental and Substance-Use Conditions. Washington, DC, National Academies Press, 2006

2 Institute of Medicine: Psychosocial Interventions for Mental and Substance Use Disorders: A Framework for Establishing Evidence-Based Standards. Washington, DC, National Academies Press, 2015

3 Donabedian A: Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 44(suppl):166–206, 1966

4 Kilbourne AM, Keyser D, Pincus HA: Challenges and opportunities in measuring the quality of mental health care. Canadian Journal of Psychiatry 55:549–557, 2010

5 Fisher CE, Spaeth-Rublee B, Pincus HA, et al.: Developing mental health-care quality indicators: toward a common framework. International Journal for Quality in Health Care 25:75–80, 2013

6 Spaeth-Rublee B, Pincus HA, Huynh PT: Measuring quality of mental health care: a review of initiatives and programs in selected countries. Canadian Journal of Psychiatry 55:539–548, 2010

7 Parameswaran S, Spaeth-Rublee B, Huynh PT, et al.: Comparison of national mental health quality assessment programs across the globe. Psychiatric Services 63:983–988, 2012

8 Parameswaran SG, Spaeth-Rublee B, Pincus HA: Measuring the quality of mental health care: consensus perspectives from selected industrialized countries. Administration and Policy in Mental Health and Mental Health Services Research 42:288–295, 2015

9 Hermann RC, Mattke S: Selecting Indicators for the Quality of Mental Health Care at the Health Systems Level in OECD Countries. OECD Health Technical Papers no 17. Paris, Organization for Economic Co-operation and Development, 2004. http://www.oecd.org/els/health-systems/33865630.pdf

10 Armesto SG, Medeiros H, Wei L: Information Availability for Measuring and Comparing Quality of Mental Health Care across OECD Countries. OECD Health Technical Papers no 20. Paris, Organization for Economic Co-operation and Development, 2008. http://www.oecd.org/dataoecd/53/47/41243838.pdf