Taking Issue

Measurement That Matters—Joy in the Village!

Published Online: https://doi.org/10.1176/appi.ps.70503

The conundrum of measuring appropriate outcomes of the U.S. health care system continues. A decade after Medicaid established a core set of child health indicators, providers have refined their understanding of these challenges. Since its inception, the child core set has had a dearth of child behavioral health outcomes; consequently, perhaps no other domain needs more timely improvement. Further, evidence indicates that optimal behavioral health care depends on an array of supports and providers—the “village.”

In this issue, Zima et al. highlight these realities and the challenges of developing a robust set of measures. Charged with recommending to the California Department of Health Care Services (CA DHCS) appropriate measures of child behavioral health outcomes, Zima et al. included in their methodology a thorough search and grading of the evidence and a modified Delphi panel review. In the end, the Pediatric Symptom Checklist (PSC) was the measure that met all their criteria. The CA DHCS endorsed the use of this measure and the Child and Adolescent Needs and Strengths (CANS) tool, which has much lower scores than the PSC for scientific rigor and Delphi ranking.

The paradox of having both measures selected by the CA DHCS highlights several challenges facing regional, state, and national measurement programs. First, there is potential hazard in implementing measures designed to evaluate the care of an individual child while simultaneously aggregating those scores to evaluate a clinical program, institution, or state system. Results for a given community cohort often reflect underlying community-associated social risk factors, historical trauma, and specific demographic characteristics that confound attributing the resulting score to the effect of any specific treatment, making comparison of outcomes across populations challenging. Policy on the appropriate use of risk stratification is debated, and the stratification methods themselves are nascent. Further, when measure results are used to attribute outcomes to a provider for financial incentive, the process measure must not become the “outcome”: there is evidence that financial incentives can improve care processes but not necessarily health outcomes.

Moreover, the two measures that were eventually endorsed speak to health outcomes in very different ways. The PSC tracks changes in emotional and behavioral problems of children, providing an assessment of psychosocial function. To the extent that diagnosis and management for a given child involve collaborative efforts across members of a care team, the PSC might be reflective of care integration (e.g., between primary care and mental health providers). In contrast, the CANS captures several behaviors that point to functional outcomes scored subjectively by a third party. The selection of both tools points to the compromises that are currently required to implement measures: are the measures “fit for purpose,” and if so, what is the purpose?

To advance the aim of selecting the best measures, we suggest adopting a “learning health systems” approach in which one de-emphasizes burdensome data collection and focuses on what can be learned by comparing outcomes across populations and systems. In such a framework, the use of the measures for the purpose of assessing performance would be layered to enable assessment at macrosystem and subsystem levels. Providers would be judged by both their rate of collection of the measures and their subsequent response to lessons derived from performance on those measures. The learning systems would be designed to generate action steps for interdisciplinary teams, and the results of improvement interventions would be tracked. Emphasis would be placed on activities that integrate care across settings and disciplines and on the processes of care that are most likely to yield high-value results.

Measures of child health functioning that are agnostic to care settings should be used to ensure overall progress in children’s behavioral health. Examples include out-of-home foster care placements, kindergarten readiness, and third-grade reading proficiency. Although often considered outside the purview of traditional medical care, these measures have a strong link to upstream processes of care and downstream health outcomes.

Now is the time to modify the accountability paradigm. We should move to a model that supports an integrated learning approach. We know that “it takes a village to raise a child.” It is time to implement measurement systems that appropriately advance the performance and integration of stakeholders in the village. Consider the sense of purpose and joy that would ensue.

Minnesota Department of Human Services, St. Paul, Minnesota (Schiff); Integrated Care Program, Boston Children’s Hospital, and Department of Pediatrics, Harvard Medical School, Boston (Antonelli).
Send correspondence to Dr. Schiff ().