Letters

Closing the Gap in Evaluation Technology for Outcomes Monitoring

Published Online:https://doi.org/10.1176/appi.ps.56.5.611

To the Editor: Thank you for devoting a major portion of the March 2005 issue of Psychiatric Services to the important area of quality of care. Several articles alluded to the pressing need for a way to measure and report the outcomes of mental health services for patients. Although routine collection of information about outcomes is desirable, when change in patient status is measured over time, it is unclear whether the changes are attributable to services or to other causes, such as events in the person's life. The near impossibility of mounting controlled research studies in typical service settings to quantify changes caused by services creates a gap between the information that can be collected and the information psychiatrists need to determine the effects of their services on patients. This gap in evaluation technology can now be bridged by assessing service productivity (1), that is, the extent of change that occurs among patients because of services.

Measures of service productivity have been field tested and described in the literature (2,3,4). They focus on the average amount of change patients report on a brief questionnaire, expressed as a percentage of all items covered. For example, suppose that services are provided in order to produce ten behavioral changes—improved health, adherence to medication regimens, attainment of stable housing and employment, and so forth. A patient indicates that he or she has changed for the better on three items of the ten-item questionnaire because of the services received but has remained the same on the other seven items. This patient receives a score of 30 percent.
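The scoring rule described above is simple enough to express directly; the following is a minimal sketch (the function name and data layout are illustrative, not from the published measure).

```python
def productivity_score(responses):
    """Score one patient's questionnaire.

    Each item is True if the patient reports changing for the better
    because of services, False otherwise. The score is the share of
    items changed, expressed as a percentage of all items covered.
    """
    if not responses:
        raise ValueError("questionnaire must contain at least one item")
    return 100.0 * sum(responses) / len(responses)

# The example from the letter: 3 of 10 items improved -> 30 percent.
patient = [True, True, True] + [False] * 7
print(productivity_score(patient))  # 30.0
```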

There are several key differences between the current approach to measuring service effectiveness and the assessment of service productivity. With the latter approach, no comparison or control group needs to be studied simultaneously, and scores do not need to be risk adjusted to account for alternative explanations of the outcomes. Data need to be gathered only once, and scoring of the results is straightforward. The results pertain to changes patients have made as a result of the services they received, thus revealing the impact that clinicians are having. Clear results can be obtained with as few as five questions, because the answers are averaged to determine the score. The patterns of answers can be interpreted to discern which aspects of services need revision. Users of the approach control which questions to ask and how often, although enough time must be allowed for changes due to services to become observable. The approach can be applied by service staff with a minimum of evaluation training to provide timely results for practitioners, supervisors, and agency managers.

Although assessments of service productivity cannot replace multistatus assessments of overall change, they are well suited to monitoring patient outcomes in service settings, because the extent to which patients benefit from services can be utilized in an ongoing manner to improve the quality of care. Therefore, adding a measure of service productivity to existing assessments of patient outcomes is recommended.

Dr. Green is principal advisor of GreenScene Results Group in Fremont, California.

References

1. Heaton H: Productivity in Service Organizations: Organizing for People. New York, McGraw-Hill, 1977

2. Green RS: Assessing the productivity of human service programs. Evaluation and Program Planning 26:21–27, 2003

3. Green RS, Ellis PT, Lee SS: A city initiative to improve the quality of life for urban youth: how evaluation contributed to effective social programming. Evaluation and Program Planning 28:83–94, 2005

4. Green RS: Assessment of service productivity in applied settings: comparisons with pre- and post-status assessments of client outcome. Evaluation and Program Planning 28:139–150, 2005