Taking Issue

How Well Are We Evaluating System Change?

Published Online: https://doi.org/10.1176/ps.50.10.1257

Natural experiments occur all the time in community psychiatry. They are caused by changes in funding, policies, leadership, and law. If we are lucky, we learn about them in advance and try to rigorously study how they affect service utilization, client satisfaction, or symptomatology. Otherwise we can use data archives or post hoc study methods, as does the study by Kamis-Gould and associates in this issue.

The opportunity these field experiments offer is tempered by inherent limitations in study design. Essentially, we are working with simple pre-post designs, taking measurements after the intervention occurs and comparing them with a baseline. There is nothing wrong with this; it is the heartbeat of deductive science. Virtually every advance in sophisticated designs and statistical procedures is built around strengthening our confidence that pre-post change can be credited to the intervention, not to something extraneous.
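To make the basic design concrete, the following Python sketch shows the skeleton of a simple pre-post comparison. It is an illustration only: the symptom scores are hypothetical, and a paired t-test is just one of many ways such a contrast might be analyzed.

```python
# Minimal sketch of a simple pre-post comparison, assuming a single
# outcome (here, a symptom score) measured on the same clients before
# and after a system change. All values are hypothetical.
from scipy import stats

pre = [22, 30, 18, 25, 27, 31, 20, 24]    # baseline scores
post = [19, 26, 17, 21, 25, 28, 18, 22]   # scores after the intervention

# Paired comparison: did scores shift from baseline?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(post) / len(post) - sum(pre) / len(pre)

print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant shift says only that something changed -- not that the
# intervention, rather than an extraneous factor, produced the change.
```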

But simple evaluation designs often cannot rule out extraneous causes. By attending to the underlying "theory of change," we can move evaluations beyond description and make them opportunities to produce generalizable knowledge.

Of greatest help is the use of "logic models." This heuristic device disciplines us to spell out clearly what we think is happening in the field experiment: its environment, the resources that apply, how the intervention is expected to operate, and the outcomes it should affect. Natural experiments are inherently rich in competing causes, so logic models also require us to articulate the other factors that might influence outcomes. Identifying these variables in advance lets us plan their measurement. If rental-market conditions influence consumer placement in independent housing, measure them. If service quality reduces utilization, quantify it. Careful conceptualization and relevant data capture are our best analytic defenses.
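As a rough analytic sketch of what measuring the competing causes can buy us, the hypothetical model below adjusts a pre-post contrast for two rival factors named in a logic model. The variable names (placement_rate, vacancy_rate, service_quality) and the data are illustrative assumptions, not figures from any actual study.

```python
# Sketch: adjusting an outcome for competing causes identified in a
# logic model. Column names and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "placement_rate":  [0.31, 0.28, 0.35, 0.40, 0.44, 0.47, 0.43, 0.49],
    "period":          [0, 0, 0, 0, 1, 1, 1, 1],      # 0 = pre, 1 = post
    "vacancy_rate":    [3.1, 2.9, 3.4, 3.8, 4.2, 4.5, 4.1, 4.6],   # rental market
    "service_quality": [2.8, 2.7, 3.0, 3.1, 3.2, 3.4, 3.3, 3.5],   # rated 1-5
})

# The coefficient on "period" estimates the pre-post change after the
# measured competing causes are held constant.
model = smf.ols("placement_rate ~ period + vacancy_rate + service_quality",
                data=df).fit()
print(model.params)
```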

It is equally critical to monitor the intervention itself. It is surprising how often the implementation of a practice deviates from its critical components, threatening "model fidelity." Was less than the therapeutic dose of a medication actually taken? Did "community integration" for a discharged inpatient mean assignment to a residence as intensively structured as the hospital? Without monitoring and measuring what interventions actually occurred, we may draw conclusions about a central intervention that was never truly operating.
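One plain way to keep track of what was actually delivered is a fidelity checklist recorded alongside the outcome data. The sketch below is hypothetical: the component names and thresholds are placeholders, not an established fidelity scale.

```python
# Sketch of implementation monitoring: record whether each critical
# component of the intended model was actually delivered.
# Component names and thresholds are hypothetical placeholders.
CRITICAL_COMPONENTS = {
    "therapeutic_dose_received": 0.90,   # minimum proportion of clients meeting it
    "independent_residence":     0.80,
    "weekly_community_contact":  0.75,
}

def fidelity_gaps(delivered: dict[str, float]) -> list[str]:
    """Return the components whose delivered proportion falls below threshold."""
    return [name for name, threshold in CRITICAL_COMPONENTS.items()
            if delivered.get(name, 0.0) < threshold]

# Hypothetical monitoring data for one site.
observed = {
    "therapeutic_dose_received": 0.62,
    "independent_residence": 0.85,
    "weekly_community_contact": 0.70,
}

print(fidelity_gaps(observed))
# ['therapeutic_dose_received', 'weekly_community_contact'] -- the
# intervention as delivered is not the intervention as designed.
```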

By rigorously exploiting the evaluation opportunities of natural experiments, we produce generalizable findings, leave others a documented conceptual framework that helps them anticipate similar circumstances, and advance the improvement of our services.