Taking Issue
How Well Are We Evaluating System Change?
Walter Leginski, Ph.D.; Frances Randolph, Dr.PH.; Debra J. Rog, Ph.D.
Psychiatric Services 1999

Natural experiments occur all the time in community psychiatry. They are caused by changes in funding, policies, leadership, and law. If we are lucky, we learn about them in advance and can try to rigorously study how they affect service utilization, client satisfaction, or symptomatology. Otherwise, we can use data archives or post hoc study methods, as does the study by Kamis-Gould and associates in this issue.

The opportunity these field experiments offer is moderated by inherent limitations in study designs. Essentially, we are working with simple pre-post designs, taking measurements after the intervention occurs and comparing them with a baseline. There is nothing wrong with this—it is the heartbeat of deductive science. Virtually every advance in sophisticated designs and statistical procedures is built around improving our faith in concluding that pre-post change can be credited to the intervention, not to something extraneous.

But simple evaluation designs often cannot rule out extraneous causes. By attending to the underlying "theory of change," we can move beyond evaluations as description and make them opportunities to produce generalizable knowledge.

Of greatest help is the use of "logic models." This heuristic device disciplines us to spell out clearly what we think is happening in the field experiment: its environment, what resources apply, how the intervention is expected to operate, and the outcomes it will affect. Natural experiments are inherently rich in competing causes. Logic models also require us to articulate other factors that might influence outcomes. By identifying these variables, we anticipate their measurement. If rental-market conditions influence consumer placement in independent housing, measure them. If service quality reduces utilization, quantify it. Careful conceptualization and relevant data capture are our best analytic defenses.

It is equally critical to monitor the intervention itself. It is surprising how often implementation of a practice can deviate from its critical components, threatening "model fidelity." Was less than the therapeutic dose of a medication taken? Did "community integration" for a discharged inpatient mean assignment to a residence as intensively structured as the hospital? Without monitoring or measuring what interventions actually happened, we may draw conclusions about a central intervention that was not truly operating.

By rigorously exploiting the evaluation opportunities of natural experiments, we contribute to generalizable findings, leave others with a documented conceptual framework that helps them anticipate similar circumstances, and advance the improvement of our services.
