Considerable evidence suggests that self-reports of psychiatric illness and substance abuse can be quite accurate when collected with tools designed to heighten disclosure, such as self-administration ( 1 , 2 ) and a private context in which to respond with confidentiality and anonymity ( 3 , 4 , 5 ). Little, however, is known about how well recipients of psychiatric or substance abuse services self-report their care, despite the widespread use of self-reports in surveys such as the National Comorbidity Survey ( 6 ), the National Survey on Drug Use and Health ( 7 ), and the National Health Interview Survey ( 8 ).

The purpose of this study was to determine the accuracy of self-reports of treatment for psychiatric and substance disorders (hereinafter called behavioral health services) by comparing these reports with the agency records of Medicaid managed care enrollees. We also investigated whether accuracy varied by characteristics often used to classify service users. Because collecting self-reports is straightforward and inexpensive compared with medical record extraction, demonstrating the accuracy of self-reports would justify their use in future investigations.

Methods

The data were collected from a survey that was conducted between September 2000 and February 2001 as part of a field test of the Experience of Care and Health Outcomes (ECHO) survey that was undertaken to refine the instrument. The items in the ECHO are from the Consumer Assessment of Behavioral Health Survey and the Mental Health Statistics Improvement Program ( 9 ).

A random sample of 2,500 patients older than 17 years who received behavioral health services for a behavioral health diagnosis between January 1 and June 30, 2000, and who were enrolled in a Medicaid managed care program in Minnesota was selected to participate. Selection was based on diagnosis-related groups and ICD-9 codes and on the guidelines of the ECHO development team ( 10 ). Individuals with severe mental illness were not enrolled in managed care and therefore were not part of the sample. Potential respondents were mailed a questionnaire, and nonrespondents were contacted by telephone and encouraged to return the completed questionnaire or to complete the survey by telephone. Details of the methodology are provided elsewhere ( 11 ). The Minnesota Department of Human Services Institutional Review Board approved the study, and all participants consented to participate.

Of the 2,500 eligible enrollees, 1,118 (45 percent) completed the survey, 243 (10 percent) refused, 975 (39 percent) were not contacted, and 164 (7 percent) did not participate because of a language barrier, illness, or death. Surveys completed by a proxy were eliminated. The sample for analysis consisted of 1,012 individuals.

Self-reported service use was assessed with the question: "People can get counseling, treatment, or medicine for many different reasons, such as for feeling depressed, anxious, or 'stressed out'; personal problems (like when a loved one dies or when there are problems at work); family problems (like marriage problems or when parents and children have trouble getting along); needing help with drug or alcohol use; and for mental or emotional illness. In the last 12 months, did you get counseling, treatment, or medicine for any of these reasons?"

We assessed the accuracy of self-reported use with the percentage of enrollees who did not report treatment (false negatives). We then evaluated whether failure to report treatment varied by gender, age, race or ethnicity, education, residence, and diagnosis based on ICD-9 codes in Medicaid encounter records (schizophrenia, affective disorders, anxiety disorders, other disorders, and combinations).
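Because every sampled enrollee had an administrative record of service receipt, a "no" self-report counts directly as a false negative. The measure can be sketched as follows (a minimal illustration; the function name and the toy data construction are ours, although the counts mirror the 147 of 1,012 reported below):

```python
def false_negative_rate(self_reports):
    """self_reports: list of booleans; True means the respondent reported
    receiving treatment. All respondents were sampled from administrative
    records of service use, so every False entry is a false negative."""
    false_negatives = sum(1 for reported in self_reports if not reported)
    return false_negatives / len(self_reports)

# Toy data mirroring the study's reported counts: 1,012 respondents,
# 147 of whom denied receiving services
reports = [True] * 865 + [False] * 147
print(round(100 * false_negative_rate(reports), 1))  # 14.5, i.e., ~15 percent
```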

Results

The sample included many more women (86 percent) than men (14 percent). Almost one-third (32 percent) of the respondents were 25 to 34 years old, 15 percent were 18 to 24, 31 percent were 35 to 44, 11 percent were 45 to 54, and 10 percent were 55 or older. The sample was predominantly white (74 percent), and 4 percent were black, 4 percent were Asian, 3 percent were American Indian, 4 percent were Hispanic, and 6 percent were of mixed or unknown race. (Respondents were categorized as Hispanic if they reported that ethnicity, regardless of racial designation.)

Only 2 percent of respondents had diagnoses of schizophrenia, a figure that reflects the exclusion of individuals with severe disabilities from managed care in Minnesota. Patients with affective disorders made up 34 percent of the sample, and 13 percent had diagnoses of anxiety disorders. Just over a third (35 percent) had diagnoses of a combination of disorders, and the remaining 15 percent had some other disorder.

Of 1,012 respondents, 147 (15 percent) said they had not received behavioral health services, although administrative data indicated otherwise. As shown in Table 1 , failure to report treatment was associated with gender, age, education, and diagnosis but not with race or ethnicity or with residence.

Table 1 Underreporting on a survey of use of behavioral health treatment among Medicaid managed care enrollees (N=1,012), by demographic and clinical characteristics

A series of logistic regressions was used to isolate the independent contribution of each of the six classification variables and to gauge which categories drove the associations. The regressions used stepwise, forward, and backward elimination with all of the main effects listed in Table 1 and all two-way interactions of those variables. All three selection procedures indicated the same final model, with two main effects (age and education) and no interactions. Respondents aged 35 to 44 and 45 to 54 were less likely (odds ratio [OR]=.49, 95 percent confidence interval [CI]=.28-.87; OR=.36, CI=.16-.81) to underreport than those aged 18 to 24 (the reference group). Those with at least a college degree were more likely (OR=2.46, CI=1.28-4.71) to underreport service use than those without a high school diploma (the reference group).
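The adjusted odds ratios above come from a multivariable model, but the underlying computation can be sketched for a single two-group comparison. The sketch below uses hypothetical cell counts (not the study's data) and the standard Woolf log method for the confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
        a = exposed & underreported,   b = exposed & accurate,
        c = unexposed & underreported, d = unexposed & accurate.
    The CI uses the Woolf (log) method: exp(ln OR +/- z * SE)."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper

# Hypothetical counts: e.g., college graduates vs. a no-diploma reference group
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=10, d=90)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR above 1 with a CI excluding 1 would indicate significantly greater underreporting in the exposed group, which is how the education effect reported above should be read.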

Discussion

Investigations that focus on the use of behavioral health services rely heavily on self-reports. In this study of Medicaid managed care enrollees, most survey respondents accurately self-reported service receipt. Although the proportion was not inconsequential, only a minority of respondents (147 of 1,012, or 15 percent), all of whom were selected on the basis of a record of a behavioral health diagnosis and service receipt, denied receiving services. The discrepancy between the administrative data and the self-reports may be due to differing definitions of service used in the selection of participants and in the questionnaire, to administrative coding errors, to inaccurate recall (perhaps related to organic brain disorder), to reluctance to report sensitive information, or to all of these reasons. Nonetheless, our findings should make researchers reasonably confident that under certain conditions (such as with a self-administered confidential survey) Medicaid enrollees report their behavioral service use fairly accurately. The results comport with those observed in other recent investigations ( 12 , 13 ).

Although the overall level of agreement between the two sources was good, the differences between administrative and self-reported behavioral service use were not random. In the bivariate analysis, underreporting was related to gender, age, education, and diagnosis, but controlling for age and education reduced the effects of gender and diagnosis to insignificance. The finding that the youngest and the most highly educated enrollees provided the highest levels of false negatives is somewhat surprising in the case of education, because highly educated individuals have been among the more willing survey responders ( 14 , 15 ). In any event, the finding that underreporting was not random suggests that reporting errors could introduce bias when groups that differ in age or education are compared.

In considering these findings, it is important to note several potential limitations. The sample focused on a largely female (86 percent) Medicaid population and excluded those with serious and persistent mental illness—a group that may consume the largest amounts of behavioral health services—which raises questions about the generalizability of our findings. Another limitation is the inability to address the issue of false positives. It is possible that the accuracy rate among non-service users was lower than among service users.

Conclusions

Behavioral health services researchers often rely on self-reports of use of psychiatric and substance abuse services. This study has shown self-reports of behavioral health services to be a relatively accurate method of obtaining such information. As such, reliance on self-reports would seem justified in future investigations, especially because self-reports are relatively straightforward and inexpensive compared with medical record extraction.

Acknowledgments

This study was supported by funding provided by the Minnesota Department of Human Services. These views represent the opinions of the authors and not necessarily the supporting agency. The authors thank Jeff Tenney, M.P.H., M.A., and Solveig Bentson for their work in pulling the sample. We also thank Jim Shaul, M.H.A., and Brian Clarridge, Ph.D., for data collection.

Dr. Beebe is affiliated with the Survey Research Center, Department of Health Sciences Research, Mayo Clinic College of Medicine, 200 First Street, S.W., Rochester, MN 55905 (e-mail: [email protected]). Dr. McRae is with Performance Measurement and Quality Improvement, Minnesota Department of Human Services, St. Paul. Dr. Barnes is now with Kinetic Concepts, Inc., San Antonio, Texas.

References

1. McAllister I, Makkai T: Correcting for the under-reporting of drug use in opinion surveys. International Journal of the Addictions 26:945-961, 1991

2. Turner CF, Lessler JT, Devore J: Effects of mode of administration and wording on data quality, in Survey Measurement of Drug Use: Methodological Studies. Edited by Turner CT, Lessler JT, Gfroerer JC. Washington, DC, Government Printing Office, 1992

3. Aquilino WS: Privacy effects on self-reported drug use: interactions with survey mode and respondent characteristics, in The Validity of Self-reported Drug Use: Improving the Accuracy of Survey Estimates. Monograph 167. Edited by Harrison L, Hughes A. Washington, DC, National Institute on Drug Abuse, 1997

4. Aquilino WS, Wright D, Supple A: Response effects due to bystander presence in CASI and paper-and-pencil surveys of drug use and alcohol use. Substance Use and Misuse 35:845-867, 2000

5. Beebe TJ, Harrison PA, McRae JA, et al: An evaluation of computer-assisted self-interviews in a school setting. Public Opinion Quarterly 62:623-632, 1998

6. Harvard School of Medicine: National Comorbidity Survey. Available at www.hcp.med.harvard.edu/ncs/index.php. Accessed Mar 10, 2006

7. US Department of Health and Human Services: National Survey on Drug Use and Health. Available at www.oas.samhsa.gov/nhsda.htm. Accessed Mar 10, 2006

8. Centers for Disease Control and Prevention: National Health Interview Survey. Available at www.cdc.gov/nchs/nhis.htm. Accessed Mar 10, 2006

9. Eisen SV, Shaul JA, Leff HS, et al: Toward a national consumer survey: evaluation of the CABHS and MHSIP instruments. Journal of Behavioral Health Services Research 28:347-369, 2001

10. Experience of Care and Health Outcomes Survey (ECHO) Recommended Sampling and Administration Methodology. Available at www.hcp.med.harvard.edu/echo/ECHO.protocol.sampling.version.3.0.pdf

11. Beebe TJ, Harrison PA, McRae JA, et al: Evaluating behavioral health services in Minnesota's Medicaid population using the Experiences of Care and Health Outcomes (ECHO) survey. Journal of Health Care for the Poor and Underserved 14:608-621, 2003

12. Beach SR, Schlarb J, Musa D, et al: Accuracy of reports of behavioral health service use among public assistance HMO members: results from a record check study. Presented at the annual meeting of the American Association for Public Opinion Research, Miami, Fla, May 12-15, 2005

13. Killeen TK, Brady KT, Gold PB, et al: Comparison of self-report versus agency records of service utilization in a community sample of individuals with alcohol use disorders. Drug and Alcohol Dependence 73:141-147, 2003

14. Fowler FJ: Survey Research Methods: Applied Social Research Methods Series, vol 1. Newbury Park, Calif, Sage, 1988

15. Salant P, Dillman DA: How to Conduct Your Own Survey. New York, Wiley, 1994