
Abstract

Objective:

In 2008, Massachusetts Medicaid implemented a pediatric behavioral health (BH) screening mandate. This study conducted a population-level, longitudinal policy analysis to determine the impact of the policy on ambulatory, emergency, and inpatient BH care in comparison with use of these services in California, where no similar policy exists.

Methods:

With Medicaid Analytic Extract (MAX) data, an interrupted time-series analysis with control series design was performed to assess changes in service utilization in the 18 months (January 2008–June 2009) after a BH screening policy was implemented in Massachusetts and to compare service utilization with California’s. Outcomes included population rates of BH screening, BH-related outpatient visits, BH-related emergency department visits, BH-related hospitalizations, and psychotropic drug use. Medicaid-eligible children from January 1, 2006, to December 31, 2009, with at least ten months of Medicaid eligibility who were older than 4.5 years and younger than 18 years were included.

Results:

Compared with rates in California, Massachusetts rates of BH screening and BH-related outpatient visits rose significantly after Massachusetts implemented its screening policy. BH screening rose to about 13 per 1,000 youths per month during the first nine months, and BH-related outpatient visits increased by about 4.5 per 1,000 youths per month (p<.001). Although BH-related emergency department visits, hospitalizations, and psychotropic drug use increased, there was no difference between the states in the rate of increase.

Conclusions:

The goal of BH screening is to identify previously unidentified children with BH issues and provide earlier treatment options. The short-term outcomes of the Massachusetts policy suggest that screening at preventive care visits led to more BH-related outpatient visits among vulnerable children.

Child behavioral health (BH) conditions frequently are seen in pediatric primary care. In fact, primary care providers are children’s default source of mental health care (1). As a consequence, BH screening is recommended by national organizations, and it is increasingly covered by public and private insurers to increase identification of BH issues and enhance families’ engagement in needed services (2,3). However, evidence of the impact of screening on subsequent BH care is lacking. In particular, there are no studies comparing states with policies requiring screening to states without similar mandates. A legal settlement in Massachusetts, mandating screening of children covered by the state’s Medicaid program, provided an opportunity to study the policy impact of screening mandates on rates of subsequent use of BH services at the population level.

The Massachusetts mandate resulted from a class action lawsuit, Rosie D. versus Patrick (4). In 2006, a U.S. District Court found Massachusetts out of compliance with federal Medicaid law (5). The resulting remedial plan required Massachusetts to provide screening and follow-up to Medicaid-eligible children in what has become one of the largest child mental health system changes in the nation (6). As the first phase of the plan, primary care providers were required, starting January 1, 2008, to screen for BH at well-child visits for all children and youths through age 21. It was not until July 2009 that additional mental health services were implemented as part of the plan, thus providing an 18-month period in which to study the impact of mandated BH screening as a stand-alone intervention (7).

Prior studies of child BH screening have found that screening leads to an increase in outpatient BH visits, although outcomes have been mixed and vary across health systems and diagnoses (8,9). Several studies of the Massachusetts screening mandate in particular found that claims for BH screening and evaluations rose after BH screening began (10,11) and that two factors—identification of a new BH problem at a screening visit and being in foster care at the time of screening—independently predicted subsequent receipt of a BH visit (12). However, these studies did not control for the potential impact of regional or national trends in the use of children’s BH services (increasing nationwide) (13,14) or for ongoing efforts to improve children’s access to BH services (9,15,16). The studies also did not compare trends in Massachusetts with trends in any other states. In this study, we examined the impact of the screening mandated under Rosie D. by using an interrupted time-series design—a strong quasi-experimental approach for assessing changes in service utilization—along with longitudinal data from another state for comparison. Our goal was to determine whether a policy of mandated BH screening increased outpatient, emergency, and inpatient BH care utilization rates at the population level for children enrolled in Medicaid in Massachusetts.

Methods

This study used Medicaid Analytic Extract (MAX) claims data from Massachusetts and California to compare utilization of BH services (screening, outpatient, emergency department [ED], inpatient, and medication) after initiation of a BH screening mandate for children. California was chosen as the comparison state because it has a large, diverse, and stable Medicaid population and had no competing intervention. We initially considered New York, a more proximate state, for comparison, but during the analysis we discovered that it had also initiated a BH policy similar to the Massachusetts policy. An interrupted time-series with control series design was utilized.

BH Screening Policy in Massachusetts Under Rosie D

The Massachusetts policy requires primary care providers to conduct BH screening at well-child visits. Providers must select screens from a menu of validated tools (17) and are compensated if they submit claims using a specific billing code. The state monitors screening rates for Medicaid enrollees and provides feedback to pediatric practices.

Data

Data were obtained for all Medicaid-eligible children for the period January 1, 2006, to December 31, 2009, in two states, Massachusetts and California. The data included demographic and enrollment data (personal summary file), hospital inpatient claims (inpatient file), outpatient claims (other file), and pharmacy claims (Rx file). We used the encrypted Medicaid Statistical Information System identification number to identify individuals. The Massachusetts BH screening policy went into effect January 1, 2008, providing 24 months of data prior to the policy implementation and 18 months of postpolicy data prior to the implementation of additional BH program components in fall 2009.

This study was approved by the institutional review boards of Cambridge Health Alliance and Group Health Cooperative.

Population

The population of interest in this study was children of “screenable” school age. We limited our sample to children older than 4.5 years and younger than 18 years at the time of screening because most providers used the same BH screens (18)—the Pediatric Symptom Checklist and the Youth Pediatric Symptom Checklist (19)—which cover this age range. We included all BH claims for youths up to age 20 in order to capture utilization that occurred after a screening encounter at 17.99 years of age (in other words, to capture BH utilization in the “run-out” period).

Inclusion Criteria

For inclusion, individuals were required to have at least ten months of Medicaid eligibility in each calendar year to ensure that most of their utilization was captured. However, the cohort of youths was “rolling” in the sense that individuals could enter and exit the study cohort in a given year.
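
As an illustration only, the cohort definition above could be sketched in pandas roughly as follows; this is not the study's code, and the enrollment file layout and column names (msis_id, year, month, eligible, birth_date) are hypothetical.

```python
import pandas as pd

def build_screenable_cohort(enrollment: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cohort filter: at least ten months of Medicaid eligibility
    in a calendar year and "screenable" age (older than 4.5, younger than 18).

    Assumes one row per enrollee per month with hypothetical columns:
    msis_id, year, month, eligible (0/1), and birth_date.
    """
    # Count eligible months per enrollee per calendar year.
    eligible_months = (
        enrollment[enrollment["eligible"] == 1]
        .groupby(["msis_id", "year"])["month"]
        .nunique()
        .rename("eligible_months")
        .reset_index()
    )
    keep = eligible_months.loc[eligible_months["eligible_months"] >= 10,
                               ["msis_id", "year"]]
    cohort = enrollment.merge(keep, on=["msis_id", "year"], how="inner")

    # Age in years at the start of each month; keep the screenable window.
    month_start = pd.to_datetime(cohort[["year", "month"]].assign(day=1))
    age = (month_start - pd.to_datetime(cohort["birth_date"])).dt.days / 365.25
    return cohort[(age > 4.5) & (age < 18)]
```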

Population Standardization

Because we used a rolling cohort of youths, change in the composition of the patient population was a threat to the validity of the interrupted time-series design (20). For example, changes in the distribution of sex or age over study years could confound any change in BH utilization rates. We examined the demographic and enrollment characteristics of the population in each year and found differences in managed care enrollment, eligibility status (including foster care and poverty), race, and Hispanic ethnicity. Thus we standardized the population of youths on these characteristics to the distribution in January 2008, the date that the screening policy was implemented. It is also important to note that there were no sudden changes in level or slope of demographic characteristics that could confound observed effects. Because the analysis of trends was within populations, there was no need to standardize populations between states.
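
For readers unfamiliar with direct standardization, the sketch below shows the general idea of reweighting stratum-specific monthly rates to a fixed (January 2008) population composition. The strata and column names are hypothetical, and this is not the authors' implementation.

```python
import pandas as pd

def standardize_rates(stratum_rates: pd.DataFrame,
                      standard_pop: pd.DataFrame) -> pd.Series:
    """Directly standardize monthly rates to a fixed (January 2008) population.

    stratum_rates: one row per (month, stratum) with hypothetical columns
        month, stratum, and rate (events per 1,000 eligible youths).
    standard_pop: one row per stratum with columns stratum and weight, where
        the weights are the January 2008 population shares and sum to 1.
    """
    merged = stratum_rates.merge(standard_pop, on="stratum", how="inner")
    # Standardized rate for a month is the weighted sum of stratum rates.
    merged["weighted_rate"] = merged["rate"] * merged["weight"]
    return merged.groupby("month")["weighted_rate"].sum()
```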

Outcome Variables

Definitions of utilization outcomes followed prior research (12,21) and were met if a minimum of one visit per utilization type (categories 1–5) per day occurred. The outcomes were as follows:

1. BH screening: any outpatient visit with a 96110 Current Procedural Terminology code (developmental testing).
2. BH-related outpatient visits: psychiatric services (such as diagnostic interviews, psychopharmacology management, and psychotherapy); health behavioral assessment and intervention services; visits to other mental health professionals; Massachusetts Behavioral Health Partnership codes used to track services not otherwise identified with existing codes (crisis intervention, family counseling, and case management); and well-child or ambulatory visits with an associated BH-related ICD-9 code (290–319).
3. BH-related ED visits: visits identified by the place-of-service code on records in the “other” MAX data file and by an associated BH-related ICD-9 code.
4. BH-related hospitalizations: inpatient stays with a BH-related ICD-9 code.
5. Psychotropic medication use: medications with National Drug Codes (NDCs) included on the Mental Health Research Network list.
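
A minimal sketch of how claims might be assigned to these categories appears below. The CPT code 96110 and the ICD-9 range 290–319 come from the definitions above; the column names, place-of-service values, and NDC list are placeholder assumptions, not the study's actual specifications.

```python
from typing import Optional

import pandas as pd

# ICD-9 range 290-319 and CPT 96110 come from the text; the NDC list is a
# placeholder for the Mental Health Research Network list.
BH_ICD9 = {str(code) for code in range(290, 320)}
SCREENING_CPT = {"96110"}
PSYCHOTROPIC_NDCS = set()  # placeholder for the MHRN NDC list

def classify_claim(row: pd.Series) -> Optional[str]:
    """Assign a claim row to one of the utilization categories (1-5).

    Column names and place-of-service values are hypothetical; pharmacy
    claims would ordinarily be processed separately from medical claims.
    """
    icd3 = str(row.get("icd9", ""))[:3]          # first three ICD-9 digits
    if row.get("cpt") in SCREENING_CPT:
        return "bh_screening"                    # category 1
    if row.get("place_of_service") == "emergency" and icd3 in BH_ICD9:
        return "bh_ed_visit"                     # category 3
    if row.get("setting") == "inpatient" and icd3 in BH_ICD9:
        return "bh_hospitalization"              # category 4
    if row.get("setting") == "outpatient" and icd3 in BH_ICD9:
        return "bh_outpatient"                   # category 2 (simplified)
    if row.get("ndc") in PSYCHOTROPIC_NDCS:
        return "psychotropic_use"                # category 5
    return None
```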

The medication categories included attention-deficit disorder–other (nonstimulant medications), antidepressants, antianxiety-other (nonbenzodiazepines), anticonvulsants, antipsychotic–first generation, antipsychotic–second generation, benzodiazepines, “combo” (all combination psychotropic medications), hypnotic-other (such as zolpidem), lithium, and stimulants. A full list of study medications and NDCs is available on request. Drugs with possible dual use were excluded, such as antidepressants used primarily for migraines and enuresis among children (imipramine and amitriptyline), antidepressants used for sleep (doxepin and trazodone) when no other psychiatric medication was being used and there was no BH ICD-9 code, and anticonvulsants unless accompanied by any BH ICD-9 code. For example, if a patient had a diagnosis of bipolar disorder on any prior visit and also used an anticonvulsant, that patient was counted as a psychotropic medication user.
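
The dual-use exclusions amount to a simple decision rule, sketched below with hypothetical labels; the actual study logic and NDC lists were more detailed.

```python
def counts_as_psychotropic(drug_class: str, has_bh_dx: bool,
                           other_psych_med: bool) -> bool:
    """Illustrative dual-use exclusion rule from the definitions above.

    drug_class: hypothetical label such as "anticonvulsant",
        "sleep_antidepressant" (doxepin, trazodone),
        "dual_use_antidepressant" (imipramine, amitriptyline),
        or "other_psychotropic".
    has_bh_dx: any BH-related ICD-9 code (290-319) on a prior or current claim.
    other_psych_med: another psychotropic medication in concurrent use.
    """
    if drug_class == "anticonvulsant":
        return has_bh_dx                        # count only with a BH diagnosis
    if drug_class == "sleep_antidepressant":
        return has_bh_dx or other_psych_med     # doxepin/trazodone
    if drug_class == "dual_use_antidepressant":
        return False                            # imipramine/amitriptyline excluded
    return True                                 # other study psychotropics count
```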

Calculation of Utilization Rates

We calculated monthly population utilization rates and adjusted for managed care enrollment, eligibility status (such as foster care and poverty), race, and Hispanic ethnicity as described above. The numerator for rates was the presence of a claim for service in the calendar month for one of the diagnostic or procedure codes listed above. The denominator for rates was all youths eligible to receive services (in other words, they were currently insured by Medicaid) and of screenable age in the same calendar month. That is, we did not remove youths from the denominator in subsequent study months when they received BH screening in prior study months.
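
A minimal sketch of the rate calculation, assuming hypothetical person-month tables, is shown below; it is illustrative only.

```python
import pandas as pd

def monthly_rate(events: pd.DataFrame, eligible: pd.DataFrame) -> pd.Series:
    """Monthly utilization rate per 1,000 eligible youths (illustrative).

    events: one row per youth per month with a qualifying claim
        (hypothetical columns msis_id and month).
    eligible: one row per eligible, screenable-age youth per month
        (msis_id and month); youths stay in the denominator after screening.
    """
    numerator = events.groupby("month")["msis_id"].nunique()
    denominator = eligible.groupby("month")["msis_id"].nunique()
    return 1_000 * numerator.reindex(denominator.index, fill_value=0) / denominator
```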

We further adjusted the population utilization rates for seasonality using the Census Bureau X-11 algorithm (PROC X11 in SAS) (22,23).
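
The study used the Census Bureau X-11 procedure in SAS. As a rough stand-in only (not the X-11 algorithm itself), a classical multiplicative decomposition in Python can remove a seasonal component from a monthly rate series, as sketched below.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

def deseasonalize(rate: pd.Series) -> pd.Series:
    """Crude stand-in for X-11 seasonal adjustment of a monthly rate series.

    rate: monthly rate values indexed by a monthly DatetimeIndex.
    """
    decomposition = seasonal_decompose(rate, model="multiplicative", period=12)
    return rate / decomposition.seasonal
```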

Interrupted Time-Series Analysis

We fit segmented regression models (24–26) for each of the BH utilization rates in the 24 months prior to mandatory screening and 18 months postmandate. The segmented regression models included terms for the change in intercept (immediate-level change), secular trend (overall slope), and change in trend (increase versus decrease in slope postimplementation). We then constructed difference-in-differences models comparing Massachusetts with California to determine whether there were significant changes in outcomes related to the policy.
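
A minimal sketch of models of this form, written with statsmodels, is shown below. The variable names are hypothetical, and the sketch omits details of the actual analysis such as the censoring of phase-in months and any adjustment for autocorrelation.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_segmented_models(df: pd.DataFrame):
    """Illustrative interrupted time-series models for one outcome rate.

    df: one row per state per month with hypothetical columns:
        rate        monthly outcome rate per 1,000 youths
        time        months elapsed since the start of the series
        policy      1 for calendar months on or after January 2008, else 0
        time_after  months elapsed since January 2008 (0 beforehand)
        ma          1 for Massachusetts, 0 for California
    """
    # Single-state segmented regression: level change (policy), secular
    # trend (time), and change in trend (time_after).
    single = smf.ols("rate ~ time + policy + time_after",
                     data=df[df["ma"] == 1]).fit()

    # Difference-in-differences version: the interaction terms estimate the
    # Massachusetts-specific level and trend changes relative to California.
    did = smf.ols("rate ~ time + policy + time_after"
                  " + ma + ma:time + ma:policy + ma:time_after",
                  data=df).fit()
    return single, did
```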

Results

The demographic characteristics of the Massachusetts and California populations are shown in Table 1. The two states were fairly similar with regard to age and gender distributions but differed substantially by race and eligibility criteria. However, these differences did not affect the almost identical baseline trends in outcome measures.

TABLE 1. Characteristics of eligible Massachusetts and California youths^a

| Characteristic | Massachusetts 2006 (N=258,751) | Massachusetts 2007 (N=262,742) | Massachusetts 2008 (N=282,050) | Massachusetts 2009 (N=287,984) | Massachusetts all (N=1,091,527) | California 2006 (N=2,025,433) | California 2007 (N=2,013,966) | California 2008 (N=2,058,385) | California 2009 (N=2,192,551) | California all (N=8,290,365) |
|---|---|---|---|---|---|---|---|---|---|---|
| Age | | | | | | | | | | |
| 4–6 | 16.1 | 16.1 | 15.7 | 15.9 | 16.0 | 15.8 | 16.0 | 16.0 | 16.2 | 16.0 |
| 7–9 | 21.0 | 20.9 | 20.7 | 21.0 | 20.9 | 20.3 | 19.9 | 20.1 | 20.2 | 20.1 |
| 10–12 | 21.2 | 20.5 | 19.6 | 19.9 | 20.3 | 20.4 | 19.9 | 19.4 | 19.1 | 19.7 |
| 13–15 | 22.2 | 21.7 | 20.7 | 20.1 | 21.1 | 20.3 | 20.3 | 20.1 | 19.8 | 20.1 |
| 16–18 | 18.2 | 19.3 | 20.3 | 20.0 | 19.5 | 20.2 | 20.7 | 21.1 | 21.4 | 20.9 |
| 19 | 1.4 | 1.6 | 3.0 | 3.1 | 2.3 | 3.1 | 3.2 | 3.3 | 3.4 | 3.3 |
| Sex | | | | | | | | | | |
| Female | 49.3 | 49.2 | 49.1 | 49.2 | 49.2 | 52.8 | 52.8 | 52.7 | 52.4 | 52.7 |
| Male | 50.7 | 50.8 | 50.9 | 50.8 | 50.8 | 47.2 | 47.2 | 47.3 | 47.6 | 47.3 |
| Race-ethnicity | | | | | | | | | | |
| White | 44.2 | 43.4 | 42.2 | 38.3 | 41.9 | 17.9 | 17.4 | 17.1 | 16.7 | 17.2 |
| Black | 14.3 | 14.3 | 13.9 | 12.7 | 13.8 | 12.3 | 12.0 | 11.7 | 11.0 | 11.7 |
| American Indian or Alaska Native | .3 | .3 | .3 | .2 | .3 | .5 | .5 | .5 | .5 | .5 |
| Asian | 4.5 | 4.5 | 4.4 | 4.3 | 4.4 | 5.0 | 5.1 | 4.9 | 4.6 | 4.9 |
| Hispanic | 24.2 | 25.5 | 25.2 | 25.9 | 25.2 | 59.5 | 60.4 | 61.2 | 62.3 | 60.9 |
| Hawaiian or Pacific Islander | 0 | 0 | 0 | 0 | 0 | 3.0 | 2.6 | 2.5 | 2.5 | 2.7 |
| Multiple | .4 | .1 | .2 | 1.1 | .5 | 0 | 0 | 0 | 0 | 0 |
| Unknown | 12.2 | 11.8 | 13.9 | 17.5 | 13.9 | 1.9 | 1.9 | 2.1 | 2.4 | 2.1 |
| Eligibility | | | | | | | | | | |
| Blind or other physical disability | 8.2 | 8.5 | 8.1 | 8.4 | 8.3 | 5.0 | 5.1 | 5.1 | 5.0 | 5.1 |
| Child | 13.0 | 12.4 | 11.3 | 13.6 | 12.6 | 58.1 | 56.9 | 56.9 | 57.7 | 57.4 |
| Child poverty | 65.7 | 64.9 | 63.4 | 61.9 | 63.9 | 2.9 | 3.3 | 3.8 | 4.1 | 3.5 |
| Child other | 7.4 | 8.0 | 8.0 | 6.0 | 7.3 | 12.5 | 13.0 | 12.4 | 11.4 | 12.3 |
| Foster care | .2 | .1 | .2 | .1 | .1 | 4.9 | 4.9 | 4.8 | 4.5 | 4.8 |
| Child demonstration^b | 0 | 0 | 0 | 0 | 0 | 2.1 | 2.0 | 1.9 | 1.8 | 2.0 |
| Unknown | 5.6 | 6.1 | 9.2 | 10.0 | 7.6 | 14.6 | 14.7 | 15.1 | 15.5 | 15.0 |
| Any private insurance | 14.2 | 14.5 | 15.4 | 15.6 | 15.0 | 6.1 | 6.3 | 6.7 | 7.5 | 6.7 |
| Managed care combo | | | | | | | | | | |
| Comprehensive plan only | 48.4 | 49.5 | 49.3 | 49.6 | 49.2 | 1.8 | 2.0 | 2.3 | 2.7 | 2.2 |
| PCCM and BH^c | 27.9 | 26.1 | 25.3 | 24.3 | 25.8 | 0 | 0 | 0 | 0 | 0 |
| Fee for service | 0 | 0 | 0 | 0 | 0 | 16.7 | 16.8 | 16.4 | 16.0 | 16.5 |
| Unknown | 23.8 | 24.3 | 25.4 | 26.1 | 25.0 | 81.5 | 81.2 | 81.3 | 81.3 | 81.3 |

^a Values are percentages. Individuals may be counted in multiple years.

^b California children eligible as part of the state’s short-term special projects.

^c PCCM, primary care clinician program; BH, behavioral health.


After reviewing the time-series analyses, we observed that the overall monthly rate of screening stabilized at nine months postmandate. Preliminary data from MassHealth also indicated that achieving a 50% BH screening rate at well-child visits took nine months. Thus we censored the first eight months of observations in the time-series analyses in order to evaluate the impact of the policy when it achieved at least “half strength.” Such censoring has been applied in several previous studies where the analysis excluded the implementation or “phase-in” period (27–30).

Figure 1 shows the rate of BH screening among all eligible youths (including those without ambulatory visits). Use of the screening code rose to about 13 per 1,000 youths per month during the first nine months after the screening mandate was implemented. The rate of BH screening remained relatively stable over the next ten months, averaging about 11 per 1,000 youths per month. During the same time frame, there was no concomitant rise in screening in the California data.

FIGURE 1. Rates of behavioral health screening (CPT 96110) in Massachusetts and California before and after the 2008 Massachusetts mandate

Figure 2 shows the monthly rate of any BH-related outpatient service utilization. The adjusted rate was about 35 per 1,000 youths per month in the two years prior to the mandate, with a slightly increasing secular trend (.8 per 1,000, p=.03). After the phase-in period for screening, rates of outpatient use increased dramatically in the fall of 2008, to about 50 per 1,000 youths per month. More specifically, outpatient use began to rise during the phase-in period and then remained stable thereafter. The trend in utilization increased at about 4.5 per 1,000 youths per month (p<.001). No increase was seen in California. Estimates for policy (immediate change), time (secular trend), and time after the mandate (postmandate trend) resulting from the regression models are shown in Table 2.

FIGURE 2. Rates of any outpatient behavioral health care utilization in Massachusetts and California before and after the 2008 Massachusetts mandate

TABLE 2. Segmented regression model results by behavioral health care utilization type for youths in Massachusetts and California

| Variable | Estimate | SE | t^a | Approx Pr>\|t\| |
|---|---|---|---|---|
| Rate of any outpatient utilization | | | | |
| Intercept | 31.36 | 1.64 | 19.11 | <.001 |
| Policy | 10.50 | 1.04 | 10.09 | <.001 |
| Time | .08 | .03 | 2.33 | .03 |
| Time after | .45 | .13 | 3.51 | .001 |
| Rate of emergency department use with a behavioral health diagnosis | | | | |
| Intercept | 15.98 | .18 | 89.21 | <.001 |
| Policy | –1.83 | .40 | –4.53 | <.001 |
| Time | .03 | .00 | 9.03 | <.001 |
| Time after | .48 | .07 | 6.93 | <.001 |
| Rate of inpatient stays for behavioral diagnoses | | | | |
| Intercept | .25 | .05 | 4.74 | <.001 |
| Policy | –.05 | .03 | –1.51 | .14 |
| Time | .00 | .00 | .95 | .35 |
| Time after | .01 | .00 | 3.13 | .004 |
| Rate of psychotropic medication utilization | | | | |
| Intercept | 138.58 | 4.38 | 31.62 | <.001 |
| Policy | –.66 | 2.78 | –.24 | .82 |
| Time | –.22 | .09 | –2.49 | .02 |
| Time after | .97 | .35 | 2.81 | .01 |

^a df=1


Figure 3 shows the rate of BH-related ED visits over the study period. BH-related ED use averaged about 17 per 1,000 youths per month in the premandate period. Once the mandate took effect, rates of ED use increased at a rate of 4.8 per 1,000 per month (p<.001). However, the difference-in-differences analysis revealed that a similar increase occurred in California during the same period.

FIGURE 3. Rates of behavioral health–related emergency department visits in Massachusetts and California before and after the 2008 Massachusetts mandate

BH-related hospitalizations averaged about .3 per 1,000 youths per month in the premandate period. Beginning in the fall of 2008, inpatient stays with a BH diagnosis began to increase by about .1 per 1,000 youths per month, with an expected rate of about .4 per 1,000 by June 2009. The difference-in-differences model did not show any statistically significant change in inpatient stays in Massachusetts compared with California. The monthly rate of any psychotropic medication use was about 126 per 1,000 in the premandate period, with a decreasing secular trend of 2.2 per 1,000 per month (p=.02). After the implementation period, the trend reversed, with the rate increasing at about 9.7 per 1,000 per month compared with the California control (p=.01). Again, the difference-in-differences interrupted time-series analysis did not show statistically significant change for psychotropic medication use.

Discussion

The Massachusetts policy mandate to screen all children and youths for BH problems is the first of its kind in the nation (21). In examining the policy’s impact on service utilization rates compared with those of California, we found that screening (use of the 96110 code) significantly increased in Massachusetts and was associated with a concomitant increase in rates of BH-related outpatient services. However, when assessed in comparison with trends in California, we found that BH hospitalizations, BH-related ED utilization, and psychotropic medication use in Massachusetts showed no evidence of being influenced by the new policy.

It is important to put our findings into context. During the study period, BH-related ED visits for children rose (31,32). Evidence also suggests that psychotropic medication use and outpatient mental health treatment for children increased between the periods 1996–1998 and 2010–2012 (33). Inpatient hospitalizations related to BH conditions for children also rose between 2006 and 2011 (32). Other changes occurred in the delivery of child and adolescent mental health services, including the black-box warning on the use of antidepressants (30), rising rates of specific disorders (specifically, bipolar disorder) (34), and heightened interest in identifying and treating BH issues in pediatric offices (35). However, none of these changes coincided with the changes we noted postpolicy in Massachusetts.

The increase in screening rates after the mandate has been documented elsewhere (10,11), but the investigators in those studies did not control for external factors, nor did they compare trends with other states. It is possible that we missed screening conducted without the recommended code; however, a prepolicy Massachusetts Medicaid chart review found that formal screening tools were used in only 4% of well-child visits (36). In the comparison state, California, screening may have occurred as part of Early and Periodic Screening, Diagnosis and Treatment visits without specific reimbursement for identified screening codes, but California’s screening trends did not change over time. Our analysis provides evidence that mandates—coupled with reimbursement—can rapidly increase BH screening rates in pediatric practice and increase the utilization rates of outpatient BH services.

Several questions remain unanswered. First, does the increased BH utilization translate to better or more appropriate care? Further investigation is needed to ascertain treatment quality and outcomes. Second, does early entry into treatment ultimately mitigate the need for more expensive care, such as ED visits and hospitalizations? Although we did not see a significant change in these outcomes, our time frame for evaluation was limited because Massachusetts added programs after 18 months, which would confound any longer-term analysis of outcomes related strictly to the screening policy. Further work is needed to determine the relationship between screening, treatment, and inpatient utilization.

Several other findings merit discussion. For example, despite an increase in outpatient care rates, we saw no concomitant increase in psychotropic medication utilization rates. This finding is consistent with prior studies of screening programs (21,37). It is important to note that BH-related outpatient services include visits delivered by BH professionals as well as primary care clinicians. It is possible that clinicians initially used first-line–recommended “talk therapy” options (2,38–41) rather than psychopharmacologic interventions or that visits were more likely to be for BH issues commonly encountered in primary care (for example, adjustment reactions and depression) rather than more severe mental illness. Prior research indicates that newly identified children were more likely to have internalizing than externalizing symptoms (8). This finding may also reflect pediatricians’ recent concern regarding overmedication of children and Food and Drug Administration black-box warnings (30). Further research would be needed to examine specific diagnoses and whether psychotropic medication would have been indicated.

In addition, although both states noted an increase in BH-related ED visit rates, we did not see a significant difference between the states. This is contrary to a recent study of a single system in Massachusetts where a combination of screening and colocated primary and BH care led to an increase in BH-related ED visit rates (42). However, that study did not use a comparison state and focused on the impact of screening and colocated care, which may have contributed to increased ED use for reasons cited in the article.

Our study had a number of limitations. First, we lacked information on BH need and clinical outcomes at the population level. Second, the interrupted time-series approach assumed that no cointerventions occurred simultaneously with the policy of interest. However, our discussions with Massachusetts program staff and investigation of relevant documents did not identify any other policy changes in the same period. In addition, it is important to note that this study did not examine the impact of the policy on individual children identified as having BH issues or on children who received screening. It was intended to examine the population-level impact of the screening policy on utilization rates. Our interrupted time-series design explicitly controlled for secular trends in two states and was thus a strong quasi-experimental design for examining the impact of policy on rates of service utilization (43).

Conclusions

The goal of BH screening in primary care is to identify previously unidentified children with BH issues and provide earlier treatment options. As in other screening programs, the hope is to ultimately deter longer-term negative outcomes. Although we could not measure the long-term outcomes (recovery rates) of the Massachusetts policy, the policy increased screening at preventive care visits. The increase in screening, in turn, was associated with an increase in use of outpatient BH care in the Medicaid-insured population of youths in Massachusetts.

Dr. Hacker is with the Allegheny County Health Department and with the Graduate School of Public Health, University of Pittsburgh, Pittsburgh. Dr. Penfold is with the Department of Health Services Research, Group Health Research Institute, Seattle. Dr. Arsenault is with the Institute for Community Health, Cambridge, Massachusetts, and Harvard Medical School, Boston. Dr. Zhang and Dr. Soumerai are with the Department of Population Medicine, Harvard Medical School, and with Harvard Pilgrim Healthcare Institute, Boston. Dr. Wissow is with the Department of Health, Behavior and Society, Johns Hopkins Bloomberg School of Public Health, Baltimore.

All phases of this study were supported by grant R21MH094942 and grant U19MH092201 from the National Institute of Mental Health.

Dr. Penfold has received research funding through his institution from Novartis Pharmaceuticals. The other authors report no financial relationships with commercial interests.

The authors acknowledge Chester J. Pabiniak, M.S., for his analytic contribution.

References

1 Tolan PH, Dodge KA: Children’s mental health as a primary care and concern: a system for comprehensive support and service. American Psychologist 60:601–614, 2005

2 Foy JM; American Academy of Pediatrics Task Force on Mental Health: Enhancing pediatric mental health care: report from the American Academy of Pediatrics Task Force on Mental Health. Introduction. Pediatrics 125(suppl 3):S69–S74, 2010

3 New Freedom Commission on Mental Health: Achieving the Promise: Transforming Mental Health Care in America. Final Report. DHHS pub no SMA-03-3832. Rockville, Md, US Department of Health and Human Services, President’s New Freedom Commission on Mental Health, 2003

4 Center for Public Representation: Rosie D: Reforming the Health Care System in Massachusetts. The Remedy: The Pathway to Home-Based Services. Northampton, Mass, Center for Public Representation, 2007. http://www.rosied.org/Default.aspx?pageId=84564

5 Hanlon P: Rosie D lawsuit prompts mandatory mental health screening in Massachusetts. New England Psychologist, 2008. http://www.nepsy.com/leading/0803_ne_rosie.html

6 Home and Community Based Behavioral Health Services for Children and Families. Boston, Executive Office of Health and Human Services, 2015. http://www.mass.gov/eohhs/gov/commissions-and-initiatives/cbhi/home-and-community-based-behavioral-health-srvcs.html

7 Children’s Behavioral Health Initiative. Boston, Executive Office of Health and Human Services, 2015. www.mass.gov/masshealth/cbhi

8 Hacker K, Arsenault L, Franco I, et al.: Referral and follow-up after mental health screening in commercially insured adolescents. Journal of Adolescent Health 55:17–23, 2014

9 Wissow LS, Brown J, Fothergill KE, et al.: Universal mental health screening in pediatric primary care: a systematic review. Journal of the American Academy of Child and Adolescent Psychiatry 52:1134–1147, 2013

10 Behavioral Health Screening Jan 2008–June 2014. Boston, Executive Office of Health and Human Services, 2014. http://www.mass.gov/eohhs/gov/commissions-and-initiatives/cbhi/cbhi-publications-and-reports.html

11 Kuhlthau K, Jellinek M, White G, et al.: Increases in behavioral health screening in pediatric care for Massachusetts Medicaid patients. Archives of Pediatrics and Adolescent Medicine 165:660–664, 2011

12 Hacker KA, Penfold R, Arsenault L, et al.: Screening for behavioral health issues in children enrolled in Massachusetts Medicaid. Pediatrics 133:46–54, 2014

13 Olfson M, Blanco C, Liu L, et al.: National trends in the outpatient treatment of children and adolescents with antipsychotic drugs. Archives of General Psychiatry 63:679–685, 2006

14 Olfson M, Blanco C, Wang S, et al.: National trends in the mental health care of children, adolescents, and adults by office-based physicians. JAMA Psychiatry 71:81–90, 2014

15 Sarvet B, Gold J, Bostic JQ, et al.: Improving access to mental health care for children: the Massachusetts Child Psychiatry Access Project. Pediatrics 126:1191–1200, 2010

16 Straus JH, Sarvet B: Behavioral health care for children: the Massachusetts Child Psychiatry Access Project. Health Affairs 33:2153–2161, 2014

17 The MassHealth-Approved Screening Tools, 2012. Boston, Executive Office of Health and Human Services. http://www.mass.gov/eohhs/gov/commissions-and-initiatives/cbhi/screening-for-behavioral-health-conditions/behavioral-health-screening-tools/chart-of-masshealth-approved-screening-tools.html

18 Savageau J, Simons J, Willis G, et al.: Universal Behavioral Health Screening in Massachusetts Children on Medicaid: Preliminary Assessment. Presented at the Children’s Mental Health Research and Policy Conference, Tampa, March 2–5, 2014

19 Pediatric Symptom Checklist. Boston, Massachusetts General Hospital, Department of Psychiatry, 2012. http://www.massgeneral.org/psychiatry/services/psc_home.aspx

20 Shadish W, Cook T, Campbell D: Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, Houghton Mifflin, 2002

21 Hacker KA, Penfold RB, Arsenault LN, et al.: Behavioral health services following implementation of screening in Massachusetts Medicaid children. Pediatrics 134:737–746, 2014

22 Bobbit LG, Otto MC: Effects of Forecasts on the Revisions of Seasonally Adjusted Data Using the X-11 Adjustment Procedure; in Proceedings of the Business and Economic Statistics Section of the American Statistical Association. Alexandria, Va, American Statistical Association, 1990

23 Buszuwski JA: Alternative ARIMA Forecasting Horizons When Seasonally Adjusting Producer Price Data with X-11-ARIMA; in Proceedings of the Business and Economic Statistics Section of the American Statistical Association. Alexandria, Va, American Statistical Association, 1987

24 Gillings D, Makuc D, Siegel E: Analysis of interrupted time series mortality trends: an example to evaluate regionalized perinatal care. American Journal of Public Health 71:38–46, 1981

25 Wagner AK, Soumerai SB, Zhang F, et al.: Segmented regression analysis of interrupted time series studies in medication use research. Journal of Clinical Pharmacy and Therapeutics 27:299–309, 2002

26 Penfold RB, Zhang F: Use of interrupted time series analysis in evaluating health care quality improvements. Academic Pediatrics 13(suppl):S38–S44, 2013

27 Adams AS, Zhang F, LeCates RF, et al.: Prior authorization for antidepressants in Medicaid: effects among disabled dual enrollees. Archives of Internal Medicine 169:750–756, 2009

28 Lu CY, Soumerai SB, Ross-Degnan D, et al.: Unintended impacts of a Medicaid prior authorization policy on access to medications for bipolar illness. Medical Care 48:4–9, 2010

29 Burns ME, Busch AB, Madden JM, et al.: Effects of Medicare Part D on guideline-concordant pharmacotherapy for bipolar I disorder among dual beneficiaries. Psychiatric Services 65:323–329, 2014

30 Lu CY, Zhang F, Lakoma MD, et al.: Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study. BMJ 348:g3596, 2014

31 Pittsenbarger ZE, Mannix R: Trends in pediatric visits to the emergency department for psychiatric illnesses. Academic Emergency Medicine 21:25–30, 2014

32 Torio CM, Encinosa W, Berdahl T, et al.: Annual report on health care for children and youth in the United States: national estimates of cost, utilization and expenditures for children with mental health conditions. Academic Pediatrics 15:19–35, 2015

33 Olfson M, Druss BG, Marcus SC: Trends in mental health care among children and adolescents. New England Journal of Medicine 372:2029–2038, 2015

34 Moreno C, Laje G, Blanco C, et al.: National trends in the outpatient diagnosis and treatment of bipolar disorder in youth. Archives of General Psychiatry 64:1032–1039, 2007

35 Kolko DJ, Perrin E: The integration of behavioral health interventions in children’s health care: services, science, and suggestions. Journal of Clinical Child and Adolescent Psychology 43:216–228, 2014

36 Savageau J, Cabral L, Gettens J, et al.: Clinical Topic Review: Behavioral Health Screening for Children With Well Visits. Worcester, Mass, UMass Medical School, Center for Health Policy and Research, Commonwealth Medicine, 2009

37 Hacker KA, Penfold RB, Arsenault LN, et al.: Effect of pediatric behavioral health screening and colocated services on ambulatory and inpatient utilization. Psychiatric Services 66:1141–1148, 2015

38 Wolraich M, Brown L, Brown RT, et al.: ADHD: clinical practice guideline for the diagnosis, evaluation, and treatment of attention-deficit/hyperactivity disorder in children and adolescents. Pediatrics 128:1007–1022, 2011

39 Findling RL, Drury S, Jensen PS, et al.: Practice Parameter for the Use of Atypical Antipsychotic Medications in Children and Adolescents. American Academy of Child and Adolescent Psychiatry Practice Parameters. Washington, DC, American Academy of Child and Adolescent Psychiatry, 2011. http://www.aacap.org/App_Themes/AACAP/docs/practice_parameters/Atypical_Antipsychotic_Medications_Web.pdf

40 Birmaher B, Brent D, Bernet W, et al.: Practice parameter for the assessment and treatment of children and adolescents with depressive disorders. Journal of the American Academy of Child and Adolescent Psychiatry 46:1503–1526, 2007

41 Connolly SD, Bernstein GA; Work Group on Quality Issues: Practice parameter for the assessment and treatment of children and adolescents with anxiety disorders. Journal of the American Academy of Child and Adolescent Psychiatry 46:267–283, 2007

42 Hacker K, Penfold R, Zhang F, et al.: Impact of electronic health record transition on behavioral health screening in a large pediatric practice. Psychiatric Services 63:256–261, 2012

43 Shadish WR, Cook TD, Campbell DT: Quasi-experiments: interrupted time-series designs; in Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, Houghton Mifflin, 2002