Published Online: https://doi.org/10.1176/appi.ps.201500550

Abstract

Objective:

This study examined the effects of a depression care quality improvement (QI) intervention implemented by using Community Engagement and Planning (CEP), which supports collaboration across health and community-based agencies, or Resources for Services (RS), which provides technical assistance, on training participation and service delivery by primarily unlicensed, racially and ethnically diverse case managers in two low-income communities in Los Angeles.

Methods:

The study was a cluster-randomized trial with program-level assignment to CEP or RS for implementation of a QI initiative for providing training for depression care. Staff with patient contact in 84 health and community-based programs that were eligible for the provider outcomes substudy were invited to participate in training and to complete baseline and one-year follow-up surveys; 117 case managers (N=59, RS; N=58, CEP) from 52 programs completed follow-up. Primary outcomes were time spent providing services in community settings and use of depression case management and problem-solving practices. Secondary outcomes were depression knowledge and attitudes and perceived system barriers.

Results:

CEP case managers had greater participation in depression training, spent more time providing services in community settings, and used more problem-solving therapeutic approaches compared with RS case managers (p<.05).

Conclusions:

Training participation, time spent providing services in community settings, and use of problem-solving skills among primarily unlicensed, racially and ethnically diverse case managers were greater in programs that used CEP rather than RS to implement depression care QI, suggesting that CEP offers a model for including case managers in communitywide depression care improvement efforts.

Depression is common across racial, ethnic, and socioeconomic groups (1), it affects health, cost, and productivity outcomes (2,3), and it is a leading cause of disability (4–6). Compared with whites, African Americans report lower lifetime prevalence of major depression, but they have more severe symptoms (7,8), use fewer services, and terminate treatment early (7,9,10). African Americans and Latinos are less likely than whites to receive evidence-based depression care and have worse outcomes (8,9,11–13).

Although evidence-based practices for depression exist, their implementation in community settings has been limited (11,14). Quality improvement (QI) programs for depression that are based on the collaborative care model improve depressive symptoms, quality of life, and social outcomes and reduce disparities in outcomes among racial and ethnic groups (12). However, these programs are often not implemented in underresourced communities, where there is low availability of specialty care and historical distrust of services (13,15). In such settings, depressed individuals may seek support from alternative sectors that are not included in health care QI efforts and in which unlicensed providers may be their primary sources of support. The Affordable Care Act (ACA) provides incentives for health homes to utilize case managers (16,17), but these roles in collaborative care settings are typically filled by licensed providers (18,19). Little is known about how depression QI affects implementation outcomes, such as the use of therapeutic practices, among case managers who primarily are unlicensed providers (20).

This study addressed this gap by analyzing data from Community Partners in Care (CPIC), a cluster-randomized trial of two implementation conditions of a depression QI program (21). The two conditions were Community Engagement and Planning (CEP), an intervention to support networks of health and community-based agencies, and Resources for Services (RS), an intervention that provides technical assistance to individual agencies. Following community feedback, health and community-based programs were included as sites for the depression QI programs (21). Inclusion of unlicensed in addition to licensed case managers was motivated by the communities’ expressed awareness of shortages of mental health professionals in underresourced communities (22,23) and by a desire to increase provider diversity and community trust. Chung and others (24) reported that compared with RS, the CEP condition was associated with greater increases in participation in depression training sessions among staff estimated as eligible by participating program administrators. However, changes in practice outcomes for predominantly unlicensed case managers were not reported.

We sought to replicate findings of increased training participation associated with CEP compared with RS among case managers who were participating in a provider outcomes substudy. We also examined the interventions’ effects on case managers’ use of depression case management and therapeutic problem solving and on the amount of time they spent delivering community services (primary outcomes), as well as on their depression knowledge, attitudes, and perceived system barriers (secondary outcomes). We hypothesized that case managers in programs assigned to CEP would participate more in training activities, spend more time providing community services, and report greater use of depression case management and therapeutic services compared with case managers in programs assigned to RS. We expected that CEP would have greater effects than RS on improving depression knowledge and attitudes and on reducing perceived system barriers, owing to CEP’s greater focus on network development.

Methods

Data were from the provider substudy of CPIC, a group-level, randomized trial implemented by using community-partnered participatory research (CPPR) (25,26), which supports academic and community partners as equal decision makers in research design and implementation (27). As described elsewhere (21,24,28), the study was conducted in South Los Angeles, with a population of roughly 1.5 million, and Hollywood–Metro Los Angeles, with a population of roughly 500,000—both areas with high rates of unemployment, homelessness, and lack of insurance (29).

The interventions represented two ways of implementing a depression QI training program based on the collaborative care model (30–32) that was adapted for licensed and unlicensed providers (33,34) from health and community-based services programs. Toolkits for case managers included guidelines for case management, depression screening, care coordination, outreach strategies, problem-solving therapy, behavioral management, and activation skills. Materials were introduced in kick-off conferences prior to enrollment and randomization and were available to participants in both conditions as hard copies, on flash drives, and on a Web site (www.communitypartnersincare.org/community-engagement/cep/). The interventions encouraged, but did not require, use of these resources by eligible providers.

The Interventions

CEP invited participating program administrators to attend two-hour, bimonthly council meetings for four months to adapt depression QI toolkits to their community and collaborate as a network, following a workbook based on principles of CPPR. Councils were asked to develop and implement a written plan for adaptation of the toolkit, including training and monitoring practices, supported by $15,000 from the study. Final plans featured conferences, follow-up with programs, telephone and Webinar supervision by intervention experts for cognitive-behavioral therapy (CBT) and case management, and innovations such as provider self-care and depression book clubs.

CEP case manager training sessions were co-led by academic and community leaders. Eligible providers were invited to attend half-day or all-day conferences as well as follow-up and make-up sessions at individual programs. The six-hour case manager training sessions reviewed the study’s purpose, defined terms, and presented resources. Participants were taught about client engagement and outreach, use of the Patient Health Questionnaire–9 (PHQ-9) for depression screening, behavioral activation, making referrals, and problem solving, all reinforced by role-play. Participation in other components, such as medication management, was encouraged. Telephone supervision of case managers (two to three sessions per community) by intervention experts was offered.

The RS intervention offered outreach and technical assistance to individual programs by using a “train-the-trainer” approach. A team made up of psychiatrists, a nurse care manager, a CBT trainer, a QI expert, support staff, and a community-engagement specialist offered 12 Webinars lasting 90 to 120 minutes on team management, CBT, care management, and patient education as well as visits to primary care sites to support medication management. Case managers in each community received four Webinars, each lasting one to two hours. Topics included client engagement, depressive symptom recognition, screening using the PHQ-9, making referrals, and use of problem-solving strategies.

Participants

County lists and community nominations were used to identify the names of 149 agencies for possible inclusion. Of those, 29 were ineligible, 41 refused, and 19 could not be reached. Among 60 eligible participating agencies, there were 194 programs, from which 133 potentially eligible programs (serving ≥15 clients per week, having one or more staff, and not focusing on psychotic disorders or home services) were randomly assigned to the CEP or RS intervention (21).

Within each community, programs (or clusters of smaller programs grouped into single units) were paired on the basis of location, service sector, size, population served, and other program characteristics; two larger agencies each constituted their own stratum. Within pairs, one program or cluster was randomly assigned to CEP and the other to RS (21,24,35).
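
To make the pair-matched, cluster-level assignment concrete, the following is a minimal sketch of the general approach, assuming hypothetical program names and pairings; it does not reproduce the study’s actual randomization procedure.

```python
import random

# Hypothetical program units, already grouped into matched pairs by community,
# service sector, size, and population served (the pairing step is not shown).
matched_pairs = [
    ("Program A", "Program B"),
    ("Program C", "Program D"),
]

def assign_pairs(pairs, seed=0):
    """Within each matched pair, randomly assign one unit to CEP and the other to RS."""
    rng = random.Random(seed)
    assignment = {}
    for first, second in pairs:
        if rng.random() < 0.5:
            assignment[first], assignment[second] = "CEP", "RS"
        else:
            assignment[first], assignment[second] = "RS", "CEP"
    return assignment

print(assign_pairs(matched_pairs))
```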

At site visits to confirm eligibility and participation, 20 programs were ineligible, 18 programs refused to participate, and 95 (84% of the 113 eligible programs) enrolled. [A flow diagram summarizing provider participation throughout the study is available as an online supplement to this article.] Site visits were conducted by staff blinded to program assignment (21). Program administrators were informed of intervention status by letter prior to screening. Zip code–level data for the neighborhoods of participating and nonparticipating programs were comparable (21).

Provider Outcomes Substudy

In eligible programs having more than one permanent staff member in addition to the administrator (N=84), providers with direct patient contact, including volunteers, were invited to participate in the provider substudy through agency presentations and distribution of recruitment packets by administrators as well as through telephone follow-up and study site visits. The goal was to achieve a baseline sample of 300 providers and a one-year follow-up sample of 200 providers. From a pool of 370 providers who had provided verbal and written consent at baseline, 326 (88%) from 77 programs completed baseline surveys. New providers were permitted to enter after baseline. At one-year follow-up, 392 providers from 84 programs were approached for the online survey; 92 had left the program, one was on medical leave, one was deceased, and 297 were considered eligible. Of the 297 eligible providers, 237 (80%) participated, representing 75 of the 84 eligible programs. The sample for this substudy included 117 case managers (CEP, N=58; RS, N=59) from 52 programs. Institutional review boards at the RAND Corporation, the University of California, Los Angeles, and participating agencies approved this study, which was registered as a clinical trial after baseline enrollment.

Measures

The main independent variable was program intervention status (CEP or RS). Service sector (health or community-based program) was assigned for each program on the basis of the services it provided. Other measures were obtained from baseline and follow-up surveys of providers, and sign-in logs were used to document participation in study-provided training. Self-report measures were based in part on measures used in the Partners in Care study, adapted for licensed and unlicensed providers (36). Standardized alpha coefficients were calculated for scales with two or more items. The relation of self-reported training participation to the training logs was evaluated as an indicator of validity for that item. We also examined the pattern of correlations among provider outcomes as a preliminary indicator of convergent and discriminant validity [see online supplement], expecting higher agreement among measures related to depression practices and lower agreement between those measures and measures not directly reflecting use of depression techniques (for example, attitudes, knowledge, and system barriers).
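
For reference, the standardized alpha for a multi-item scale is the usual function of the number of items and their average inter-item correlation; the general form below is supplied by us, not quoted from the article.

```latex
\alpha_{\mathrm{std}} = \frac{K\,\bar{r}}{1 + (K - 1)\,\bar{r}},
\quad \text{where } K \text{ is the number of items and } \bar{r} \text{ the mean inter-item correlation.}
```

For example, nine items with an average inter-item correlation of about .59 give an alpha of approximately .93, the same order as the coefficient reported below for the depression care techniques scale.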

Sample characteristics.

Case manager characteristics included age, sex, education, race-ethnicity, and license status.

Primary outcomes.

Participation in training, including study-provided and in-house training events, was assessed by sign-in logs or by self-report at follow-up.

Depression care techniques were measured by the mean score for nine items assessing how often respondents performed the following tasks in the past six months for people with symptoms of depression: encourage positive thinking, discuss costs of alternative mental health treatments, encourage pleasurable activities, discuss ways to improve social skills, determine depression treatment preferences, recommend ways to take care of oneself, reframe or clarify the individual’s problems, discuss benefits of treatments, and help the individual feel better about his or her life. Responses were rated on a 5-point scale ranging from 1, never, to 5, always (α=.929). Responses by case managers who did not provide services for depressed clients were set to “never.”

Depression case management was measured by the mean score for five items assessing how often respondents who had provided services for depressed clients in the past six months had performed the following tasks: explain what depression is, ask the individual what he or she thinks depression is, ask about prior treatment, make a referral, and ask about barriers to depression care. Responses were rated on a 5-point scale ranging from 1, never, to 5, always (α=.917). Responses by case managers who did not provide services for depressed clients were set to “never.”

As a measure of provision of community services, respondents were asked to indicate on a 6-point ranked scale how many hours in a typical week they spent providing services to individuals in community settings, with 0 indicating 0 hours; 1, 1–10 hours; 2, 11–20 hours; 3, 21–30 hours; 4, 31–40 hours; and 5, >40 hours.
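
As an illustration of how these primary outcome scores could be constructed from item-level survey data, the sketch below uses hypothetical column names and toy values; it is not the study’s scoring code.

```python
import pandas as pd

# Hypothetical survey extract: nine technique items rated 1 (never) to 5 (always),
# a flag for whether the respondent served depressed clients in the past six months,
# and weekly hours of community-based service. Column names are illustrative.
df = pd.DataFrame({
    "served_depressed_clients": [True, False, True],
    "tech_items": [[4, 5, 3, 4, 4, 5, 3, 4, 4], None, [2, 2, 3, 1, 2, 2, 3, 2, 2]],
    "weekly_community_hours": [12, 0, 35],
})

def technique_score(row):
    # Respondents who did not serve depressed clients are scored "never" (1) on all items.
    items = row["tech_items"] if row["served_depressed_clients"] else [1] * 9
    return sum(items) / len(items)

def hours_category(hours):
    # 6-point ranked scale: 0=0 hours, 1=1-10, 2=11-20, 3=21-30, 4=31-40, 5=>40.
    if hours == 0:
        return 0
    return min(5, (hours - 1) // 10 + 1)

df["depression_care_techniques"] = df.apply(technique_score, axis=1)
df["community_services_provision"] = df["weekly_community_hours"].apply(hours_category)
print(df[["depression_care_techniques", "community_services_provision"]])
```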

Secondary outcomes.

Perceived depression knowledge was assessed by the mean score for agreement with three items adapted from Partners in Care (“Depression is a medical condition,” “Depression runs in families,” and “Depression can cause physical changes like aches and pains”) (10,36). Responses were rated on a 5-point scale ranging from 1, strongly agree, to 5, strongly disagree (α=.627).

Perceived depression skill was assessed by the mean score for seven items assessing the following specific skills: case finding, depression screening with a standardized instrument, educating individuals or families about depression, depression counseling, referral to mental health specialty care, providing social support for depression (support groups), and engaging in community outreach for depression. Responses were rated on a 4-point scale ranging from 1, not at all skilled, to 4, very skilled (α=.890).

Personal depression stigma was assessed by the mean score for agreement with three items adapted from Link’s Devaluation and Discrimination Scale (37) (“I have no patience with a person who is always feeling 'blue' or depressed,” “I would be embarrassed if people thought I was depressed,” and “Most people think less of a person who has been depressed”). Responses were rated on a 5-point scale ranging from 1, strongly agree, to 5, strongly disagree (α=.579).

Perceived system barriers were assessed by a count of four items assessing the extent to which optimal depression care services were limited in the past six months by difficulty obtaining treatment; unavailability of mental health professionals; poor reimbursement, limited insurance, or other benefits; and other barriers. Responses were dichotomized as limited a great deal versus limited somewhat or not limited.

As expected, measures of depression care techniques and care management were strongly positively correlated, supporting convergent validity [see online supplement]. Measures of depression knowledge and attitudes were not significantly associated with depression care techniques, supporting discriminant validity. Further, perception of depression skill was positively associated with depression care techniques and depression case management. Personal depression stigma was positively associated with depression case management, and community services provision was associated with number of system barriers.

Analysis Plan

We conducted intent-to-treat, comparative-effectiveness analyses of outcomes for case managers in the provider substudy with one-year follow-up data. We used logistic regression models for dichotomous variables and multiple linear regression models for continuously scaled variables, both adjusted for baseline status of the dependent variable, sector (health versus social-community), and provider type (licensed versus unlicensed). There was no baseline status for training participation. We compared baseline characteristics of case managers by intervention status.
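
As an illustration of this modeling approach (not the study’s actual code), the sketch below uses Python’s statsmodels with hypothetical file and variable names; survey weights, multiple imputation, and program-level clustering are sketched separately below.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per case manager, with intervention arm,
# sector, license status, and baseline and follow-up scores. Names are illustrative.
df = pd.read_csv("case_managers.csv")  # assumed file

# Continuous outcome: follow-up score adjusted for baseline status, sector, and provider type.
linear = smf.ols(
    "techniques_followup ~ C(arm, Treatment('RS')) + techniques_baseline"
    " + C(sector) + C(licensed)",
    data=df,
).fit()

# Dichotomous outcome (training participation): logistic regression with no baseline
# adjustment, because there was no baseline measure of participation.
logit = smf.logit(
    "trained ~ C(arm, Treatment('RS')) + C(sector) + C(licensed)",
    data=df,
).fit()

print(linear.summary())
print(logit.summary())
```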

We used an extended hot-deck technique to impute missing values for nonresponse, using five imputed data sets for baseline and follow-up responses and multiple imputation inference for all analyses (21,38). To control for potential nonresponse bias, we used nonresponse weighting (39) to address missing data for the 20% of providers who did not complete one-year follow-up. The objective of nonresponse weighting is to extrapolate from the observed one-year sample to the original eligible sample. Nonresponse weights were constructed by fitting logistic regression models to predict follow-up status from baseline characteristics. Separate models were fitted for each intervention group. The final logistic regression model included predictors that were significant (p<.10) for either CEP or RS groups (service sector [health vs. community based], education, and baseline perception of depression attitude and skill). Five versions of the weight were created corresponding to imputed data sets (39,40). Significance of comparisons by intervention status is based on regression coefficients. Results are presented as between-group differences for linear regression and as odds ratios (ORs) with 95% confidence intervals (CIs) for logistic regression.
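
A minimal sketch of the nonresponse-weighting step, again with hypothetical file and column names; in the study the models were fit separately by intervention arm and repeated across the five imputed data sets.

```python
import pandas as pd
import statsmodels.formula.api as smf

base = pd.read_csv("baseline.csv")  # assumed file with a follow-up completion indicator

weights = pd.Series(index=base.index, dtype=float)
for arm, grp in base.groupby("arm"):
    # Model the probability of completing one-year follow-up from baseline characteristics,
    # separately for CEP and RS.
    model = smf.logit(
        "completed_followup ~ C(sector) + C(education) + attitude_baseline + skill_baseline",
        data=grp,
    ).fit(disp=0)
    # Inverse-probability-of-response weights extrapolate respondents to the eligible sample.
    weights.loc[grp.index] = 1.0 / model.predict(grp)

base["nonresponse_weight"] = weights.where(base["completed_followup"] == 1)
```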

Average results for an intervention group were adjusted for all covariates by using standardized predictions generated from the fitted regression model (39). To account for client clustering within programs, the variance estimation was based on the Taylor series linearization method (40). All analyses were conducted by using SUDAAN, version 11.0 (Software for the Statistical Analysis of Correlated Data [www.rti.org/sudaan/]), which contains a design specification for sampling with replacement in the first stage of sample selection (programs), accounting for attrition weights.
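
The standardized-prediction step can be approximated as shown below, assuming `linear` is a fitted follow-up model and `df` the weighted analytic file from the earlier sketches; SUDAAN’s Taylor-series linearization is not reproduced, but program-level cluster-robust standard errors provide a comparable design-based adjustment.

```python
import numpy as np

def standardized_means(model, data, arm_col="arm", weight_col="nonresponse_weight"):
    """Average model predictions with every case manager set to each arm in turn."""
    means = {}
    for arm in ("CEP", "RS"):
        counterfactual = data.copy()
        counterfactual[arm_col] = arm  # hold covariates at observed values, switch arm only
        means[arm] = np.average(model.predict(counterfactual), weights=data[weight_col])
    return means

# Approximate design-based variance with cluster-robust errors at the program level
# (using statsmodels as in the earlier sketch):
# linear = smf.ols(formula, data=df).fit(cov_type="cluster",
#                                        cov_kwds={"groups": df["program_id"]})
```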

For primary outcomes, to account for multiple comparisons, we calculated the false discovery rate (FDR), comparing observed significance findings with expected order statistics from a uniform distribution (41).
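
The Benjamini-Hochberg adjustment underlying the FDR can be sketched as follows; the input p values are illustrative, and the study’s exact computation (41) is not reproduced.

```python
import numpy as np

def benjamini_hochberg(p_values):
    """Return Benjamini-Hochberg adjusted p values for a family of tests."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p value downward, then cap at 1.
    adjusted = np.clip(np.minimum.accumulate(ranked[::-1])[::-1], 0, 1)
    out = np.empty(m)
    out[order] = adjusted
    return out

print(benjamini_hochberg([0.001, 0.009, 0.031]))  # illustrative inputs only
```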

Results

Demographic and Descriptive Characteristics

As shown in Table 1, 47% of case managers self-identified as Hispanic, 45% as African American, 6% as non-Hispanic white, and 2% as Asian, Pacific Islander, or other race-ethnicity. The mean age was 43.6 years, and 71% were female. Over 70% were unlicensed, and over half (62%) worked in the health sector. There were no significant baseline differences by intervention status, except that CEP participants had higher mean scores for personal depression stigma compared with RS participants (p<.05).

TABLE 1. Baseline characteristics of 117 case managers at programs that implemented depression care quality improvement by using Resources for Services (RS) or Community Engagement and Planning (CEP)a

Characteristic | Total (N=117), N (%) or M±SD | RS (N=59) | CEP (N=58) | χ2 | df | p
Health sector | 69 (62) | 40 (70) | 29 (54) | .9 | 1 | .343
Unlicensed | 87 (73) | 45 (76) | 42 (70) | .17 | 1 | .676
Age (M±SD) | 43.6±1.6 | 43.4±2.5 | 43.7±2.1 | .01 | 1 | .922
Female | 82 (71) | 38 (65) | 44 (76) | .97 | 1 | .325
Race-ethnicity | | | | .36 | 3 | .948
 Hispanic | 55 (47) | 26 (45) | 29 (49) | | |
 Black or African American | 52 (45) | 27 (46) | 25 (45) | | |
 Non-Hispanic white | 7 (6) | 4 (7) | 3 (5) | | |
 Otherb | 3 (2) | 2 (3) | 1 (2) | | |
Some college or above | 105 (90) | 53 (90) | 52 (90) | 0 | 1 | .985
Depression care techniques (M±SD)c | 2.6±.1 | 2.5±.2 | 2.7±.2 | .38 | 1 | .537
Depression case management (M±SD)d | 2.5±.1 | 2.6±.2 | 2.5±.2 | .46 | 1 | .499
Community services provision (M±SD)e | 1.4±.2 | 1.1±.2 | 1.6±.3 | 3.07 | 1 | .08
Perception of depression knowledge (M±SD)f | 2.1±.1 | 2.0±.1 | 2.1±.1 | .45 | 1 | .502
Perception of depression skill (M±SD)g | 2.2±.1 | 2.3±.1 | 2.2±.1 | .39 | 1 | .533
Personal depression stigma (M±SD)h | 3.8±.1 | 3.7±.1 | 4.0±.1 | 5.17 | 1 | .023
N of system barriers (M±SD)i | .8±.1 | .9±.2 | .7±.2 | .3 | 1 | .582

a Data were multiply imputed and weighted for eligible sample. Chi-square test was used for a comparison between the two groups, accounting for the design effect of the cluster randomization. Percentages may not add to 100% because of rounding.

b Asian, Pacific Islander, or other

c Possible scores range from 1 to 5, with higher scores indicating greater use of depression care techniques.

d Possible scores range from 1 to 5, with higher scores indicating greater use of depression case management tasks.

e Possible scores range from 0 to 5, with higher scores indicating more hours providing services.

f Possible scores range from 1 to 5, with lower scores indicating greater depression knowledge.

g Possible scores range from 1 to 4, with higher scores indicating greater perception of skills.

h Possible scores range from 1 to 5, with higher scores indicating less depression stigma.

i Possible scores range from 0 to 4, indicating the number of barriers to optimal depression services out of a list of 4.


Outcomes

As shown in Table 2, 27% of case managers in RS and 74% in CEP participated in CPIC-sponsored training (OR=7.78, p<.001). CEP case managers reported greater use of depression care techniques compared with RS case managers (adjusted means of 3.1 and 2.8, respectively; difference=.32, CI=.03–.61, p<.05). CEP case managers also had higher mean scores for provision of community services compared with RS case managers (1.2 and .7, respectively; difference=.51, CI=.13–.89, p<.05). Findings remained significant after applying the FDR correction for multiple comparisons. Mean scores for depression case management did not differ significantly between CEP and RS participants.

TABLE 2. Training participation rates and care practices at one-year follow-up among 117 case managers at programs that implemented depression care quality improvement by using Resources for Services (RS) or Community Engagement and Planning (CEP)

Variable | RS, unadjusteda | CEP, unadjusteda | p | RS, adjustedb estimate (95% CI) | CEP, adjustedb estimate (95% CI) | CEP vs. RS (95% CI) | t | df | p | p (adj.)c
Training participation | N=55; 15 (27%) | N=57; 42 (74%) | <.001 | 27% (15.9 to 41.8) | 73.9% (57.9 to 85.4) | OR=7.78 (2.9 to 20.89) | 4.2 | 50 | <.001 | <.001
Depression care techniques scored | N=41; M±SD=2.5±1.1 | N=44; M±SD=3.0±1.1 | .04 | 2.8 (2.5 to 3.0) | 3.1 (2.9 to 3.3) | .32 (.03 to .61) | 2.2 | 57 | .031 | .042
Depression case management scoree | N=47; M±SD=2.8±1.3 | N=50; M±SD=2.8±1.2 | .893 | 2.8 (2.5 to 3.1) | 2.9 (2.6 to 3.1) | .05 (–.30 to .41) | .3 | 43 | .767 | .767
Community services provision scoref | N=58; M±SD=.6±.9 | N=58; M±SD=1.3±1.4 | .003 | .7 (.4 to .9) | 1.2 (.9 to 1.5) | .51 (.13 to .89) | 2.7 | 62 | .009 | .018

a Raw data without weighting or imputation. Total Ns reflect number of respondents at one-year follow-up.

b Adjusted analyses used multiply imputed data (N=117); data were weighted for eligible sample for enrollment; a logistic regression model for the binary variable (training participation) and linear regression models for continuous variables adjusted for baseline status of the dependent variable, sector (health care versus social-community), and provider type (licensed vs. unlicensed) and accounted for the design effect of the cluster randomization.

c Adjusted by the False Discovery Rate procedure

d Possible scores range from 1 to 5, with higher scores indicating greater use of depression care techniques.

e Possible scores range from 1 to 5, with higher scores indicating greater use of depression case management tasks.

f Possible scores range from 0 to 5, with higher scores indicating more hours providing services.


As shown in Table 3, there were no significant differences (p≥.05) in secondary outcomes between CEP and RS participants.

TABLE 3. Scores for depression knowledge and attitudes at one-year follow-up among 117 case managers at programs that implemented depression care quality improvement by using Resources for Services (RS) or Community Engagement and Planning (CEP)

Variable | Total N | RS, unadjustedb (M±SD) | CEP, unadjustedb (M±SD) | p | RS, adjusteda M (95% CI) | CEP, adjusteda M (95% CI) | Between-group differencec (95% CI) | t | df | p
Perceived depression knowledged | 104 | 2.0±.7 | 2.1±.9 | .366 | 2.0 (1.8 to 2.2) | 2.1 (1.8 to 2.4) | .09 (–.25 to .42) | .52 | 65 | .605
Perception of depression skille | 97 | 2.2±.7 | 2.3±.7 | .709 | 2.3 (2.2 to 2.5) | 2.4 (2.2 to 2.6) | .07 (–.15 to .3) | .67 | 29 | .505
Personal depression stigmaf | 112 | 3.8±.7 | 4.0±.6 | .061 | 3.8 (3.7 to 3.9) | 3.9 (3.8 to 4.0) | .08 (–.1 to .25) | .92 | 35 | .366
N of system barriersg | 57 | .7±1.2 | .1±.3 | .008 | .7 (.4 to .9) | .3 (.1 to .6) | –.33 (–.7 to .05) | –1.76 | 41 | .085

a Adjusted analyses used multiply imputed data (N=117). Data were weighted for eligible sample for enrollment; linear regression models adjusted for baseline status of the dependent variable, sector (health care vs. social-community), and provider type (licensed versus unlicensed) and accounted for the design effect of the cluster randomization.

b Raw data without weighting or imputation. The total N reflects the number of respondents at one-year follow-up.

c Difference between the groups’ estimated mean scores

d Possible scores range from 1 to 5, with lower scores indicating greater depression knowledge.

e Possible scores range from 1 to 4, with higher scores indicating greater perception of skills.

f Possible scores range from 1 to 5, with higher scores indicating less depression stigma.

g Possible scores range from 0 to 4, indicating the number of barriers to optimal depression services out of a list of 4.


Discussion

We examined the effects of two interventions to implement depression QI across underresourced communities: CEP, which supports agency networks, and RS, which provides technical assistance to individual agencies. Case managers were predominantly unlicensed and from racial-ethnic minority groups, representing a resource for expanding workforce diversity given known disparities in access to services. We found that the percentage of case managers in CEP programs who participated in training was nearly three times as high as the percentage in RS programs, confirming similar findings for all eligible providers (24). Compared with RS case managers, CEP case managers reported delivering community services for a greater number of hours and greater use of therapeutic problem-solving skills for depression. However, we found no differences between interventions in case management tasks, which may be standard competencies of case managers or may be easier to influence with technical assistance (42). The stronger CEP effect on use of therapeutic strategies may reflect both that those skills are not considered standard competencies for case managers and the role modeling and supervision provided by CEP. Contrary to expectations, we did not find significant intervention effects on attitudes or perceived skill. CEP participants demonstrated greater knowledge and reported fewer system barriers compared with RS participants, but the differences were of borderline significance, suggesting areas for future research.

Others have noted that providing staff with information prior to training can improve self-efficacy and motivation (43), which can increase participation (44). In CPIC, providers in both interventions could participate in conferences providing an overview of depression toolkits prior to randomization, representing equivalent preassignment and preintervention exposure. The RS intervention supported knowledge exchange, whereas CEP supported in-person guidance, skills building (self-efficacy), modeling (observational learning), peer support, networking, and collaboration. Under self-efficacy theory (45), these CEP features may have increased case managers’ confidence in initiating and sustaining behavior change rather than avoiding new tasks (46). Torrey and others (47), for example, found that clinicians were motivated to change practice if the change was perceived as clinically helpful and reinforced through observation, supervision, and feedback. Further, the CEP focus on administrator involvement and community feedback may have enhanced collective efficacy (48,49) to motivate an expansion of case managers’ roles.

Future research should focus on ways to further build and sustain capacity for depression services, as required for medical homes (16,17), given that CEP supported a case manager workforce predominantly from racial-ethnic minority groups in underresourced communities. Although we did not focus on the link between provider change and client outcomes, we have previously reported that CEP improved client mental health–related quality of life and reduced behavioral health hospitalizations over six to 12 months compared with RS, with significance sensitive to alternative statistical methods (24). In addition, it will be important to determine whether differences between interventions in provider outcomes resulted from differences in training participation.

Several limitations should be acknowledged. Data were from provider self-report in two communities, meriting replication with service records or observation and further research on the validity and content of provider outcome measures. Many of the survey questions were worded in one direction (for example, so that a higher score represented more appropriate practice). Knowledge and attitude measures could be expanded to include theoretical constructs (50–52) related to intervention adoption and sustainability. The case manager sample was modest for detecting the small effects that are typical of public health implementation strategies. There was attrition at follow-up, primarily because of turnover, which may be expected in underresourced communities and which was accounted for with weights.

Conclusions

This study suggests that depression QI programs can feasibly include predominantly unlicensed, racially and ethnically diverse case managers. Case managers in CEP had greater participation in training, spent more time delivering community services, and made more frequent use of therapeutically oriented problem-solving skills compared with case managers in RS. As one CEP community leader noted, “This offers hope for underresourced communities.” It may be important to identify policy mechanisms to sustain such efforts by working with the Health Resources and Services Administration, the Substance Abuse and Mental Health Services Administration, the Centers for Medicare and Medicaid Services, and health plans or foundations to certify unlicensed case managers as health workers, develop formal partnerships, and access funding.

Dr. Landry is with the Center for Health Services and Society, University of California, Los Angeles (UCLA), Los Angeles (e-mail: ). Dr. Jackson is with the Department of Social Welfare, Luskin School of Public Affairs, UCLA, Los Angeles. Dr. Tang, Dr. Miranda, Dr. Chung, and Dr. Wells are with the Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, UCLA, Los Angeles. Dr. Tang, Dr. Miranda, and Dr. Chung are also with the Center for Health Services and Society, Semel Institute for Neuroscience and Human Behavior, UCLA, Los Angeles, and Dr. Wells is also with the RAND Corporation, Santa Monica, California. Ms. Jones is with Healthy African American Families II, Los Angeles. Dr. Ong is with the Department of Medicine, David Geffen School of Medicine, UCLA, and the U.S. Department of Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles.

This study was funded by grants from the Robert Wood Johnson Foundation (64244); the National Institute on Minority Health and Health Disparities (R01MD007721); and, for the parent Community Partners in Care (CPIC) study, the National Institute of Mental Health (R01MH078853, P30MH082760, and P30MH068639), the California Community Foundation (CMCH-12-97088), the National Library of Medicine (G08LM011058), and the NIH/National Center for Advancing Translational Science (UCLA CTSI UL1TR000124). The study is registered at ClinicalTrials.gov (NCT01699789).

The authors report no financial relationships with commercial interests.

The authors thank the RAND Corporation, UCLA Semel Institute, and the Los Angeles County Departments of Mental Health, Public Health, and Health Services for institutional support. They also thank the 95 participating health care and community-based agencies, the CPIC council, and academic and community recipients of the Association of Clinical and Translational Science Team Science Award (2014) and Campus-Community Partnerships for Health 2015 Annual Award for CPIC.

References

1 Bruce ML, Smith W, Miranda J, et al.: Community-based interventions. Mental Health Services Research 4:205–214, 2002

2 Luber MP, Hollenberg JP, Williams-Russo P, et al.: Diagnosis, treatment, comorbidity, and resource utilization of depressed patients in a general medical practice. International Journal of Psychiatry in Medicine 30:1–13, 2000

3 Greenberg PE, Kessler RC, Birnbaum HG, et al.: The economic burden of depression in the United States: how did it change between 1990 and 2000? Journal of Clinical Psychiatry 64:1465–1475, 2003

4 Mark TL, Shern DL, Bagalman JE, et al.: Ranking America’s Mental Health: An Analysis of Depression Across the States. Alexandria, Va, Mental Health America, 2007

5 Kessler RC, Chiu WTC, Demler O, et al.: Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry 62:617–627, 2005

6 Depression Fact Sheet. Geneva, World Health Organization, April 2016. http://www.who.int/mediacentre/factsheets/fs369/en/

7 Williams DR, González HM, Neighbors H, et al.: Prevalence and distribution of major depressive disorder in African Americans, Caribbean blacks, and non-Hispanic whites: results from the National Survey of American Life. Archives of General Psychiatry 64:305–315, 2007

8 Riolo SA, Nguyen TA, Greden JF, et al.: Prevalence of depression by race/ethnicity: findings from the National Health and Nutrition Examination Survey III. American Journal of Public Health 95:998–1000, 2005

9 Jones D, Franklin C, Butler BT, et al.: The Building Wellness Project: a case history of partnership, power sharing, and compromise. Ethnicity and Disease 16(suppl 1):S54–S66, 2006

10 Wells KB: The design of Partners in Care: evaluating the cost-effectiveness of improving care for depression in primary care. Social Psychiatry and Psychiatric Epidemiology 34:20–29, 1999

11 Glasgow RE, Lichtenstein E, Marcus AC: Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health 93:1261–1267, 2003

12 Wells KB, Sherbourne CD, Miranda J, et al.: The cumulative effects of quality improvement for depression on outcome disparities over 9 years: results from a randomized, controlled group-level trial. Medical Care 45:1052–1059, 2007

13 Suite DH, La Bril R, Primm A, et al.: Beyond misdiagnosis, misunderstanding and mistrust: relevance of the historical perspective in the medical and mental health treatment of people of color. Journal of the National Medical Association 99:879–885, 2007

14 Proctor EK, Landsverk J, Aarons G, et al.: Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research 36:24–34, 2009

15 Miranda J, McGuire TG, Williams DR, et al.: Mental health in the context of health disparities. American Journal of Psychiatry 165:1102–1108, 2008

16 DeSilva M, Samele C, Saxena S, et al.: Policy actions to achieve integrated community-based mental health services. Health Affairs 33:1595–1602, 2014

17 Katzen A, Morgan M: Affordable Care Act Opportunities for Community Health Workers. Boston, Center for Health Law & Policy Innovation, Harvard Law School, 2014

18 Gilbody S, Bower P, Fletcher J, et al.: Collaborative care for depression: a cumulative meta-analysis and review of longer-term outcomes. Archives of Internal Medicine 166:2314–2321, 2006

19 Taylor EF, Machta RM, Meyers DS, et al.: Enhancing the primary care team to provide redesigned care: the roles of practice facilitators and care managers. Annals of Family Medicine 11:80–83, 2013

20 Fixsen DL, Naoom SF, Blase KA, et al.: Implementation Research: A Synthesis of the Literature. Tampa, University of South Florida, 2005

21 Wells KB, Jones L, Chung B, et al.: Community-partnered cluster-randomized comparative effectiveness trial of community engagement and planning or resources for services to address depression disparities. Journal of General Internal Medicine 28:1268–1278, 2013

22 Boer P, Wiersma D, Russo S, et al.: Paraprofessionals for anxiety and depressive disorders. Cochrane Reviews 2:CD004688, 2005

23 Rivelli SK, Shirey KG: Prevalence of psychiatric symptom/syndromes in medical settings; in Integrated Care in Psychiatry: Redefining the Role of Mental Health Professionals in the Medical Setting. New York, Springer, 2014

24 Chung B, Ngo VK, Ong MK, et al.: Participation in training for depression care quality improvement: a randomized trial of community engagement or technical support. Psychiatric Services 66:831–839, 2015

25 Jones L, Wells K: Strategies for academic and clinician engagement in community-participatory partnered research. JAMA 297:407–410, 2007

26 Wells K, Jones L: “Research” in community-partnered, participatory research. JAMA 302:320–321, 2009

27 Wallerstein NB, Duran B: Using community-based participatory research to address health disparities. Health Promotion Practice 7:312–323, 2006

28 Chung B, Ong M, Ettner SL, et al.: 12-month outcomes of community engagement versus technical assistance to implement depression collaborative care: a partnered, cluster, randomized, comparative effectiveness trial. Annals of Internal Medicine 161(suppl):S23–S34, 2014

29 Key Indicators of Health. Los Angeles County, Department of Health Services, 2009

30 Wells KB, Sherbourne C, Schoenbaum M, et al.: Impact of disseminating quality improvement programs for depression in managed primary care: a randomized controlled trial. JAMA 283:212–220, 2000

31 Miranda J, Duan N, Sherbourne C, et al.: Improving care for minorities: can quality improvement interventions improve care and outcomes for depressed minorities? Results of a randomized, controlled trial. Health Services Research 38:613–630, 2003

32 Unützer J, Katon W, Callahan CM, et al.: Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA 288:2836–2845, 2002

33 Wennerstrom A, Vannoy S, Allen C, et al.: Community-based participatory development of a community health worker mental health outreach role to extend collaborative care in post-Katrina New Orleans. Ethnicity and Disease 21:S1–45–51, 2011

34 Springgate B, Allen C, Jones C, et al.: Rapid community participatory assessment of health care in post-storm New Orleans. American Journal of Preventive Medicine 37:S237–S243, 2009

35 Murray D, Varnell SP, Blitstein J: Design and analysis of group-randomized trials: a review of recent methodological developments. American Journal of Public Health 94:423–432, 2004

36 Meredith LS, Jackson-Triche M, Duan N, et al.: Quality improvement for depression enhances long-term treatment knowledge for primary care clinicians. Journal of General Internal Medicine 15:868–877, 2000

37 Link BG, Struening EL, Rahav M, et al.: On stigma and its consequences: evidence from a longitudinal study of men with dual diagnoses of mental illness and substance abuse. Journal of Health and Social Behavior 38:177–190, 1997

38 Rubin D: Multiple Imputation for Nonresponse in Surveys. Hoboken, NJ, John Wiley and Sons, 1987

39 Korn E, Graubard B: Analysis of Health Surveys. Hoboken, NJ, Wiley-Interscience, 1999

40 Binder D: On the variances of asymptotically normal estimators from complex surveys. International Statistical Review/Revue Internationale de Statistique 51:279–292, 1983

41 Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological) 57:289–300, 1995

42 Nezu AM: Problem solving and behavior therapy revisited. Behavior Therapy 35:1–33, 2004

43 Wei-Tao T: Effects of training framing, general self-efficacy and training motivation on trainees’ training effectiveness. Personnel Review 35:51–65, 2004

44 Carlson D, Bozeman S, Kacmar DP, et al.: Training motivation in organizations: an analysis of individual-level antecedents. Journal of Managerial Issues 12:271–287, 2000

45 Bandura A: Self-Efficacy: The Exercise of Control. New York, WH Freeman, 1997

46 Parle M, Maguire P, Heaven C: The development of a training model to improve health professionals’ skills, self-efficacy and outcome expectancies when communicating with cancer patients. Social Science and Medicine 44:231–240, 1997

47 Torrey WC, Drake RE, Dixon L, et al.: Implementing evidence-based practices for persons with severe mental illnesses. Psychiatric Services 52:45–50, 2001

48 Ballesteros-Fernandez R, Nicolas-Diaz J, Bandura A: Determinants and structural relation of personal efficacy to collective efficacy. Applied Psychology 51:107–125, 2002

49 Bandura A: Exercise of human agency through collective efficacy. Current Directions in Psychological Science 9:75–78, 2000

50 Ajzen I: The theory of planned behavior. Organizational Behavior and Human Decision Processes 50:179–211, 1991

51 Montano DE, Kasprzyk D: Theory of reasoned action, theory of planned behavior, and the integrated behavioral model; in Health Behavior and Health Education: Theory, Research, and Practice. Edited by Glanz K, Rimer BK, Viswanath K. San Francisco, Jossey-Bass, 2008

52 Rubenstein LV, Mittman BS, Yano EM, et al.: From understanding health care provider behavior to improving health care: the QUERI framework for quality improvement. Medical Care 38:I129–I141, 2000