Special Article

Methodologies to Advance a “Science of How”: Identifying and Engaging Intervention Targets and Outcomes

Published online: https://doi.org/10.1176/appi.ps.202100202

Abstract

Objective:

Although implementation science has taken hold in many areas of psychiatric services research, a need remains for developing effective, low-cost interventions for specific subpopulations with mental health conditions. The experimental therapeutics approach has gained momentum as a framework for developing effective interventions. However, few studies have taken steps to rigorously apply experimental therapeutics. This article provides a blueprint for applying this approach.

Methods:

A focused literature review was conducted to document the frequency of the application of experimental therapeutics among articles published between 2011 and 2021 in some of the American Psychiatric Association’s journals. Independently of the review, the authors delineated a four-component approach for applying experimental therapeutics in research and present practical, innovative strategies to advance psychiatric services research.

Results:

The four-component approach includes outlining prerequisites, identifying target mechanisms, proposing intervention strategies to address target mechanisms, and using advanced analytic methods. The strategies described for each component are not exhaustive; rather, they suggest promising avenues for research that can lead to more effective interventions and deeper understanding of how, and for whom, an intervention works.

Conclusions:

The application of experimental therapeutics in psychiatric services research can lead to increased development, refinement, and implementation of effective interventions for specific populations or conditions.

The National Institute of Mental Health (1, 2) has advocated for the use of experimental therapeutics in its research program. The use of experimental therapeutics enables researchers to open the black box of interventions and illuminate how change in outcomes occurs. This approach requires the delineation of intervention components and the interrogation of presumed mediating targets addressed by those components (3). Raghavan et al. (4) have described the approach and decomposed it with examples, explicating a framework for moving forward. However, little guidance exists on how to apply experimental therapeutics outside medicine, psychology, and public health. This article begins with a focused literature review to document the frequency of research explicitly engaging experimental therapeutics in psychiatric services research as published in some of the American Psychiatric Association’s periodicals. We then describe a menu of methods to guide researchers who seek to adopt the approach.

HIGHLIGHTS

  • The stepped approach outlined here offers researchers practical ways to apply experimental therapeutics in psychiatric services research.

  • The stepped approach includes methods chosen for their practicality, usability, and focus on identifying mechanisms of change and intervention components in partnership with stakeholders, along with analytical exploration of the relevance of individual components to meaningful change.

  • The application of experimental therapeutics has the potential to move the field from a focus on whether a psychiatric intervention is effective to a focus on how the intervention affects a particular outcome and how to implement effective interventions in settings and communities—a focus on the “science of how.”

We reviewed articles published between 2011 and 2021 in some of the American Psychiatric Association’s periodicals (Psychiatric Services, The American Journal of Psychiatry, Psychiatric News, The Journal of Neuropsychiatry and Clinical Neurosciences, and Focus). Using the search term “experimental therapeutics,” we found 30 articles: 18 commentaries or editorials, 10 studies examining biological or neuroscientific phenomena, and two studies focused on the human or social sciences. Only five of the articles examined mediation empirically, and none applied qualitative developmental research for experimental therapeutics purposes. In the remainder of this article, we describe methods that can be used to apply the experimental therapeutics approach to gain new knowledge on how interventions work or do not work.

There is ample evidence of ineffective interventions, nonadherence to interventions, clinician resistance to evidence-based practices on the grounds of “fit” with client service plans, challenges of sustainability, implementation complexity, and excessive costs (5). Some of these problems can be addressed if researchers apply experimental therapeutics throughout the intervention development process. The methods we propose in this article are organized into four components (Figure 1), which researchers can draw on to embed experimental therapeutics in the development, refinement, and testing of interventions. These methods are not meant to be exhaustive; rather, our purpose is to describe strategies to move the field toward more acceptable, effective, and cost-efficient services and interventions.

FIGURE 1. Overview of the stepped methodological approach for applying experimental therapeutics

Stepped Approach to Applying Experimental Therapeutics

Component 1: Outline Prerequisites

Identify a public health concern and potential targets for change.

An efficient first step is to conduct a systematic review of what is known in order to identify the most pressing public health needs and potential targets (mechanisms) to resolve significant problems for a given population. Systematic reviews reduce redundancy and yield the knowledge needed to develop an initial conceptualization of the problem (6). Reviews that embed experimental therapeutics principles can point researchers toward effective or promising target mechanisms that are critical to understanding how an intervention affects outcomes and for whom (7). Systematic reviews can uncover targets found to be effective in improving outcomes as well as ineffective targets, thus directing teams to focus elsewhere (8–15). Finally, systematic reviews can discover differences in how individuals or groups respond to different intervention components or target mechanisms to help inform the tailoring of programs to the specific needs of subpopulations.

Incorporate genuine collaborative approaches.

Collaboration with stakeholders, such as community members and service users, is essential for long-term impact of interventions (16) and is instrumental throughout the process of applying experimental therapeutics principles to intervention development. This collaborative process includes formulating questions, identifying promising target mechanisms, and brainstorming intervention components. Researchers and advocates are calling for a deepening of engagement with diverse stakeholders, including service users (17–20). Some authors (21, 22) have characterized stakeholder involvement in research as superficial, lacking rigor, and in need of conceptual frameworks. Studies that do not embrace genuine collaboration are at risk of tokenism (21) and are more likely to develop interventions and to focus on mechanisms that lack relevance (23). Importantly, effective collaborations are more likely to lead to sustainable interventions (24) and an increase in capacity among community leaders, who remain in the community after the research has been completed (24, 25).

One approach to expanding community- and stakeholder-engaged research is to enlarge practice-based research networks (PBRNs). PBRNs focus on deepening collaborations between researchers and provider participants in an effort to pose questions with concrete clinical implications (26, 27). PBRNs are a form of community-based participatory research and increasingly have included community members (28) as well as peer providers (29). Expanding PBRNs to include service users would be a natural next step and could lead to more clarity on how change in outcomes occurs and to an increase in power sharing. Another approach to deepen stakeholder engagement is to reconceptualize translational research from a pipeline to an interlocking loop model that centers service user involvement at every stage (17). Finally, measures to assess community and stakeholder engagement in research can promote accountability in research (30–34). These measures can examine the quality, depth, consistency, transparency, and impact of community and stakeholder engagement in research. Such approaches may reduce the possibility of research and interventions that lack relevance for end users.

Component 2: Identify Promising Target Mechanisms

Several methods are useful for developing a program evaluation framework and for identifying potential mediators of change in outcome(s). Our approach is focused primarily on mediators of change, because they are the presumed variables that lie on the causal pathway between treatments or interventions and outcome effects.

Conduct in-depth qualitative research.

Qualitative studies can open the black box of interventions by eliciting an understanding of the processes that underlie change in behavior, which can help identify promising mediators of change (35–37). Such studies use open-ended questions with extensive probing to uncover how, or in what ways, an intervention works or does not work. For example, in adult mental health care, research (38–42) has validated the use of peer support. Yet, the field is only beginning to uncover what it is that peers do to make an impact (43, 44). Qualitative studies eliciting information from individuals who receive or provide peer support can help uncover the specific mechanisms, or pathways, to behavior change.

Qualitative research is also crucial for the development of valid and reliable measures of mechanisms or mediators when they do not yet exist, including necessary adaptations when measures are not valid for particular subpopulations. For example, researchers (45) have documented the need for measures that have strong psychometric properties for mechanisms detailed in the research domain criteria matrix (e.g., loss and sustained threat). Researchers can conduct studies with focus groups and individual interviews (e.g., elicitation studies) to inform the development of items and scales that accurately and reliably measure mechanisms tailored to the population of interest (46). Such development could overcome this limitation of psychiatry research, and collaboration with psychometricians is needed to develop measures after specific mechanistic targets have been identified.
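Where such a measure is being drafted, one early psychometric check is internal consistency. The sketch below computes Cronbach’s alpha from simulated item responses; the construct, item count, and data are purely illustrative.

```python
# A minimal sketch of one early psychometric check for a newly drafted
# mechanism measure: Cronbach's alpha for internal consistency.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(200, 1))                # shared construct score
items = latent + 0.8 * rng.normal(size=(200, 5))  # 5 noisy indicator items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

In practice, such a check would follow the elicitation work described above and precede a fuller psychometric evaluation with the population of interest.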

Incorporate community-based system dynamics (CBSD).

CBSD is a process that can crowdsource intervention targets from community members (23, 47). It allows for the uncovering of relevant mechanisms through community engagement and enables the process of incorporating these mechanisms into system dynamic models. System dynamic models emphasize three stages: problem scoping and identification; core modeling, planning, and capacity building; and group model building workshops (23). Through such processes, system dynamic models construct representations of complex systems by using “stocks” (elements or properties within a system that can increase or decrease over time) and “flows” (changes in stocks over time) to identify the complex processes within the system (23). CBSD approaches both allow for identification of mediators or mechanisms from lived experience and collectively reduce a large set of possible mediators or mechanisms to a more manageable number suitable for an intervention or implementation study (48).
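To make the stock-and-flow vocabulary concrete, the toy simulation below (our illustration, not drawn from the cited CBSD studies) tracks a single stock, people engaged in services, with one inflow and one outflow; every parameter is invented.

```python
# A toy stock-and-flow simulation in the system dynamics style:
# one stock with an inflow (outreach) and an outflow (dropout).
engaged = 100.0          # stock: people currently engaged in services
outreach_rate = 12.0     # inflow: new engagements per month
dropout_frac = 0.08      # outflow: fraction of the stock lost per month
dt = 1.0                 # time step (months)

for month in range(24):
    inflow = outreach_rate                 # flow into the stock
    outflow = dropout_frac * engaged       # flow out of the stock
    engaged += (inflow - outflow) * dt     # Euler update of the stock

print(f"engaged after 24 months: {engaged:.1f}")
# The stock approaches equilibrium where inflow equals outflow:
# outreach_rate / dropout_frac = 150 people engaged.
```

In a real CBSD project, the model structure (which stocks, flows, and feedback loops matter) would come from group model building with community members rather than from the analyst alone.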

The CBSD approach has been used to help prioritize interventions and implementation targets for suicide prevention (49), to identify factors that lead to help seeking for behavioral health problems (50), and to uncover factors that impede mental health service utilization (51). These variables, uncovered through CBSD, are targets for which researchers can construct specific intervention and implementation strategies. Uncovering mechanisms through community engagement, rather than by listening solely to academic investigators, may be a more sustainable and valid way of eliciting mechanisms and outcomes of importance.

Use concept mapping.

Concept mapping is a mixed-methods participatory group approach. This approach combines methods to represent perspectives of the group on problem resolution and paths to resolution (e.g., mechanisms) by using visual maps (52). Concept mapping consists of brainstorming around the project question (either in person or online), synthesis of ideas, organization and sorting of ideas, evaluation of ideas on the basis of relevant dimensions (e.g., feasibility, cost, and relative importance), representation of ideas on visual maps, interpretation of the maps, engaging in a rating process, and deciding how to proceed (52, 53).
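In this tradition, the sorting data are typically analyzed by aggregating participants’ piles into a similarity matrix, projecting ideas onto a two-dimensional point map with multidimensional scaling, and grouping them with cluster analysis (52, 53). The sketch below illustrates that analytic core on simulated sorts; the item count, pile counts, and number of clusters are illustrative choices.

```python
# A hedged sketch of the analytic core of concept mapping: card sorts
# are aggregated into a similarity matrix, scaled to 2-D, and clustered.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
n_items, n_sorters = 12, 20
# Each sorter assigns every brainstormed idea to one of 3 piles.
sorts = rng.integers(0, 3, size=(n_sorters, n_items))

# Co-occurrence similarity: how often two ideas land in the same pile.
sim = np.zeros((n_items, n_items))
for s in sorts:
    sim += (s[:, None] == s[None, :]).astype(float)
dissim = 1.0 - sim / n_sorters

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)   # 2-D point map
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(coords)
print(clusters)  # cluster membership for each idea on the map
```

The resulting map and clusters are then interpreted and rated with stakeholders, as in the process described above.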

Concept mapping can be used to uncover intervention targets. For example, Onken (54) investigated the concept of a “supportive community” in mental health and found the constructs of participants’ basic needs, legal rights, community education, and availability of community services to be its most important dimensions. Given this finding, interventionists interested in enhancing supportive communities can further operationalize these constructs and use them as targets for intervention studies. In addition to such analysis, concept mapping can be used to uncover mechanisms of how implementation strategies work or do not work. For example, Sommerfeld and colleagues (55) used concept mapping to uncover factors associated with the implementation of cognitive-behavioral social skills training (CBSST) within assertive community treatment programs. They conducted focus groups with 87 stakeholders, which led to an informative visual map with 14 mechanisms deemed important to successful implementation (55). After sorting and rating the mechanisms on importance and changeability, a smaller set of mechanisms emerged as most salient (e.g., training support, alignment of leadership, and perceived benefits of CBSST) (55). In addition to informing implementation efforts, these factors can be modeled and empirically examined in explanatory trials of the implementation of CBSST into assertive community treatment programs.

Completion of studies applying methods from the first two components can lead to an informed conceptual framework with an a priori set of mediators. Once the pool of relevant mediating targets has been identified, iterative research is needed to facilitate intervention development (i.e., component 3, described next). Methods used to accomplish this third task ultimately contribute to the development of the most promising program, service, or policy initiative that is replete with empirical support and protocols to prepare for component 4 (described further below).

Component 3: Identify Intervention Strategies That Address Target Mechanisms

Generate ideas for intervention strategies (What are the “active ingredients”?).

For each plausible target mediator, the question to be answered becomes, Exactly how does the program, service, or policy address or bring about change in that target? It is crucial for program developers to map the specific content, activities, and processes the program uses or will use to bring about mechanistic change, as well as to identify what communication, messaging, or structural changes are likely to most effectively bring about change (56–58). The attention shifts from changing the outcome per se to changing the mechanisms of that outcome; when those mechanisms change, the desired change in the outcome should follow. This shift in focus is subtle but an important and defining feature of the experimental therapeutics approach.

Conduct feasibility and acceptability studies.

Feasibility and acceptability are crucial to any initiative. Developing new programs or translating efficacy studies (from highly controlled research settings) to routine clinical practice is complex. Small feasibility and acceptability studies can answer many important questions related to recruitment and retention, program or policy execution, acceptability, safety protocols, measurement, and fidelity. These elements are critical in preparing for a rigorous empirical trial.

Feasibility studies need to engage the population of interest in study development and provide time and space to listen to participants, discuss their perspectives, and refine and possibly change intervention activities. The overarching principles underlying such activities are sometimes referred to as community-based research or community-participatory partnered research (59, 60).

Conduct preliminary impact studies.

Small developmental trials that incorporate random assignment can be used to preliminarily explore mediational chains that are presumed to influence outcomes when sample sizes and resources do not allow for larger, more sophisticated designs. Such preliminary impact studies set the stage for large-scale randomized controlled trials (RCTs) (61). One such type of randomized trial has been called the randomized explanatory trial (RET) (62). RET denotes trials that are “scientific in motivation and aimed at causal understanding” (63) and are contrasted with what are called pragmatic trials that “evaluate therapeutic interventions in practice” (63). The RET concept was introduced >50 years ago, and considerable advancements have been made in trials that seek causal understanding of interventions. The current article provides an update of the RET concept by integrating it with experimental therapeutics and by adding to it modern conceptualizations and methods for trial-based causal analysis. (See the online supplement to this article for elaboration on what some readers may see as a different use of the RET term but which we propose can be useful for mental health services research.) RETs are not only small and exploratory—they can also be large scale (61); we discuss larger trials in the next section.

Importantly, in the context of experimental therapeutics, small-scale pilot RETs inform investigators on whether a program needs refinement. RETs do so by identifying nonsignificant change in the presumed mediators that the program was hypothesized to change. Such nonsignificant change informs investigators of the need to revise program activities aimed at the mediator for which change was not achieved. This information is valuable before embarking on a large and costly trial (64). Pilot RETs also empirically examine whether the presumed mediators are relevant to (or are correlated with) the ultimate outcome as presumed by the intervention designers (61, 64). If a given mediator is found not to be empirically relevant, the team has important information to possibly consider alternative targets for change or to drop that target mechanism. This pilot RET approach is cost-efficient and provides key information for program refinement before costly large-scale trials are pursued.
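A minimal sketch of these two pilot checks follows, using simulated data: link 1 asks whether the program moved each presumed mediator, and link 2 asks whether each mediator relates to the outcome. The mediator names are hypothetical.

```python
# A minimal sketch, on simulated data, of the two pilot RET checks:
# link 1 (did the program move each presumed mediator?) and
# link 2 (does each mediator relate to the outcome?).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 80
tx = rng.integers(0, 2, n)  # randomized program indicator
mediators = {
    "working_alliance": 0.6 * tx + rng.normal(size=n),
    "hopefulness": 0.0 * tx + rng.normal(size=n),  # a target the program misses
}
outcome = 0.5 * mediators["working_alliance"] + rng.normal(size=n)

for name, m in mediators.items():
    t, p_change = stats.ttest_ind(m[tx == 1], m[tx == 0])  # link 1
    r, p_rel = stats.pearsonr(m, outcome)                  # link 2
    print(f"{name}: program->mediator p={p_change:.3f}; "
          f"mediator-outcome r={r:.2f} (p={p_rel:.3f})")
```

A mediator that fails link 1 flags program activities needing revision; one that fails link 2 flags a target that may be dropped or replaced.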

Component 4: Use Advanced Analytic Methods

Once preliminary research shows initial support for program success, a fully powered trial can further examine the ability of the program to change target mechanisms and to identify which mediators (mechanisms) are most important to outcome change and for whom.

Conduct full-scale RETs.

A full-scale RET examines multiple mediators and moderators (what works for whom) simultaneously. A causal model links the program to the hypothesized mediators, links the mediators to outcomes, and then specifies moderators of both. Testable hypotheses or predictions are made on the basis of this model and then are empirically evaluated to provide perspectives on model viability (61). Such RETs address two core links: whether the program produces change in a given mediator and the strength of the relationship between the mediator and the outcome, thereby providing feedback on why a program works or does not work and how to improve it (65). If either of these links in the mediational chain is broken (i.e., nonsignificant), the broken link must be addressed in program revisions.

To implement a RET, a service team needs to use methods described in the three components discussed above to develop a strong conceptual logic model (65). In RETs, subgroup differences in program effects and mediator relevance are explored. Whereas many scientists define RCTs as a gold standard for evaluation research, we propose that RETs are an intriguing prospect and are worth exploration in mental health services research, because of their capability to simultaneously examine multiple mediators (61). It is not enough for evaluation research to document whether a program works. Rather, we must know how to improve the program. RETs may prove to be an important tool in psychiatric services research to accomplish this goal.

Conduct dismantling studies and multiphase optimization strategies.

When multicomponent interventions are delivered and found to be efficacious, some natural questions arise: Are all components equally effective? Can some components be eliminated, thereby increasing efficiency and reducing costs? One way to address these questions is to conduct dismantling studies (66). A dismantling design is a decomposition of a multicomponent intervention in which investigators compare a smaller intervention (with only a subset of components) with the complete intervention. Results are usually reported as a noninferiority trial, in which equivalence testing is used to examine whether the smaller intervention is no less efficacious than the original intervention (67). Additional arms of such studies can compare subcomponents with one another. Rather than focusing exclusively on outcomes, dismantling studies can include mediators to provide insights into the mechanisms through which each component influences (or fails to influence) outcomes.
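As a hedged sketch of the noninferiority logic, the code below compares a reduced intervention with the full package by testing the mean difference against a prespecified margin; the data, margin, and one-sided test are illustrative choices, and a full trial would use the equivalence-testing machinery described by Wellek (67).

```python
# A hedged sketch of a noninferiority comparison between a dismantled
# (reduced) intervention and the full package, via a one-sided t test
# against a prespecified margin; all values are simulated.
import numpy as np
from statsmodels.stats.weightstats import ttest_ind

rng = np.random.default_rng(3)
full = rng.normal(loc=1.00, scale=1.0, size=150)     # full-intervention outcomes
reduced = rng.normal(loc=0.95, scale=1.0, size=150)  # reduced-intervention outcomes

margin = 0.30  # largest clinically acceptable loss of benefit

# H0: mean(reduced) - mean(full) <= -margin (reduced is inferior)
# H1: mean(reduced) - mean(full) >  -margin (noninferiority)
tstat, pvalue, dof = ttest_ind(reduced, full, alternative="larger", value=-margin)
print(f"t = {tstat:.2f}, one-sided p = {pvalue:.4f}")
```

Rejecting H0 supports the claim that the reduced intervention loses no more than the margin relative to the full package.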

The multiphase optimization strategy (MOST) is a way to reduce intervention components to a manageable number, not all of which may be active, and to reduce the time involved in serial experiments when assessing component efficacy (68). The MOST methodology involves first screening interventions to identify a smaller set of efficacious components (preparation phase). These components and their intensity or dose are then calibrated by using further experimental designs to arrive at a finalized (smaller) intervention containing the most efficacious subcomponents (optimization phase). This intervention can then be subjected to a two-arm or k-arm RET to establish efficacy (evaluation phase). These trials also can be extended beyond outcome-only thinking to include mediators of each surviving component. MOST designs can be implemented within the types of multilevel contexts commonly encountered in human services settings (69).
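The sketch below illustrates the kind of factorial screening experiment on which MOST relies: a 2^3 design in which three candidate components are crossed, and main effects estimated by ordinary least squares indicate which components are worth retaining. Component names and effect sizes are invented.

```python
# A minimal sketch of a 2^3 factorial screening experiment of the kind
# MOST uses to identify efficacious components; data are simulated.
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for a, b, c in product([0, 1], repeat=3):   # 8 cells of the 2^3 design
    for _ in range(40):                     # 40 participants per cell
        y = 0.6 * a + 0.0 * b + 0.3 * c + rng.normal()
        rows.append({"comp_a": a, "comp_b": b, "comp_c": c, "y": y})
df = pd.DataFrame(rows)

# Main effects screen: components with meaningful effects are retained.
fit = smf.ols("y ~ comp_a + comp_b + comp_c", data=df).fit()
print(fit.summary().tables[1])
```

Here the inert component (comp_b) would be dropped, and the surviving components would move forward to dose calibration and a confirmatory trial.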

Conduct classical mediational analysis.

Numerous analytic frameworks for both mediation and moderation have been elaborated. Many investigators working in human services research settings are unable to conduct RCTs because of ethical or feasibility concerns and need ways to strengthen causal inference in their observational studies. The strategies described in this section, along with those in the following sections, can help mitigate bias in observational designs. For mediation, initial analytic efforts relied on the modeling of single mediators in regression contexts. Following the early work of Judd and Kenny (70) and Baron and Kenny (71), this approach involves estimating models with and without the presumed mediator and then examining any differences in the coefficients linking the treatment to the outcome in the two scenarios. This approach is sometimes referred to as the “difference” method. A second approach involves obtaining the product of two coefficients generated from two separate models—one that regresses the mediator on the intervention and another that regresses the outcome onto the mediator (while controlling for relevant covariates). Mediation is reflected by multiplying select coefficients across the two regression analyses. Such a coefficient product approach is often based on simplified Sobel-like tests and is referred to as the “product” method (72). These and other so-called traditional methods of mediation represent popular approaches to mediational analysis, especially for clinical trials (73). Two more modern methods of analysis have emerged that are worth considering as alternatives: one based in traditional structural equation modeling (SEM) and the other, called causal mediation analysis, derived from Pearl’s (74) structural causal modeling (SCM) framework.
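For concreteness, the sketch below applies both traditional estimators to simulated data with a single mediator; with ordinary least squares and one continuous mediator, the difference and product estimates coincide algebraically. All variable names are illustrative.

```python
# A minimal sketch of the "difference" and "product" methods for a
# single mediator, using simulated data and ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
tx = rng.integers(0, 2, n).astype(float)       # randomized treatment
m = 0.5 * tx + rng.normal(size=n)              # presumed mediator
y = 0.4 * m + 0.1 * tx + rng.normal(size=n)    # outcome

# Difference method: total effect c minus direct effect c'.
c_total = sm.OLS(y, sm.add_constant(tx)).fit().params[1]
fit_both = sm.OLS(y, sm.add_constant(np.column_stack([tx, m]))).fit()
c_prime, b = fit_both.params[1], fit_both.params[2]

# Product method: (treatment -> mediator) times (mediator -> outcome).
a = sm.OLS(m, sm.add_constant(tx)).fit().params[1]

print(f"difference method c - c': {c_total - c_prime:.3f}")
print(f"product method a * b:     {a * b:.3f}")
```

Inference on the indirect effect would then use a Sobel-like test or, preferably, bootstrapping of the a*b product.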

Use SEM and SCM.

SEM is an elegant multivariate framework for testing whether causal models depicted by influence diagrams are consistent with experimental or observational data generated by research designed to provide perspectives on the causal dynamics surrounding interventions (61). The causal model represented by the diagram makes predictions about how the research data should be patterned. If the predictions are borne out, one has increased confidence in the hypothesized causal model; if they are not, the model is rejected. Importantly, SEM can address multiple mediators, causal relationships among mediators, correlated disturbances, measurement error, longitudinal dynamics, and interaction effects among mediators as well as between treatments and mediators. It can handle both linear and nonlinear relationships for variables measured with diverse metrics (e.g., ordinally scaled or binary variables), all while allowing for control of confounders. For examples with RETs, see Jaccard (61) and Jaccard and Bo (65); for an introduction to SEM more generally, see Kline (75) and Hoyle (76); for critiques of SEM, see Bollen and Pearl (77). SEM is a far more powerful method for analyzing mediation than traditional methods based on difference and product coefficient approaches.
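A minimal sketch follows, assuming the open-source semopy package, which accepts lavaan-style model syntax; the two-mediator model, variable names, and data are invented for illustration.

```python
# A minimal SEM sketch with two parallel mediators, assuming the
# semopy package and its lavaan-style model description syntax.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(2)
n = 500
tx = rng.integers(0, 2, n).astype(float)
m1 = 0.5 * tx + rng.normal(size=n)
m2 = 0.3 * tx + rng.normal(size=n)
y = 0.4 * m1 + 0.2 * m2 + rng.normal(size=n)
df = pd.DataFrame({"tx": tx, "m1": m1, "m2": m2, "y": y})

# Structural model: treatment -> mediators -> outcome, plus direct path.
desc = """
m1 ~ tx
m2 ~ tx
y ~ m1 + m2 + tx
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p values
```

The same model could be estimated in lavaan (R) or other SEM software; the point is that all mediational paths are estimated jointly in one system.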

SCM is related to SEM but has evolved from different mathematical and statistical traditions. A noteworthy facet of SCM is known as causal mediation analysis (CMA), which is used to model mediators and relies on the potential outcomes framework to conceptualize causality (78). CMA is concerned with, among other things, capturing effects of unobserved variables that influence both the mediator and the outcome (“mediator-outcome confounding”) and with estimating interaction effects between the intervention and the mediator (“exposure-mediator interactions”). CMA uses formal definitions of direct (i.e., unmediated) and indirect (i.e., mediated) effects and estimates models in ways that accommodate interactions, nonlinearities, and other complications of observational data; for details of this method, see Hicks and Tingley (79), Imai et al. (80), and VanderWeele (81); for critiques of CMA, see Keele (82). A variety of macros in SAS, Stata, and SPSS allow for the fitting of such mediational models for single-mediator scenarios, but these models need to be extended to the multiple-mediator contexts that typify program evaluation research. Both SCM and CMA, being rooted in a theory of causality, represent robust alternatives to traditional methods of estimating causal effects.
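As one concrete route, the sketch below uses the Mediation class in statsmodels, which implements simulation-based causal mediation analysis for a single mediator; the variables are simulated, and the treatment-by-mediator term is our illustrative choice to show how an exposure-mediator interaction can be accommodated.

```python
# A hedged sketch of causal mediation analysis with one mediator,
# using statsmodels' simulation-based Mediation class.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"tx": rng.integers(0, 2, n).astype(float)})
df["m"] = 0.5 * df["tx"] + rng.normal(size=n)
df["y"] = 0.4 * df["m"] + 0.1 * df["tx"] + rng.normal(size=n)

# The outcome model includes a treatment-by-mediator interaction so an
# exposure-mediator interaction is accommodated rather than assumed away.
outcome_model = sm.OLS.from_formula("y ~ tx * m", data=df)
mediator_model = sm.OLS.from_formula("m ~ tx", data=df)

med = Mediation(outcome_model, mediator_model, exposure="tx", mediator="m")
results = med.fit(n_rep=500)  # simulation-based inference
print(results.summary())      # ACME (indirect), ADE (direct), total effect
```

Extending such analyses to the multiple-mediator contexts typical of program evaluation remains an active methodological frontier, as noted above.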

Discussion and Conclusions

Mental health services research could benefit from balancing implementation with a deeper understanding of how programs, services, and policies work, what has been deemed the “science of how” (4). In this article, we have described methods that can be applied to systematically evaluate whether presumed relationships are empirically supported while a program or policy initiative is refined and potential mechanisms of change are uncovered. The approach of focusing on mechanisms of change is also informative for advancing implementation science, as has been articulated by Lewis and colleagues (83). Although experimental therapeutics can slow the research process in terms of pragmatic deliverables, the methods ultimately can help answer the ever-important questions of how and for whom an intervention works, thereby achieving the long-term goal of effective interventions more quickly. A key asset of the approach is that it can help explain why certain interventions do not work in certain communities or what unique community-specific mechanisms exist that require targeting. The experimental therapeutics approach challenges the assumption that mechanisms of action are the same within all subgroups of people and accelerates the development of more specific—and more effective—interventions that can help meet the health and social needs of vulnerable populations.

Silver School of Social Work, New York University, New York City (Munson, Raghavan, Rodwin, Jaccard); School of Social Work, University of Alaska, Anchorage (Shimizu).
Send correspondence to Dr. Munson ().

The authors report no financial relationships with commercial interests.

The authors are grateful to their colleagues for reading and commenting on the manuscript.

References

1 Gordon J: An Experimental Therapeutic Approach to Psychosocial Interventions. Bethesda, MD, National Institute of Mental Health, 2017. https://www.nimh.nih.gov/about/director/messages/2017/an-experimental-therapeutic-approach-to-psychosocial-interventions. Accessed Dec 10, 2021

2 Insel TR: The NIMH experimental medicine initiative. World Psychiatry 2015; 14:151–153

3 Pintello D: Commentary: establishing scientific rigor and excellence in implementation science training to improve the deployment of evidence-based mental health services. Adm Policy Ment Health 2020; 47:265–271

4 Raghavan R, Munson MR, Le C: Toward an experimental therapeutics approach in human services research. Psychiatr Serv 2019; 70:1130–1137

5 Ganju V: Implementation of evidence-based practices in state mental health systems: implications for research and effectiveness studies. Schizophr Bull 2003; 29:125–131

6 Denyer D, Tranfield D: Producing a systematic review; in The Sage Handbook of Organizational Research Methods. Edited by Buchanan DA, Bryman A. Thousand Oaks, CA, Sage, 2009

7 Kothari BH, Blakeslee J, Miller R: Individual and interpersonal factors associated with psychosocial functioning among adolescents in foster care: a scoping review. Child Youth Serv Rev 2020; 118:105454

8 Baumeister H, Reichler L, Munzinger M, et al.: The impact of guidance on Internet-based mental health interventions—a systematic review. Internet Interv 2014; 1:205–215

9 Bosqui TJ, Marshoud B: Mechanisms of change for interventions aimed at improving the wellbeing, mental health and resilience of children and adolescents affected by war and armed conflict: a systematic review of reviews. Confl Health 2018; 12:15

10 Costa R, Colizzi M: The effect of cross-sex hormonal treatment on gender dysphoria individuals’ mental health: a systematic review. Neuropsychiatr Dis Treat 2016; 12:1953–1966

11 Lund C, Brooke-Sumner C, Baingana F, et al.: Social determinants of mental disorders and the Sustainable Development Goals: a systematic review of reviews. Lancet Psychiatry 2018; 5:357–369

12 Marsh JC, Angell B, Andrews CM, et al.: Client-provider relationship and treatment outcome: a systematic review of substance abuse, child welfare, and mental health services research. J Soc Social Work Res 2012; 3:233–267

13 Powell BJ, Proctor EK, Glass JE: A systematic review of strategies for implementing empirically supported mental health interventions. Res Soc Work Pract 2014; 24:192–212

14 Romano M, Peters L: Evaluating the mechanisms of change in motivational interviewing in the treatment of mental health problems: a review and meta-analysis. Clin Psychol Rev 2015; 38:1–12

15 Winsper C, Crawford-Docherty A, Weich S, et al.: How do recovery-oriented interventions contribute to personal mental health recovery? A systematic review and logic model. Clin Psychol Rev 2020; 76:101815

16 Jensen PS, Hoagwood K, Trickett EJ: Ivory towers or earthen trenches? Community collaborations to foster real-world research. Appl Dev Sci 1999; 3:206–212

17 Callard F, Rose D, Wykes T: Close to the bench as well as at the bedside: involving service users in all phases of translational research. Health Expect 2012; 15:389–400

18 Isom J, Balasuriya L: Nothing about us without us in policy creation and implementation. Psychiatr Serv 2021; 72:121

19 Jones N, Byrne L, Carr S: If not now, when? COVID-19, lived experience, and a moment for real change. Lancet Psychiatry 2020; 7:1008–1009

20 Rose D: Participatory research: real or imagined. Soc Psychiatry Psychiatr Epidemiol 2018; 53:765–771

21 Beresford P: Public participation in health and social care: exploring the co-production of knowledge. Front Sociol 2019; 3:41

22 Brown M, Jones N: Service user participation within the mental health system: deepening engagement. Psychiatr Serv 2021; 72:963–965

23 Hovmand PS: Group Model Building and Community-Based System Dynamics Process. New York, Springer, 2014

24 Windsor LC, Benoit E, Pinto RM, et al.: Enhancing behavioral intervention science: using community-based participatory research principles with the multiphase optimization strategy. Transl Behav Med 2021; 11:1596–1605

25 Cabassa LJ, Gomes AP, Meyreles Q, et al.: Using the collaborative intervention planning framework to adapt a health-care manager intervention to a new population and provider group to improve the health of people with serious mental illness. Implement Sci 2014; 9:178

26 McMillen JC, Lenze SL, Hawley KM, et al.: Revisiting practice-based research networks as a platform for mental health services research. Adm Policy Ment Health 2009; 36:308–321

27 Nutting PA, Beasley JW, Werner JJ: Practice-based research networks answer primary care questions. JAMA 1999; 281:686–688

28 Westfall JM, VanVorst RF, Main DS, et al.: Community-based participatory research in practice-based research networks. Ann Fam Med 2006; 4:8–14

29 Kelly EL, Kiger H, Gaba R, et al.: The Recovery-Oriented Care Collaborative: a practice-based research network to improve care for people with serious mental illnesses. Psychiatr Serv 2015; 66:1132–1134

30 Khodyakov D, Stockdale S, Jones A, et al.: On measuring community participation in research. Health Educ Behav 2013; 40:346–354

31 Goodman MS, Sanders Thompson VL, Johnson CA, et al.: Evaluating community engagement in research: quantitative measure development. J Community Psychol 2017; 45:17–32

32 Goodman MS, Ackermann N, Bowen DJ, et al.: Content validation of a quantitative stakeholder engagement measure. J Community Psychol 2019; 47:1937–1951

33 Sandoval JA, Lucero J, Oetzel J, et al.: Process and outcome constructs for evaluating community-based participatory research projects: a matrix of existing measures. Health Educ Res 2012; 27:680–690

34 Staniszewska S, Brett J, Simera I, et al.: GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ 2017; 358:j3453

35 Myers N, Sood A, Fox KE, et al.: Decision making about pathways through care for racially and ethnically diverse young adults with early psychosis. Psychiatr Serv 2019; 70:184–190

36 Cabassa LJ, Manrique Y, Meyreles Q, et al.: “Treated me . . . like I was family”: qualitative evaluation of a culturally-adapted health care manager intervention for Latinos with serious mental illness and at risk for cardiovascular disease. Transcult Psychiatry 2019; 56:1218–1236

37 Padgett DK, Henwood B, Abrams C, et al.: Engagement and retention in services among formerly homeless adults with co-occurring mental illness and substance abuse: voices from the margins. Psychiatr Rehabil J 2008; 31:226–233

38 Chinman M, George P, Dougherty RH, et al.: Peer support services for individuals with serious mental illnesses: assessing the evidence. Psychiatr Serv 2014; 65:429–441

39 Davidson L, Bellamy C, Guy K, et al.: Peer support among persons with severe mental illnesses: a review of evidence and experience. World Psychiatry 2012; 11:123–128

40 Pfeiffer PN, Heisler M, Piette JD, et al.: Efficacy of peer support interventions for depression: a meta-analysis. Gen Hosp Psychiatry 2011; 33:29–36

41 Walker G, Bryant W: Peer support in adult mental health services: a metasynthesis of qualitative findings. Psychiatr Rehabil J 2013; 36:28–34

42 Sells D, Davidson L, Jewell C, et al.: The treatment relationship in peer-based and regular case management for clients with severe mental illness. Psychiatr Serv 2006; 57:1179–1184

43 Gidugu V, Rogers ES, Harrington S, et al.: Individual peer support: a qualitative study of mechanisms of its effectiveness. Community Ment Health J 2015; 51:445–452

44 Resnick SG, Rosenheck RA: Integrating peer-provided services: a quasi-experimental study of recovery orientation, confidence, and empowerment. Psychiatr Serv 2008; 59:1307–1314

45 Watson D, Stanton K, Clark LA: Self-report indicators of negative valence constructs within the research domain criteria (RDoC): a critical review. J Affect Disord 2017; 216:58–69

46 Fishbein M, Ajzen I: Predicting and Changing Behavior: The Reasoned Action Approach. New York, Psychology Press, 2011

47 Mendoza GA, Prabhu R: Participatory modeling and analysis for sustainable forest management: overview of soft system dynamics models and applications. For Policy Econ 2006; 9:179–196

48 Hirsch GB, Levine R, Miller RL: Using system dynamics modeling to understand the impact of social change initiatives. Am J Community Psychol 2007; 39:239–253

49 Haroz EE, Fine SL, Lee C, et al.: Planning for suicide prevention in Thai refugee camps: using community-based system dynamics modeling. Asian Am J Psychol 2021; 12:193–203

50 Noubani A, Diaconu K, Ghandour L, et al.: A community-based system dynamics approach for understanding factors affecting mental health and health seeking behaviors in Beirut and Beqaa regions of Lebanon. Global Health 2020; 16:28

51 Trani J-F, Ballard E, Bakhshi P, et al.: Community based system dynamic as an approach for understanding and acting on messy problems: a case study for global mental health intervention in Afghanistan. Confl Health 2016; 10:25

52 Trochim W, Kane M: Concept mapping: an introduction to structured conceptualization in health care. Int J Qual Health Care 2005; 17:187–191

53 Kane M, Trochim WM: Concept Mapping for Planning and Evaluation. Thousand Oaks, CA, Sage, 2007

54 Onken SJ: Mental health consumer concept mapping of supportive community. Eval Program Plann 2018; 71:36–45

55 Sommerfeld DH, Aarons GA, Naqvi JB, et al.: Stakeholder perspectives on implementing cognitive behavioral social skills training on assertive community treatment teams. Adm Policy Ment Health 2019; 46:188–199

56 Moore GF, Audrey S, Barker M, et al.: Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015; 350:h1258

57 O’Cathain A, Croot L, Duncan E, et al.: Guidance on how to develop complex interventions to improve health and healthcare. BMJ Open 2019; 9:e029954

58 Munson MR, Jaccard J: Mental health service use among young adults: a communication framework for program development. Adm Policy Ment Health 2018; 45:62–80

59 Jones L, Wells K: Strategies for academic and clinician engagement in community-participatory partnered research. JAMA 2007; 297:407–410

60 Munson MR, Cole A, Jaccard J, et al.: An engagement intervention for young adults with serious mental health conditions. J Behav Health Serv Res 2016; 43:542–563

61 Jaccard J: Program Evaluation and Randomized Explanatory Trials: A Structural Equation Modeling Approach. Miami, Applied Scientific Analysis, 2022

62 Schwartz D, Lellouch J: Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis 1967; 20:637–648

63 Charlton BG: Understanding Randomized Controlled Trials: Explanatory or Pragmatic? Oxford, Oxford University Press, 1994

64 Munson MR, Jaccard J, Scott LD Jr, et al.: Outcomes of a metaintervention to improve treatment engagement among young adults with serious mental illnesses: application of a pilot randomized explanatory design. J Adolesc Health 2021; 69:790–796

65 Jaccard J, Bo A: Prevention science and child/youth development: randomized explanatory trials for integrating theory, method, and analysis in program evaluation. J Soc Social Work Res 2018; 9:651–687

66 Papa A, Follette WC: Dismantling studies of psychotherapy; in The Encyclopedia of Clinical Psychology. Edited by Cautin RL, Lilienfeld SO. Hoboken, NJ, Wiley, 2015

67 Wellek S: Testing Statistical Hypotheses of Equivalence and Noninferiority. Boca Raton, FL, CRC Press, 2010

68 Collins LM: Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). New York, Springer, 2018

69 Collins LM, Murphy SA, Strecher V: The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med 2007; 32(suppl):S112–S118

70 Judd CM, Kenny DA: Process analysis: estimating mediation in treatment evaluations. Eval Rev 1981; 5:602–619

71 Baron RM, Kenny DA: The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol 1986; 51:1173–1182

72 MacKinnon D: Introduction to Statistical Mediation Analysis. New York, Taylor & Francis, 2008

73 Vo T-T, Superchi C, Boutron I, et al.: The conduct and reporting of mediation analysis in recently published randomized controlled trials: results from a methodological systematic review. J Clin Epidemiol 2020; 117:78–88

74 Pearl J: Causal inference in statistics: an overview. Stat Surv 2009; 3:96–146

75 Kline RB: Principles and Practice of Structural Equation Modeling. New York, Guilford Press, 2015

76 Hoyle RH (ed): Handbook of Structural Equation Modeling. New York, Guilford Press, 2012

77 Bollen KA, Pearl J: Eight myths about causality and structural equation models; in Handbook of Causal Analysis for Social Research. Edited by Morgan S. New York, Springer, 2013

78 Holland PW: Statistics and causal inference. J Am Stat Assoc 1986; 81:945–960

79 Hicks R, Tingley D: Causal mediation analysis. Stata J 2011; 11:605–619

80 Imai K, Keele L, Tingley D: A general approach to causal mediation analysis. Psychol Methods 2010; 15:309–334

81 VanderWeele T: Explanation in Causal Inference: Methods for Mediation and Interaction. Oxford, Oxford University Press, 2015

82 Keele L: Causal mediation analysis: warning! Assumptions ahead. Am J Eval 2015; 36:500–513

83 Lewis CC, Klasnja P, Powell BJ, et al.: From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health 2018; 6:136