The Humble Leader: Association of Discrepancies in Leader and Follower Ratings of Implementation Leadership With Organizational Climate in Mental Health

Abstract

Objectives:

Discrepancies, or perceptual distance, between leaders’ self-ratings and followers’ ratings of the leader are common but usually go unrecognized. Research on discrepancies is limited, but there is evidence that discrepancies are associated with organizational context. This study examined leader-follower discrepancies in Implementation Leadership Scale (ILS) ratings of mental health clinic leaders and the association of those discrepancies with organizational climate for involvement and performance feedback. Both involvement and performance feedback are important for evidence-based practice (EBP) implementation in mental health.

Methods:

A total of 593 individuals—supervisors (leaders, N=80) and clinical service providers (followers, N=513)—completed surveys that included ratings of implementation leadership and organizational climate. Polynomial regression and response surface analyses were conducted to examine the associations of discrepancies in leader-follower ILS ratings with organizational involvement climate and performance feedback climate, aspects of climate likely to support EBP implementation.

Results:

Both involvement climate and performance feedback climate were highest where leaders rated themselves low on the ILS and their followers rated those leaders high on the ILS (“humble leaders”).

Conclusions:

Teams with “humble leaders” showed more positive organizational climate for involvement and for performance feedback, contextual factors important during EBP implementation and sustainment. Discrepancy in leader and follower ratings of implementation leadership should be a consideration in understanding and improving leadership and organizational climate for mental health services and for EBP implementation and sustainment in mental health and other allied health settings.

There is increasing demand for the use of public health interventions supported by rigorous scientific research, but frequently the promise of such evidence-based practices (EBPs) fails to translate into their effective implementation, sustained use, or intended public health benefits. To bridge this gap between research and effective delivery in practice, researchers increasingly recognize the importance of studying the process of EBP implementation and sustainment (1–4). Although individual provider factors contribute to successful EBP implementation (5), organizational factors are likely to have an equal or greater influence on EBP implementation (6,7). Leadership is one factor that has been suggested to play an important role in the organizational context and implementation of health innovations (8–10).

Organizational climate that supports EBP implementation and sustainment can facilitate implementation, and leadership is an antecedent of organizational culture and climate (11–17). For example, more positive leadership is associated with a climate of involvement, in which followers feel involved in problem solving and organizational decision making (18). Leaders who emphasize the importance of learning and who establish trust with their followers foster development of a positive feedback climate, which encourages receipt of formal and informal performance feedback (19). Leader “credibility” has also been identified as an important facet of feedback climate, because leaders should be knowledgeable about their followers’ assigned tasks in order to accurately judge performance on those tasks (20).

Early research on leadership and implementation focused on general leadership constructs, such as transformational leadership (21,22). Leaders enact transformational leadership through behaviors that embody inspirational motivation, individualized consideration of followers, ability to engender buy-in and intellectual stimulation, and idealized influence or serving as a role model (23). However, research on developing specific types of climates, such as safety climate (24,25) and service climate (26), has increasingly considered leadership focused on the achievement of a specific strategic outcome—for example, reducing accidents and improving customer service, respectively. Such a strategic leadership approach can also be applied to EBP implementation in the form of implementation leadership (27).

Implementing EBPs can be incredibly challenging and requires specific leader attributes, such as being knowledgeable about EBPs, engaging in proactive problem solving, persevering in the face of implementation challenges, and supporting service providers in the implementation process. The Implementation Leadership Scale (ILS) was developed as a pragmatic, brief, and efficient (3,28,29) measure to assess these leadership behaviors that are thought to promote a strategic climate for implementing and sustaining EBPs (27). The construct of implementation leadership is complementary to general leadership and is the focus of this study, which involved “first-level leaders” and their followers. Although the “follower” label may have a negative or narrow connotation in some instances, the relatively nascent work on “followership” has begun to shift from this negative connotation to one that views followers as proactively involved in the leadership process. Therefore, we argue that without the concept of followers or followership, it is difficult to fully understand the leadership process, and thus the use of the terms “leader” and “follower” is appropriate for this study (30,31). First-level leaders (that is, those who supervise others who provide direct services) may be particularly influential in supporting new practices because they are on the front line directly supervising clinicians and bridging organizational imperatives and clinical service provision as EBPs are integrated into daily work routines (32). However, leaders and followers do not always agree about the leader’s behavior.

Research comparing leader and follower leadership ratings has focused on agreement and outcomes related to agreement. For example, Atwater and Yammarino’s (33) model of leader-follower agreement posits that congruence in positive leadership ratings is more likely linked to positive outcomes and, conversely, that leader-follower agreement in negative leadership ratings is linked to negative outcomes. For leaders who under- or overestimate their own leadership abilities and skills, findings are equivocal. For example, one set of studies found that leaders who rated themselves lower in relation to others’ ratings of them were considered to be more effective as leaders (34,35). Other studies have shown that leaders who overestimate their leadership abilities tend to use hard persuasion tactics, such as pressure, to influence followers (36). Followers of such leaders are likely to think unfavorably of such tactics and recognize the leaders’ erroneous evaluation of their own strengths. Moreover, leaders who overestimate their leadership behaviors tend to misdiagnose their strengths, adversely affecting their effectiveness as leaders (33). Although these studies have added to an understanding of the different types of disagreement, there has been limited research specifically focusing on perceptual distance, or discrepancy in ratings of leadership, and its effect on outcomes, such as organizational climate. This is an important area of inquiry, because recent work has shown that mental health leader-follower discrepancies in transformational leadership ratings can negatively affect organizational culture (37).

This study, conducted in public mental health organizations, addressed the extent to which leader-follower discrepancies in leadership ratings are related to the organizational climate of the leaders’ units, particularly with regard to organizational climate for involvement and performance feedback. Climate for involvement is important because EBP implementation requires participation and buy-in across organizational levels, especially for clinicians and service providers. Indeed, congruence of leadership across multiple levels may also be important during implementation (10). Climate for performance feedback is also critically important for EBP implementation, in that feedback and coaching regarding intervention fidelity are a core part of implementing many EBPs. For example, in previous work in home-based services, a key implementation strategy was providing feedback through in-vivo coach observation and real-time feedback (38,39). Thus it is important to understand how implementation leadership affects the organizational climate of involvement and feedback.

The purpose of this study was to examine the association of discrepancy between leaders’ (that is, clinic supervisors) self-ratings and their followers’ (that is, clinical service providers) ratings on the ILS and the associations of discrepancy with involvement and performance feedback climate in the leaders’ teams. On the basis of past research showing that leaders who underestimate their leadership may be more effective (34,35), we hypothesized that discrepancies—when leaders rated themselves lower than their follower ratings of them—would be associated with a more positive climate for involvement and performance feedback.

Methods

Participants

Participants were 753 public mental health team leaders (leaders) and the service providers whom they supervised (followers) from 31 mental health service organizations in California. Of the 753 eligible participants, 593 (80 leaders and 513 providers) completed the measures that were used in these analyses (79% response rate).

Data Collection Procedures

This study was conducted from approximately January 2013 to December 2014. The research team first obtained permission from agency executive directors or their designees to recruit leaders and their followers for participation in the study. Eligible leaders were identified as those who directly supervise staff in mental health treatment teams or work groups. Data collection was completed by using an online survey or in person with a paper-and-pencil survey. For online surveys, each participant received a link to the Web survey and a unique password via e-mail. For in-person surveys, paper forms were provided and completed at team meetings. In previous research, we found no differences in ILS scores by method of survey administration (40). The survey took approximately 20–40 minutes to complete, and participants received incentives by e-mail following survey completion. The Institutional Review Board of San Diego State University approved this study. Participation was voluntary, and informed consent was obtained from all participants.

Measures

ILS.

The ILS includes 12 items scored on a scale of 0, not at all, to 4, to a very great extent (27). The ILS includes four subscales: proactive leadership (Cronbach’s α=.93), knowledgeable leadership (α=.95), supportive leadership (α=.90), and perseverant leadership (α=.93). The total ILS score (α=.95) was computed by averaging the means of the four subscales. The complete ILS measure and scoring instructions can be found in the “additional files” associated with the original scale development study (27). Leaders completed self-ratings of implementation leadership, and followers completed ratings of their leader’s implementation leadership.
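As a minimal illustration of this scoring approach (the item-to-subscale assignment below is hypothetical; the actual mapping is given in the scale development study cited above):

```python
import numpy as np

# Hypothetical item-to-subscale mapping for the 12 ILS items (indices 0-11);
# the actual assignment is specified in the original scale development study (27).
SUBSCALES = {
    "proactive": [0, 1, 2],
    "knowledgeable": [3, 4, 5],
    "supportive": [6, 7, 8],
    "perseverant": [9, 10, 11],
}

def score_ils(item_responses):
    """Score one respondent's 12 ILS items (each rated 0-4).

    Returns subscale means and a total score computed as the mean of the
    four subscale means, mirroring the scoring described in the text.
    """
    items = np.asarray(item_responses, dtype=float)
    subscale_means = {name: items[idx].mean() for name, idx in SUBSCALES.items()}
    total = float(np.mean(list(subscale_means.values())))
    return subscale_means, total

# Example: one follower's ratings of a leader
subscales, total = score_ils([3, 4, 3, 2, 3, 3, 4, 4, 3, 2, 3, 3])
print(subscales, round(total, 2))
```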

Organizational Climate Measure.

The Organizational Climate Measure (OCM) consists of 17 subscales that capture a range of organizational climate dimensions, with items scored on a scale of 0, definitely false, to 3, definitely true (41). In this study, we used the involvement (α=.87; six items) and performance feedback (α=.79; five items) subscales, which measure potentially important aspects of organizational climate for implementation. Clinicians completed these OCM subscales.

Statistical Analyses

Follower ratings were aggregated to create a team-level rating of implementation leadership for each leader. Intraclass correlation coefficients and average within-group agreement statistics (42) supported the aggregation of team ratings to the unit level (average within-group agreement >.70). As in the study by Fleenor and colleagues (43) and as recommended by Shanock and colleagues (44), scores were standardized, and scores that differed by ≥.5 standard deviations were considered discrepant values.
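The aggregation and discrepancy classification described above might be sketched as follows (the data and column names are illustrative only; the within-group agreement statistics cited in the text are not reproduced here):

```python
import numpy as np
import pandas as pd

# Illustrative follower-level data: one row per follower rating of a leader
# (values and column names are hypothetical, not from the study's dataset).
followers = pd.DataFrame({
    "leader_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "ils_total": [3.2, 2.9, 3.1, 1.8, 2.2, 2.6, 2.4, 2.8],
})
leaders = pd.DataFrame({"leader_id": [1, 2, 3], "ils_self": [2.1, 3.4, 2.5]})

# Aggregate follower ratings to the team (unit) level and join leader self-ratings.
team = (followers.groupby("leader_id", as_index=False)["ils_total"].mean()
        .rename(columns={"ils_total": "ils_follower"})
        .merge(leaders, on="leader_id"))

# Standardize both ratings and flag discrepancies of at least .5 standard deviations,
# following the approach the text attributes to Shanock and colleagues (44).
for col in ("ils_follower", "ils_self"):
    team[col + "_z"] = (team[col] - team[col].mean()) / team[col].std(ddof=1)
diff = team["ils_self_z"] - team["ils_follower_z"]
team["group"] = np.select(
    [diff <= -0.5, diff >= 0.5],
    ["self lower (humble)", "self higher"],
    default="in agreement",
)
print(team[["leader_id", "ils_follower", "ils_self", "group"]])
```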

To explore the relationship between discrepancies in leadership ratings and organizational climate (that is, involvement climate and performance feedback climate), we conducted polynomial regressions and response surface analyses (44–46). As in past research that used this technique, we focused on the slope and curvature along the y=x and y=−x axes of the response surface, because they correspond directly to the substantive research questions of interest. The y=x axis is the axis along which follower and leader ratings are congruent, whereas the y=−x axis is the axis along which follower and leader ratings are incongruent. The relationship between organizational climate and either congruence or incongruence of ILS ratings was then explored by examining the response surfaces of the alignment between leader and follower ratings of implementation leadership and associations with organizational climate.
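A minimal sketch of this kind of analysis, using simulated data, is shown below. The assignment of leader ratings to x and follower ratings to y, and the coefficient combinations for the surface parameters, follow the standard response surface approach described in the cited methodological work (44–46); nothing here reproduces estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80
x = rng.normal(size=n)   # centered leader self-ratings on the ILS (simulated)
y = rng.normal(size=n)   # centered follower ratings on the ILS (simulated)
# Simulated climate outcome with an arbitrary discrepancy effect built in.
z = 2.0 - 0.2 * x + 0.3 * y + 0.1 * (y - x) ** 2 + rng.normal(scale=0.2, size=n)

# Second-order polynomial: z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
X = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
b = np.linalg.lstsq(X, z, rcond=None)[0]

# Response surface parameters (standard combinations of the regression coefficients).
a1 = b[1] + b[2]          # slope along the line of congruence (y = x)
a2 = b[3] + b[4] + b[5]   # curvature along the line of congruence
a3 = b[1] - b[2]          # slope along the line of incongruence (y = -x)
a4 = b[3] - b[4] + b[5]   # curvature along the line of incongruence
print({"a1": round(a1, 3), "a2": round(a2, 3), "a3": round(a3, 3), "a4": round(a4, 3)})
# Note: significance tests of a1-a4 additionally require the coefficient
# covariance matrix; that step is omitted from this sketch.
```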

Results

Table 1 provides demographic information about the leaders and providers. Means, standard deviations, and correlations among the study variables included in the discrepancy analyses are presented in Table 2. Before polynomial regression and response surface analyses for examining discrepancies were conducted, ILS data were analyzed to ensure that discrepancies existed in the data (44). Three groups were identified: 31% (N=33) of leaders rated themselves higher than their followers rated them; for 33% (N=35) of leaders, ratings were in agreement with their followers’ ratings of them; and 36% (N=38) of leaders rated themselves lower than their followers rated them. Thus over 65% (N=71) of the sample showed discrepancies.

TABLE 1. Characteristics of supervisors (leaders) and clinical service providers (followers)

Characteristic                        Leaders (N=80)       Followers (N=513)
                                      N        %           N        %
Age (M±SD)                            45.4±9.9             37.3±9.5
Years of experience (M±SD)            13.8±7.6             6.2±5.1
Years in agency (M±SD)                5.9±4.6              3.2±2.9
Gender
 Male                                 20       25          119      23
 Female                               60       75          394      77
Race-ethnicity
 Caucasian                            57       71          214      44
 African American                     3        4           85       17
 Asian American                       9        11          28       6
 Other race-ethnicity                 11       14          165      34
 Hispanic                             10       13          214      42
Education level
 High school only                                          14       3
 Some college                         3        4           48       9
 Bachelor's degree                    1        1           117      23
 Some graduate work                   1        1           38       7
 Master's degree                      69       86          290      57
 Doctoral degree                      6        8           6        1
Major of highest degree
 Marriage and family therapy          20       25          108      22
 Social work                          4        5           56       11
 Psychology                           2        3           34       7
 Child development                    2        3           31       6
 Human relations                      38       48          144      29
 Other                                14       17          118      24

TABLE 2. Scores on the Implementation Leadership Scale (ILS) and the Organizational Climate Measure (OCM) and correlations between variablesa

Variable                         M      SD     1       2      3
1. Provider ILS ratings          2.42   .79
2. Leader ILS ratings            2.39   .92    .17
3. OCM involvement               1.81   .48    .27**   –.06
4. OCM performance feedback      1.95   .43    .31**   –.09   .64**

aPossible ILS mean scores for both leaders and providers range from 0 to 4, with higher scores indicating more positive implementation leadership. Possible OCM mean scores on dimensions of involvement and performance feedback climate range from 0 to 3, with higher scores indicating more positive climate for either involvement or performance feedback, respectively.

**p<.01

Results for the polynomial regression for associations between discrepancy on the ILS and the OCM involvement subscale are provided in a detailed table in the online supplement. The response surface is depicted in Figure 1. The line of incongruence (the dashed line in Figure 1) had a significant slope (a3=−.30, t=−3.15, df=77, p<.01) and curvature (a4=.42, t=3.47, df=77, p<.01). The significant slope indicates that involvement climate scores were higher when leader ILS ratings were low and follower ILS ratings were high compared with when leader ILS ratings were high and follower ILS ratings were low. Thus involvement climate was affected by discrepancy differently, depending on whose ILS rating was more favorable (that is, direction of discrepancy matters). The significant positive curvature (that is, convex surface) shown in Figure 1 indicates that involvement climate scores were higher as levels of discrepancy increased.
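To make the reported slope and curvature concrete, the standard response surface algebra (a sketch of the general method, not reproduced from the article) evaluates the fitted second-order polynomial along the line of incongruence. Writing L for the centered leader self-rating and F for the centered follower rating:

```latex
\begin{align}
\hat{Z} &= b_0 + b_1 L + b_2 F + b_3 L^2 + b_4 LF + b_5 F^2 \\
\intertext{Along the line of incongruence ($F = -L$):}
\hat{Z} &= b_0 + (b_1 - b_2)\,L + (b_3 - b_4 + b_5)\,L^2 = b_0 + a_3 L + a_4 L^2
\end{align}
```

With a3=−.30 and a4=.42, the surface along this line is an upward-curving parabola whose predicted climate declines as the leader self-rating rises relative to the follower rating, which is the pattern described in the paragraph above.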

FIGURE 1. Response surface for involvement climate predicted from the discrepancy between leader and staff ratings on the Implementation Leadership Scale (ILS)a

aOn the Organizational Climate Measure (OCM) possible mean scores for the involvement dimension range from 0 to 3, with higher scores indicating more positive organizational climate for involvement. Scores for both leaders and providers on the ILS range from 0 to 4, with higher scores indicating more positive implementation leadership. Although the OCM mean subscale scores range from 0 to 3, the predicted range on the y axis is from 1 to 4 in these analyses.

With regard to the line of congruence (the solid line), the slope was nonsignificant (a1=.12, p=.22), indicating that involvement climate scores were not different when leaders and followers agreed that ILS levels were high compared with when they agreed that ILS levels were low. However, the curvature of the line of congruence was significant, indicating that the lowest levels of involvement occurred when there was agreement between leaders and followers regarding ILS ratings of the leader. As a follow-up analysis to clarify the nature of the findings, we compared the four corner points of the response surface, in line with recommendations of Lee and Antonakis (47). This analysis revealed that involvement was highest at the left corner of the response surface (labeled A in Figure 1), where leaders rated themselves low and followers rated the leader high on the ILS. As summarized in the table in the online supplement, point A was significantly higher than all other corners of the surface (points B, C, and D), and the other three points were not different from each other.
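The corner comparison referenced above amounts to evaluating the fitted polynomial at extreme combinations of leader and follower ratings and then testing whether the predicted values differ. A self-contained sketch is given below; the coefficients and the ±2 SD corner locations are illustrative assumptions, and the formal F tests reported in Table 3 would additionally require the coefficient covariance matrix, which is omitted here.

```python
# Sketch of comparing predicted values at the four corners of a response surface.
# The coefficients below (b0..b5) are illustrative placeholders, not study estimates;
# x = centered leader self-rating, y = centered follower rating.
b = [2.0, -0.10, 0.20, 0.05, -0.15, 0.05]

def predict(x, y):
    return b[0] + b[1]*x + b[2]*y + b[3]*x**2 + b[4]*x*y + b[5]*y**2

low, high = -2.0, 2.0  # assumed corner locations in SD units (not specified in the excerpt)
corners = {
    "leader low, follower high (point A in the text)": predict(low, high),
    "leader high, follower high": predict(high, high),
    "leader low, follower low": predict(low, low),
    "leader high, follower low": predict(high, low),
}
for label, value in corners.items():
    print(f"{label}: {value:.2f}")
```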

The table in the online supplement provides the detailed polynomial regression results for the associations between discrepancy on the ILS and the OCM performance feedback climate subscale, and the corresponding response surface is provided in Figure 2. The line of incongruence (the dashed line in Figure 2) had a significant slope (a3=−.37, t=−5.15, df=77, p<.001) and curvature (a4=.47, t=6.29, df=77, p<.001). The significant slope indicates that performance feedback scores were higher when leader ILS ratings were low and follower ILS ratings were high compared with when leader ILS ratings were high and follower ILS ratings were low. Thus performance feedback climate was affected by discrepancy differently, depending on whose ILS rating was more favorable (that is, direction of discrepancy matters). The significant, positive curvature (that is, convex surface) indicates that performance feedback climate scores were higher as levels of discrepancy increased.

FIGURE 2. Response surface for performance feedback climate predicted from the discrepancy between leader and staff ratings on the Implementation Leadership Scale (ILS)a

aOn the Organizational Climate Measure (OCM) possible mean scores for the performance feedback dimension range from 0 to 3, with higher scores indicating more positive climate for performance feedback. Mean scores for both leaders and providers on the ILS range from 0 to 4, with higher scores indicating more positive implementation leadership. Although the OCM mean subscale scores range from 0 to 3, the predicted range on the y axis is from 1 to 4.5 in these analyses.

With regard to the line of congruence (the solid line), the slope was also significant (a1=.16, t=2.25, df=76, p<.05), meaning that performance feedback scores were different when leaders and followers agreed that ILS scores were high versus when they agreed that ILS scores were low. Likewise, the curvature of the line of congruence was significant, indicating that the lowest levels of feedback climate occurred when there was agreement regarding ILS scores. As in the follow-up analysis conducted for involvement, we compared the four corner points of the response surface (47). This analysis revealed that performance feedback climate was highest at the left corner of the response surface (labeled A in Figure 2), where followers’ ILS ratings were high and leaders rated themselves low. As summarized in Table 3, point A was significantly higher than all other corners of the surface (points B, C, and D), point B was significantly higher than point D, and none of the other comparisons were significant.

TABLE 3. Tests of equality between predicted values for response surfaces for involvement and performance feedback on the Organizational Climate Measure (OCM)a

Response surface feature                 Involvement     Performance feedback
Point (predicted value)
 A                                       3.84            4.33
 B                                       2.67            2.95
 C                                       2.64            2.84
 D                                       2.20            2.30
Along edges of surface (test of equality)b
 A vs. B                                 7.09*           11.23***
 B vs. C                                 .01             .07
 C vs. D                                 .98             1.66
 D vs. A                                 11.33***        20.83***
Along diagonal lines
 A vs. C                                 10.38***        26.42***
 B vs. D                                 2.70            9.17*

aSee Figure 1 and Figure 2 for results of response surface analyses.

bValues are F statistics (df=1 and 74).

*p<.05, ***p<.001

Discussion

We found three almost equally distributed discrepancy-agreement groups: leaders and followers who agreed, leaders who rated themselves higher than did their followers, and leaders who rated themselves lower than did their followers. We refer to the latter as “humble leaders.” Organizational climates for involvement and feedback were most positive for humble leaders. These findings are consistent with research examining general leadership in other settings (34,35) and support the effectiveness of humble leaders (48,49). Moreover, discrepancies were associated with two aspects of organizational climate likely to be important for EBP implementation and sustainment.

Humble leadership was associated with significantly higher involvement and performance feedback climates compared with leadership characterized by high self-ratings and low follower ratings. This finding suggests that this leader-follower dynamic, in which leaders rate themselves lower than do their followers, creates a more positive climate that supports the leader’s capacity to implement EBPs. For example, leader humility has been found to be associated with increased humble behaviors of followers and the development of a shared team process that supports team goal achievement (50). However, the presence of humble leadership does not necessarily mean that EBPs will be implemented effectively. It is likely that effective leadership is a necessary but not sufficient condition for effective implementation and that leadership is one component of organizational capacity for implementation (51). Further research is needed to better understand the nuances of how leader-follower discrepancies develop and influence follower experiences of their workplace and to examine additional factors that may have an impact on effective implementation for both leaders and followers. Qualitative or mixed-methods approaches might be used to better understand leader and follower perceptions of leadership and their relationships to implementation climate (52) and to advance leadership and climate improvement strategies.

There are promising interventions for improving leadership and organizational context for implementation. The Leadership and Organizational Change for Implementation intervention (53) combines principles of transformational leadership with implementation leadership to train first-level leaders to develop a more positive EBP implementation climate in their teams while working with organizations to ensure the availability of organizational processes and supports (for example, fidelity feedback, educational materials, and coaching) for effective implementation. Another example is the ARC (availability, responsiveness, continuity) implementation strategy, which works across organizational levels to improve molar organizational culture and climate (54). In another approach, Zohar and Polachek (55) demonstrated that providing feedback to leaders about their followers’ perceptions of the team’s safety climate affected leaders’ verbalizations and behaviors, organizational safety climate, and safety outcomes. Thus there may be multiple strategies (some extremely low cost and low burden) that can be employed to influence leader cognition and behavior and ultimately improve organizational context and strategic outcomes.

There is a need for brief and pragmatic measures to guide leader development, with the goal of changing strategic climate and improving implementation (56). Leader self-ratings can be compared with provider ratings of the leader to provide insight to leaders about the degree to which their own perspective is aligned with that of their followers. Thus the ILS can be used by health care and allied health care organizations so that leadership for EBP implementation can be assessed at any stage of the implementation process, as outlined in the Exploration, Preparation, Implementation, Sustainment (EPIS) framework (1). In the early implementation phases (for example, exploration and preparation), leaders might be provided training in effective leadership to support EBP implementation. Such an implementation strategy could contribute to facilitating the implementation process.

Some limitations of this study should be noted. First, this study focused on organizational climate supportive of implementation context as the distal outcome. Future studies of implementation leadership should examine additional outcomes, such as implementation effectiveness, innovation effectiveness, and patient outcomes (57,58). Second, this study was conducted in mental health organizations. Generalizability of these findings should be examined through replication in other health and allied health service sectors. Third, in this study there were apparent differences in race-ethnicity distribution for the samples of leaders and followers. There have been calls for leadership research to examine the degree to which such differences affect perceptions, relative power, and causality (59). Although it was beyond the purview of this study, we recommend more detailed examination of these issues. Finally, the data were cross-sectional; future research should examine these relationships prospectively, in addition to examining whether leader interventions may affect leader-follower discrepancies.

Conclusions

Effective EBP implementation and sustainment is critical to improve the impact of effective interventions. Sadly, many implementation efforts fail or do not deliver interventions with the needed rigor or fidelity. It is critical to understand how health care organization leaders and providers interact to create an organizational climate conducive to effective implementation and sustainment. This study demonstrated that discrepancy, or perceptual distance, in regard to organizational leadership has an impact on organizational climate relevant for EBP implementation. Leadership and organizational interventions to improve implementation and sustainment should be further developed and tested in order to advance implementation science and improve the public health impact of investments in clinical intervention development and implementation.

Dr. Aarons, Ms. Torres, and Ms. Finn are with the Department of Psychiatry, University of California, San Diego, La Jolla (e-mail: ). They are also with the Child and Adolescent Services Research Center, San Diego. Dr. Ehrhart is with the Department of Psychology, San Diego State University, San Diego. Dr. Beidas is with the Department of Psychiatry, University of Pennsylvania, Philadelphia.

This work was presented at the Annual Conference on the Science of Dissemination and Implementation, held in Bethesda, Maryland, December 8–9, 2014.

This study was supported by National Institute of Mental Health grants R21MH098124 (Dr. Ehrhart, principal investigator), R21MH082731 and R01MH072961 (Dr. Aarons, principal investigator), K23MH099179 (Dr. Beidas, principal investigator), P30MH074678 (John A. Landsverk, Ph.D., principal investigator), and R25MH080916 (Enola K. Proctor, Ph.D., principal investigator).

Dr. Beidas reports receipt of royalties from Oxford University Press. The other authors report no financial relationships with commercial interests.

The authors thank the community-based organizations, leaders, and clinicians that made this study possible.

References

1 Aarons GA, Hurlburt M, Horwitz SM: Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research 38:4–23, 2011

2 Damschroder LJ, Aron DC, Keith RE, et al.: Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science 4:50–64, 2009

3 Proctor EK, Landsverk J, Aarons G, et al.: Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research 36:24–34, 2009

4 Bammer G: Integration and implementation sciences: building a new specialization. Ecology and Society 2:6, 2005

5 Aarons GA: Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research 6:61–74, 2004

6 Jacobs JA, Dodson EA, Baker EA, et al.: Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners. Public Health Reports 125:736–742, 2010

7 Beidas RS, Marcus S, Aarons GA, et al.: Predictors of community therapists’ use of therapy techniques in a large public mental health system. JAMA Pediatrics 169:374–382, 2015

8 Bass BM: Leadership and Performance Beyond Expectations. New York, Free Press, 1985

9 Bass BM, Avolio BJ: The implications of transformational and transactional leadership for individual, team, and organizational development; in Research in Organizational Change and Development. Edited by Pasmore W, Woodman RW. Greenwich, Conn, JAI Press, 1990

10 Aarons GA, Ehrhart MG, Farahnak LR, et al.: Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annual Review of Public Health 35:255–274, 2014

11 Aarons GA, Sawitzky AC: Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services 3:61–72, 2006

12 Aarons GA, Sawitzky AC: Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health and Mental Health Services Research 33:289–301, 2006

13 Ehrhart MG, Schneider B, Macey WH: Organizational Climate and Culture: An Introduction to Theory, Research, and Practice. New York, Routledge, 2014

14 Ehrhart MG: Leadership and procedural justice climate as antecedents of unit-level organizational citizenship behavior. Personnel Psychology 57:61–94, 2004

15 Litwin G, Stringer R: Motivation and Organizational Climate. Cambridge, Mass, Harvard University Press, 1968

16 Zohar D, Tenne-Gazit O: Transformational leadership and group interaction as climate antecedents: a social network analysis. Journal of Applied Psychology 93:744–757, 2008

17 Tsui AS, Zhang Z-X, Wang H, et al.: Unpacking the relationship between CEO leadership behavior and organizational culture. Leadership Quarterly 17:113–137, 2006

18 Richardson HA, Vandenberg RJ: Integrating managerial perceptions and transformational leadership into a work-unit level model of employee involvement. Journal of Organizational Behavior 26:561–589, 2005

19 Baker A, Perreault D, Reid A, et al.: Feedback and organizations: feedback is good, feedback-friendly culture is better. Canadian Psychology 54:260–268, 2013

20 Steelman LA, Levy PE, Snell AF: The Feedback Environment Scale: construct definition, measurement, and validation. Educational and Psychological Measurement 64:165–184, 2004

21 Michaelis B, Stegmaier R, Sonntag K: Shedding light on followers’ innovation implementation behavior: the role of transformational leadership, commitment to change, and climate for initiative. Journal of Managerial Psychology 25:408–429, 2010

22 Aarons GA, Sommerfeld DH: Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child and Adolescent Psychiatry 51:423–431, 2012

23 Bass BM, Avolio BJ: MLQ: Multifactor Leadership Questionnaire. Technical Report. Binghamton, NY, Binghamton University, Center for Leadership Studies, 1995

24 Barling J, Loughlin C, Kelloway EK: Development and test of a model linking safety-specific transformational leadership and occupational safety. Journal of Applied Psychology 87:488–496, 2002

25 Zohar D: Modifying supervisory practices to improve subunit safety: a leadership-based intervention model. Journal of Applied Psychology 87:156–163, 2002

26 Schneider B, Ehrhart MG, Mayer DM, et al.: Understanding organization-customer links in service settings. Academy of Management Journal 48:1017–1032, 2005

27 Aarons GA, Ehrhart MG, Farahnak LR: The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implementation Science 9:45, 2014

28 Martinez RG, Lewis CC, Weiner BJ: Instrumentation issues in implementation science. Implementation Science 9:118, 2014

29 Glasgow RE, Riley WT: Pragmatic measures: what they are and why we need them. American Journal of Preventive Medicine 45:237–243, 2013

30 Van Vugt M, Hogan R, Kaiser RB: Leadership, followership, and evolution: some lessons from the past. American Psychologist 63:182–196, 2008

31 Kellerman B: Followership: How Followers Are Creating Change and Changing Leaders. Boston, Harvard Business School Press, 2008

32 Priestland A, Hanig R: Developing first-level leaders. Harvard Business Review 83:112–120, 150, 2005

33 Atwater LE, Yammarino FJ: Self–other rating agreement; in Research in Personnel and Human Resources Management. Edited by Ferris GR. Greenwich, Conn, JAI Press, 1997

34 Van Velsor E, Taylor S, Leslie J: An examination of the relationships among self-perception accuracy, self-awareness, gender, and leader effectiveness. Human Resource Management 32:249–264, 1993

35 Atwater LE, Roush P, Fischthal A: The influence of upward feedback on self and follower ratings of leadership. Personnel Psychology 48:35–59, 1995

36 Berson Y, Sosik JJ: The relationship between self-other rating agreement and influence tactics and organizational processes. Group and Organization Management 32:675–698, 2007

37 Aarons GA, Ehrhart MG, Farahnak LR, et al.: Discrepancies in leader and follower ratings of transformational leadership: relationships with organizational culture in mental health. Administration and Policy in Mental Health and Mental Health Services Research, in press

38 Aarons GA, Sommerfeld DH, Hecht DB, et al.: The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: evidence for a protective effect. Journal of Consulting and Clinical Psychology 77:270–280, 2009

39 Chaffin M, Hecht D, Bard D, et al.: A statewide trial of the SafeCare home-based services model with parents in Child Protective Services. Pediatrics 129:509–515, 2012

40 Finn NK, Torres EM, Ehrhart MG, et al.: Cross-validation of the Implementation Leadership Scale (ILS) in child welfare service organizations. Child Maltreatment 21:250–255, 2016

41 Patterson MG, West MA, Shackleton VJ, et al.: Validating the Organizational Climate Measure: links to managerial practices, productivity and innovation. Journal of Organizational Behavior 26:379–408, 2005

42 Brown RD, Hauenstein NMA: Interrater agreement reconsidered: an alternative to the RWG indices. Organizational Research Methods 8:165–184, 2005

43 Fleenor JW, McCauley CD, Brutus S: Self-other rating agreement and leader effectiveness. Leadership Quarterly 7:487–506, 1996

44 Shanock LR, Baran BE, Gentry WA, et al.: Polynomial regression with response surface analysis: a powerful approach for examining moderation and overcoming limitations of difference scores. Journal of Business and Psychology 25:543–554, 2010

45 Edwards JR: Alternatives to difference scores: polynomial regression and response surface methodology; in Advances in Measurement and Data Analysis. Edited by Drasgow F, Schmitt NW. San Francisco, Jossey-Bass, 2002

46 Shanock LR, Allen JA, Dunn AM, et al.: Less acting, more doing: how surface acting relates to perceived meeting effectiveness and other employee outcomes. Journal of Occupational and Organizational Psychology 86:457–476, 2013

47 Lee YT, Antonakis J: When preference is not satisfied but the individual is: how power distance moderates person-job fit. Journal of Management in Medicine 40:641–675, 2014

48 Morris JA, Brotheridge CM, Urbanski JC: Bringing humility to leadership: antecedents and consequences of leader humility. Human Relations 58:1323–1350, 2005

49 Owens BP, Hekman DR: Modeling how to grow: an inductive examination of humble leader behaviors, contingencies, and outcomes. Academy of Management Journal 55:787–818, 2012

50 Owens B, Hekman DR: How does leader humility influence team performance? Exploring the mechanisms of contagion and collective promotion focus. Academy of Management Journal, in press

51 Guerrero EG, Aarons GA, Palinkas LA: Organizational capacity for service integration in community-based addiction health services. American Journal of Public Health 104:e40–e47, 2014

52 Ehrhart MG, Aarons GA, Farahnak LR: Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS). Implementation Science 9:157, 2014

53 Aarons GA, Ehrhart MG, Farahnak LR, et al.: Leadership and Organizational Change for Implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implementation Science 10:11, 2015

54 Glisson C, Schoenwald SK, Hemmelgarn A, et al.: Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology 78:537–550, 2010

55 Zohar D, Polachek T: Discourse-based intervention for modifying supervisory communication as leverage for safety climate and performance improvement: a randomized field study. Journal of Applied Psychology 99:113–124, 2014

56 Lewis CC, Weiner BJ, Stanick C, et al.: Advancing implementation science through measure development and evaluation: a study protocol. Implementation Science 10:102, 2015

57 Klein KJ, Conn AB, Sorra JS: Implementing computerized technology: an organizational analysis. Journal of Applied Psychology 86:811–824, 2001

58 Proctor E, Silmere H, Raghavan R, et al.: Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research 38:65–76, 2011

59 Ospina S, Foldy E: A critical review of race and ethnicity in the leadership literature: surfacing context, power and the collective dimensions of leadership. Leadership Quarterly 20:876–896, 2009