
Abstract

Objective:

Routine evaluation of mental health services has become widespread, and the use of patient-reported outcome measures (PROMs) as clinical aids or discussion tools has been receiving increasing attention. The purpose of this scoping study was to provide a typology of the diverse ways in which studies reporting on PROM use in mental health services have utilized PROMs.

Methods:

Iterative scoping searches of the literature identified articles reporting on the use of PROM feedback in mental health settings, which were then categorized to develop a typology along a dimension of intensity of PROM feedback use, ranging from no feedback to either patient or clinician, to clinician-patient discussion that followed a formalized structure.

Results:

Of the 172 studies that were identified, 27 were grouped into five categories, ranging from studies in which there was no PROM feedback to clinician or patient, to studies in which a formalized structure was available by which PROM feedback could be discussed between clinician and patient. Of the 11 studies in the category with formalized feedback, nine reported some significant effects of feedback compared with a control condition, and two reported partial significant effects.

Conclusions:

The proposed procedural typology helps explain the diversity of results from studies reporting on the effects of PROM feedback, by highlighting that PROM feedback appears to be more effective when integrated in a formalized and structured manner. Future work is required to isolate these effects from common procedural correlates, such as monitoring of the therapeutic alliance.

Routine formal evaluation of the outcomes of patient care has become increasingly widespread and plays an important role in mental health service provision (1). Over time, an immense array of patient-reported outcome measures (PROMs) has been developed, with the aim of including patients’ perspectives in the process of health service provision. PROMs have typically been defined as patient-rated standardized measures of health or functional status, disability, participation, quality of life, well-being, or other specific and relevant outcomes of treatment, such as depression or anxiety (2,3).

Systematic reviews of the literature on the use of PROMs in clinical practice have typically found that use of PROMs is associated with improvements in some aspects of care outcomes and quality of care. However, clear conclusions are difficult to derive because of methodological limitations and lack of clarity regarding the goals and mechanisms of using PROMs (4,5). Greenhalgh (6) provided an overview of the various ways and purposes of PROM use in clinical practice and presented the following categories: screening tools, monitoring tools, promotion of patient-centered care, decision aids, methods to facilitate communication among multidisciplinary teams, and evaluation of the effectiveness of routine care and assessment of quality of care. The first three uses involve individual-level data, and the last three involve group-level data.

Boyce and Browne (7) systematically reviewed studies that investigated the effects of providing PROM feedback to health care professionals. Only one of the 16 studies in their review reported an overall positive effect. This study reported on the results of an intervention at a hospital-based psychotherapy clinic (8). That study used the 45-item Outcome Questionnaire (OQ-45) (9), a measure of client progress in therapy, as its PROM. Patients in the patient-therapist feedback group, in which results from repeated PROM administration were discussed between patient and therapist, later showed significantly greater improvements in PROM scores than patients in the group that received treatment as usual and those in another group in which only the therapist received PROM feedback.

Using PROM feedback with patients is consistent with the principles of recovery, which focus on the transformative aspects of overcoming mental health issues and thus emphasize self-determination and an individual’s sense of agency (10,11). PROM feedback not only allows the clinician to provide information on the patient’s progress but also attempts to capture patients’ views about whether they are progressing and helps patients appraise themselves and reflect on their own recovery journey.

Previous reviews of the effects of using PROMs have focused on various aspects of PROMs, such as the purpose and nature of applications (6), or the usefulness of PROM feedback at the patient and group level (7). However, a systematic description of the range of procedures by which patient feedback is obtained in mental health services is lacking. In particular, the various levels of provider-patient communication associated with these procedures have not been systematically explored. The purpose of this scoping study was to provide a typology of the ways in which studies reporting on PROM use in mental health services have administered PROMs. Understanding the scope of the literature and categorizing studies by levels of intensity of PROM feedback highlights new ways of analysis that can help explain the diversity of outcomes in investigations of the effects of PROM use (6,7) and provide clarity about whether the provision of PROM feedback is indeed associated with positive outcomes.

Methods

Scoping Study

Scoping studies are particularly suitable when the goal is to determine the scope and nature of a field that includes studies with a large range of methodologies (12,13). The procedures used are similar to those used in systematic literature reviews but tend to focus more on breadth rather than depth of the literature and thus do not exclude studies on the basis of quality criteria. Because of the diversity of the study methods examined, the common analytical framework is a descriptive-analytical method within the narrative tradition (12). Scoping studies chart the evidence and procedures of studies to increase conceptual clarity and to map the conceptual boundaries of a specific topic area (14).

Note that various terms are used to describe people receiving mental health services, including patients, clients, consumers, and service users. In order to be consistent with the established term PROM, we generally refer to this group as patients, while acknowledging the sensitivity of this term, particularly to those who regard themselves as service users in recovery.

Search Strategy

This scoping study was guided by an iterative search strategy (12). After we became familiar with the literature on PROMs, we conducted structured searches on the database Scopus for peer-reviewed journal articles, with no restrictions on year of publication or language. Given the variety of terms used to describe this broad topic, search strategies were initially based on a related systematic review in palliative care (15), as well as on other recommendations in regard to the most sensitive and specific combination of terms with mental health content (16). Our review focused on PROM use in mental health settings, although the focus was initially broadly defined to capture a wide range of articles. The database search retrieved 59 articles, of which 13 were retained for more detailed review (3,17–28). Manual searches yielded two further review articles (7,29). After iterative searches of reference lists, citation searches, and specific searches of articles from prominent researchers in the area, a total of 166 articles were identified. Of these, 109 were excluded because they used measures that were not standardized PROMs or because they were not about research in mental health settings but about mental health aspects of other fields, such as oncology, rehabilitation, general clinical practice, or substance abuse. Studies were also excluded if they merely reported on psychometric properties of PROMs, were surveys on the uptake of PROMs, or were opinion pieces. Of the remaining 57 articles, 28 were review articles, and 29 empirical articles were categorized as outlined below. During the peer-review process, the anonymous peer reviewers identified another six studies that were also included.

Categorization of Articles

Scoping studies follow an iterative process (12) that continually refines mapping criteria as new evidence is identified and analyzed. Therefore, we met regularly for discussion to agree on adequate ways to categorize articles into levels of intensity of PROM feedback used. The final typology is presented in the box below. It included five categories, ranging from category 1 (PROM scores not fed back to clinician or patient) to category 5 (PROM scores fed back to clinician and patient, with a formalized structure to guide clinician-patient discussions).

Description of criteria used to categorize articles by levels of intensity of feedback discussion of patient-reported outcome measures (PROMs)

Category 1: PROMs used with no feedback provided to the clinician or patient

Studies that used PROMs to assess the effect of treatment or an intervention, typically by pre-post comparison of measures. The outcome reports were not fed back to the clinicians or the clients and in no way informed the intervention or treatment.

Category 2: PROM results reported back to the clinician

Studies in which clients completed PROMs at some stage of their treatment, often at baseline and after treatment. The outcome reports were routinely fed back to clinicians but not to the clients, although clinicians were able to provide PROM feedback to their clients at their own discretion. This way of using PROM feedback enabled the clinicians to make decisions regarding the treatment plan.

Category 3: PROM results reported back to the clinician and client

Studies that used PROMs to monitor the treatment outcome and fed back the outcome reports not only to the clinicians but also to the clients. Clinicians were able to react to clients’ progress, but no process of including the outcome report in a discussion between the clinician and client was proposed; if discussions occurred, they were therefore incidental.

Category 4: PROM results reported back to the clinician and client, with opportunities created for discussion

Studies that reported on PROM feedback to both clinician and client, with opportunities created for discussion of outcomes. Such discussion was able to influence subsequent treatment, but the discussion was unstructured or the authors did not report a structure or process.

Category 5: PROM results reported back to the clinician and client, with a formal procedure in which a discussion of PROMs can affect subsequent treatment

Studies in which PROM results were fed back to the clinician and client and were available for discussion for the purpose of informing subsequent treatment. The procedure for including PROMs in any such discussion was formalized and structured through use of guidelines and recommendations.

Although studies in category 1 by definition cannot provide any information on the effects of PROM feedback, retaining this category was useful for the purpose of establishing a typology of PROM feedback provision. Category 2 studies provide PROM feedback to clinicians, and studies in category 3 provide feedback to both clinicians and patients. In studies of categories 2 and 3, discussion of PROM results may take place, although entirely at the discretion of the clinician. Any such discussion would therefore be incidental only. In studies included in category 4, clinician-patient PROM discussion is actively encouraged, but no formal structure guides this process. Finally, category 5 includes studies in which clinician-patient PROM discussion was actively encouraged and guided by available formal guidelines.

For any study to be allocated to one of the five categories, group consensus was required. Two of the authors (CUK and KJC) carefully read and categorized the articles independently and iteratively. Disagreement was resolved by discussion, which at times resulted in further refinement of the category wording. The other authors assisted with categorization of a selection of articles.

Most studies included control groups (typically category 1), but our categorization was based on the procedure of the intervention group. Some studies included two interventions that belonged to different categories, in which case the study was allocated to the highest category (8,30–32). Of the 35 reviewed studies, four could not be assessed because of incomplete information (24,33–35). Three additional studies were removed because they reported on a data set that was used in another study included in the review (36–38). Two studies reported on different subgroups of the same data set and were treated as one study (39,40).

Results

Table 1 lists the 27 studies included in this review and provides a description of each study’s sample, PROM feedback procedure, and results. Two studies belonged to category 1 (41,42), eight to category 2 (39,43–49), four to category 3 (32,50–52), two to category 4 (53,54), and 11 to category 5 (8,30,31,55–62). Almost half the studies involved samples from the United States (8,30,31,41,42,44,46,47,49,58,59,61,62). One article reported on a study conducted in six European countries (57). Apart from an Australian study (53), the remaining studies were from European countries: Germany (39,43,48,54), the United Kingdom (50,52,60), the Netherlands (32,45), Ireland (56), Norway (55), and Sweden (51). The study populations were diverse, including hospital inpatients (39,43,53,54,62), hospital or institution outpatients (8,32,45,51,61), and clients of a variety of community-based services (41,44,48,50,52,55,57,60). Nine studies reported data from clients of university counseling services (30,31,42,46,47,49,56,58,59), all of which, except for three (56,58,59), were from the same university.

Table 1 Summary of studies identified by a scoping study on use of patient-reported outcome measures (PROMs), categorized by intensity of use of PROM feedbackᵃ

Category and study | Design | Sample | PROM | Procedure for PROM use | Results
Category 1
 Christensen et al., 2004 (41)Randomized controlled trial (RCT) 134 married couples with serious and chronic distress undergoing a free therapy program in 2 U.S. cities32-item Dyadic Adjustment Scale, a self-report measure of marital satisfaction; 3 subscales from the Marital Satisfaction Inventory–Revised: 22 items on global distress, 19 on problem-solving communication, and 13 on affective communication; 14-item Marital Status Inventory, a measure of thoughts and tentative and actual steps undertaken toward divorce; 68-item Mental Health Index, a measure of current symptoms, life satisfaction, and well-being (subscale of the Compass Outpatient Treatment Assessment System)Couples were randomly assigned to 1 of 2 treatment conditions (comparing 2 treatment types). All completed various screening measures before and at intake. At intake and 13 and 26 weeks, couples completed all PROMs. At the end of treatment, clients completed measures of relationship satisfaction and an evaluation of services.The 2 treatment types were compared on change in PROM scores.
 Hannan et al., 2005 (42)Single-group posttest618 clients at a U.S. university outpatient clinic 45-item Outcome Questionnaire (OQ-45) (9), a measure of client progress on 3 dimensions: subjective discomfort (25 items), interpersonal relationships (9 items), and social role performance (11 items)Clients completed the OQ-45 before each therapy session. Routine feedback to therapists was suspended for a period of 3 weeks to examine therapists’ ability to estimate client progress.Therapists tended to overpredict client improvement and not to predict deterioration.
Category 2
 Berking et al. 2006 (43)RCT118 inpatients at a German psychosomatics, psychotherapy, and behavioral medicine clinic 11-item German version of the Brief Symptom Inventory (BSI); 12-item German version of the Inventory of Interpersonal Problems (IIP), a self-rated measure of interpersonal difficulties; 10-item Incongruence Questionnaire (INK), assessing extent of congruence of current situation with one’s motivations and goals; 42-item Questionnaire to Assess Changes in Experiencing and Behavior (VEV), a measure of therapy-induced changes in experience and behaviorPatients receiving cognitive-behavioral therapy (CBT) were randomly allocated to either a feedback or a no-feedback condition. All patients completed the Emotionality Questionnaire, BSI, IIP, and INK on day 1, day 3, and then weekly. In the feedback condition, therapists received results the following day. At the end of therapy, patients completed the VEV.Average improvement on all outcome measures was significantly greater in the feedback group.
 Bickman et al., 2011 (44)RCT (substantial attrition)340 youths (ages 11–18) receiving home-based services from a private, for-profit, behavioral health organization at 28 U.S. sites 32-item Symptoms and Functioning Severity Scale, a measure of the frequency of emotions or behaviors linked to typical mental disorders of youthsClients were randomly allocated to an experimental or a control group. At the end of a treatment session, clients completed a paper questionnaire. Clinicians of clients in the experimental group received weekly feedback (mean scores and alerts) and cumulative feedback every 90 days. Clinicians of clients in the control group received only the 90-day feedback.As indicated by PROMs, clients in the experimental group improved significantly faster than those in the control group.
 de Jong et al., 2012 (45)RCT413 outpatients receiving psychiatric treatment at a Dutch medium-sized health care institution OQ-45, Dutch versionPatients were randomly allocated to an experimental feedback group or a no-feedback control group. All patients completed the PROM after sessions 1, 3, and 5 and then after every 5th session. After each PROM completion, therapists in the feedback group received an e-mail with information on the patient’s PROM progress. No alarms were used, but therapists were able to identify “not-on-track” patients. The study also examined to what extent therapist characteristics moderate effects of feedback, and therapists completed a use-of-feedback questionnaire at the end of the study.For clients identified as “not on track,” feedback resulted in a significant positive effect on PROMs when therapists reported using feedback with their clients.
 Lambert et al., 2001 (46)RCT609 clients at a U.S. university counseling center OQ-45Clients were randomly allocated to an experimental or a control group. All clients completed the OQ-45 at intake and before each treatment session. Data for the control group were not shared with clients or therapists. In the experimental group, therapists received results on a graph and were alerted to the client’s progress with a color-coding system. Clinicians’ reactions to the feedback were not managed, with no mechanism to use feedback in any systematic way.For clients identified as “not on track,” feedback resulted in significantly better outcome scores and significantly longer treatment duration. For clients “on track,” no significant differences were noted in outcome measures, and number of treatment sessions was significantly fewer for the feedback condition.
 Lambert et al., 2002 (47)Quasi-experimental; intervention conducted after data for the control group had been collected1,020 clients at a U.S. university counseling center OQ-45Intended as a replication of Lambert et al. (46) with a larger sample. Clients in 1999 summer and fall semesters were assigned to the control group; clients in 2000 winter and spring semesters were assigned to the experimental (feedback) condition. All clients completed the OQ-45 at intake and before each treatment session. Data from the control group were not shared with clients or therapists. In the experimental group, therapists received results on a graph and were alerted to the client’s progress with a color-coding system. Clinicians’ reactions to the feedback were not managed, with no mechanism to use feedback in any systematic way. Therapists with clients in the feedback group received a tracking form, which suggested possible clinician actions in response to feedback.For “not-on-track clients,” feedback resulted in significantly better outcome scores and significantly longer treatment duration. For “on-track” clients, no significant differences were noted in outcome measures or treatment duration.
 Lutz et al., 2012 (48)RCT1,708 clients receiving outpatient psychotherapy in 1 of 3 regions in GermanyGerman version of the BSI; German version of the IIP; 12-item Short-Form Health Status Instrument; additional measures depending on the patient’s main diagnosis.Clinicians were randomly allocated to an experimental or a control group (treatment as usual). In both groups, PROMs were used at intake, discharge, and 1 year later. In the experimental group, patients completed PROMs 5 times during treatment. In the experimental group, therapists received immediate PROM feedback (summary and graphs) about their patients. There were no prescriptive guidelines on use of PROM feedback; therapists could incorporate this information into therapy at their discretion. Detailed information can be found in the final report of the so-called TK model. Lutz and colleagues later noted that the study results need to be interpreted with caution because of some compromising externally imposed design modifications.Feedback did not affect PROM scores. The groups also did not differ in treatment length.
 Probst et al., 2013 (39)RCT252 inpatients recruited from a psychosomatics department of a hospital and a psychosomatics hospital, both in Germany. Probst et al. (39) reported results from 43 patients at risk of outcome deterioration. Probst et al. (40) reported on 209 patients considered to be “on track.”OQ-45, German versionPatients were randomly allocated to an experimental or a control group. All patients completed the OQ-45 every weekend. On Mondays, therapists of patients in the experimental group received feedback reports. Therapists could choose to discuss feedback with patients. Also included was the Assessment of Signal Cases scale, which measures therapeutic alliance, motivation for change, social support, and critical life events. This is part of clinical support tools (CST), which provide empirically based problem-solving strategies.For patients at risk of deterioration, feedback significantly improved outcome scores (39). For patients “on track,” feedback did not have a significant effect (40).
 Whipple et al., 2003 (49)Quasi-experimental; random assignment to experimental and intervention groups; nonrandom assignment to 1 of the experimental groups358 adult clients in a U.S. university counseling centerOQ-45Clients were randomly allocated to an experimental (feedback) or a control group. All clients completed the OQ-45 at intake and before each treatment session. In the feedback group, therapists received results on a graph, along with suggested decision rules, and were alerted to the client’s progress with a color-coding system. Therapists of clients in the feedback group who were considered “not on track” received a tracking form, which suggested possible clinician actions in response to feedback. The experimental group was further divided into a feedback-only group and a feedback plus CST group. However, use of CST was nonrandom and depended on therapists’ decisions to use CST.For clients “not on track,” feedback plus CST resulted in significantly higher outcome scores than feedback only, which in turn resulted in significantly higher scores than no feedback. For clients “on track,” no significant group differences were noted. Clients considered “not on track” in either of the 2 feedback groups remained in therapy significantly longer than “not-on-track” clients in the control group. For “on-track” clients, therapy duration was significantly longer for the control group than for the 2 feedback groups.
Category 3
 Cheyne and Kinn, 2001 (50)Pilot RCT42 consecutive referrals for alcohol counseling at a range of U.K. community-based cognitive-behavioral counseling servicesSchedule for the Evaluation of Individual Quality of Life (SEIQoL) (68), on which respondents rate the importance of life areas to their overall quality of lifeClients were randomly allocated to an experimental or a control group. Clients in the experimental group completed the SEIQoL together with the therapist at the first and final counseling sessions and at 4- and 8-week review appointments. Four weeks after completion of treatment, all participants were mailed a questionnaire about satisfaction with services and outcomes achieved (42% response rate).The experimental condition had a larger proportion of clients with favorable outcomes (not statistically significant). A separate publication (36) reported qualitative data on therapists’ positive experiences of completing the SEIQoL with clients.
 de Jong et al., 2014 (32)RCT475 outpatients at Dutch private psychotherapy practices and mental health institutes OQ-45, Dutch versionPatients were randomly allocated to a no-feedback control group, a therapist-only feedback group, or a therapist-patient feedback group. All patients completed the OQ-45 online (typically on a laptop in the therapist’s waiting room) before each therapy session but not more than once a week. In the 2 feedback conditions, PROM scores and feedback messages were generated immediately, and subsequent discussion of feedback was at the therapists’ discretion.Group differences in OQ-45 scores at treatment end were not significant, although the therapist-client feedback group had the smallest number of deteriorated cases. For “not-on-track” clients, feedback prevented negative outcomes.
 Hansson et al., 2013 (51)RCT262 patients in 2 general psychiatry outpatient clinics in SwedenOQ-45, Swedish versionPatients were randomly allocated to an experimental or a control group. Patients completed the OQ-45 at intake and at each clinic visit but not more than once a week. Therapists of patients in the experimental group received patients’ OQ-45 scores via a Web application before each subsequent visit; these scores were also handed to the patient. In the control group, neither therapist nor patient received feedback.Patients in the experimental group had greater improvements in their outcome scores (not statistically significant).
 Slade et al., 2006 (52)RCT160 patients of 8 U.K. community mental health teams12-item Manchester Short Assessment (MANSA); a measure of quality of lifePatients were randomly allocated to an experimental or a control group. Both groups received treatment as usual. Patients and therapists in the experimental group also completed a monthly mailed questionnaire and were sent identical feedback every 3 months in the form of graphics and text that also highlighted areas of disagreement between patient and therapist.No significant group differences were noted in quality-of-life scores or in scores of patient-rated unmet needs and other secondary measures rated by therapists. Patients in the experimental group had significantly fewer psychiatric inpatient days.
Category 4
 Newnham et al., 2010 (53)Historical cohort design1,308 consecutive inpatients and day patients participating in a 10-day CBT group at a private psychiatric hospital in Australia5-item World Health Organization Well-Being Index (WHO-5), a measure of positive mental health; 4 subscales (4-item vitality, 2-item social functioning, 3-item role emotion, and 5-item mental health) of the Short Form–36 Health Survey (SF-36); 21-item Depression Anxiety Stress Scale (DASS-21), a measure of negative emotional symptomsPatients in cohort 1 received treatment as usual. Patients in cohort 2 completed the WHO-5 every second day but did not receive feedback (scores and a graph with explanation) until the final therapy day, when they could discuss their scores during the group session. Patients in cohort 3 completed the WHO-5 every second day and received the same WHO-5 feedback from their therapists midway through treatment (day 5) and on the final day, also with opportunities to discuss scores. Therapists were not given specific instructions on use of feedback. Patients in all cohorts also completed the DASS-21 and SF-36 at admission and discharge.No effect of feedback on WHO-5 scores was noted. For patients “not on track,” feedback was significantly associated with decreased depressive symptoms (DASS-21) and the vitality and role emotion subscales of the SF-36 but not with any other subscale score. It was later noted that after treatment, “on-track” patients in cohort 3 were significantly less likely than “on-track” patients in cohort 2 to be readmitted.
 Puschner et al., 2009 (54)RCT264 adults receiving inpatient treatment at a German psychiatric hospital OQ-45, German versionClinicians were randomly allocated to an experimental or a control group. All patients completed German version of the OQ-45 at intake, every week thereafter, and at discharge. In the experimental group, patients and clinicians received summary information 1 or 2 days after PROM completion. This information consisted of graphs, text with treatment recommendations and possible alert messages, and encouragement for patients and clinicians to discuss the results. However, no guidelines for such discussion were provided. Patients and clinicians in the control group received no feedback.No significant effect of feedback on treatment outcome was noted as measured by the OQ-45. Most patients found the feedback useful for motivation, but their views about its effectiveness were mixed. Most patients reported that they rarely discussed feedback with professionals or caregivers.
Category 5
 Anker et al., 2009 (55)RCT205 couples seeking outpatient couples therapy at a family counseling agency in Norway4-item Outcome Rating Scale (ORS) (63), derived from the OQ-45; 15-item Locke-Wallace Marital Adjustment Test (LW), covering aspects of marital functioning and satisfactionParticipants were randomly allocated to an experimental group (feedback) or a control group (treatment as usual). Participants completed the ORS and LW before the first session, the ORS before each subsequent session, and the ORS and LW 6 months after the final session. In the control group, the ORS was completed in the presence of a secretary, and results were not fed back to either participant or therapist. In the experimental group, the ORS was rated in the presence of the therapist before each session and scored immediately. Therapists were trained to incorporate into treatment the ORS feedback and associated computer-generated treatment and progress feedback. They were also advised to show the results to clients and initiate discussions, although this was not monitored. Clients also completed the Session Rating Scale (SRS) (66), a measure of the therapeutic alliance.Improvements in ORS scores were significantly greater in the experimental group than in the control group, which was maintained at 6-month follow-up.
 Harmon et al., 2007 (30)Quasi-experimental; nonrandom group allocation and a comparison group from archival data1,374 adult clients seeking treatment at a large U.S. university counseling center OQ-45Because of attrition, not all clients could be allocated randomly to the 2 intervention groups (feedback to both therapists and clients and feedback to therapists only). Archival data (N=1,445) from the same clinic and therapists served as a no-feedback control group. Clients completed the OQ-45 at intake and weekly thereafter. Before each session, the previous week’s scores were made available as feedback in the form of graphs and a color-coding system to categorize client progress. In both groups, clients considered “not on track” were further randomly allocated to either CST feedback (results of additional measures of therapeutic alliance, stages of change, and social support) or no CST feedback. Clients who received feedback and were not responding well to treatment were encouraged to discuss their concerns about lack of progress and ideas for therapy modifications. Clinicians’ reactions to the PROM feedback were not managed. Therapists who received feedback plus CST were able to consult a CST manual for treatment suggestions based on feedback data.Mean OQ-45 scores improved significantly more for the feedback groups than for the archival no-feedback control group. No significant difference was noted between the 2 intervention groups. However, CST feedback (in addition to PROM feedback to the therapist only or to the therapist and client) resulted in significantly improved outcomes, compared with feedback without CST. Clients considered “not on track” received significantly more sessions in the feedback conditions than clients in the control group.
 Hawkins et al., 2004 (8)RCT201 adults seeking outpatient psychotherapy services at a U.S. hospital-based clinicOQ-45Clients were nonrandomly assigned to therapists on the basis of therapist availability, but clients were subsequently assigned randomly to 1 of 2 treatment conditions (feedback to both therapist and client or feedback to therapist only) or the control condition (treatment as usual with no PROM feedback). All clients completed the OQ-45 at intake and after each treatment session. In the feedback conditions, the previous week’s scores were made available before each session in the form of graphs and a color-coding system to categorize client progress and make treatment recommendations (similar to 46,47). However, clinicians’ reactions to the PROM feedback were not managed or monitored. In the client-therapist feedback condition, clients also received written feedback messages, and those identified as not progressing were encouraged to discuss personal concerns about their progress and potential treatment modifications. A format was available to discuss treatment progress, although interactions with patients were not monitored.The greatest improvement in OQ-45 scores was for clients in the client-therapist feedback condition, followed by therapist-only feedback and the control condition (statistically significant). For clients considered “not on track,” no significant group differences were noted, although this may have been attributable to small sample size. No significant group effects on treatment duration were noted.
 Murphy et al., 2012 (56)RCT110 adult clients at a university counseling service in IrelandORSThe ORS is typically administered in conjunction with the SRS, a measure of therapeutic alliance. The purpose was to test the effects of ORS on its own. Clients were randomly allocated to an experimental group (feedback to both therapist and client) or a no-feedback control group. All clients completed the ORS at intake and before each subsequent session. In the control group, clients completed the ORS in the presence of a researcher (except for the first administration), and neither client nor therapist received feedback on ORS scores. In the experimental group, clients completed the ORS in front of the therapist by using a software program, which instantly generated score feedback, such as in the form of progress graphs. Therapists could decide whether to react to this feedback, such as whether to discuss it with clients. Therapists received an ORS and SRS manual that offered strategies and recommendations for appropriate courses of action in response to ORS scores.Feedback resulted in significant differences for clients with anxiety issues but not for clients with depression, relationship issues, or other concerns. No effect of feedback on treatment duration was noted.
 Priebe et al., 2007 (57)RCT507 patients with severe and enduring mental illness who used community psychiatric services in 1 of 6 European countries (Germany, the Netherlands, Spain, Sweden, Switzerland, and the United Kingdom)MANSAClinicians were randomly allocated to an experimental or control group (treatment as usual). Clinicians in the experimental group implemented a manualized computer-mediated intervention. In this feedback intervention, patients rated their quality of life approximately every 2 months during routine care; ratings were followed up by questions about whether patients wanted additional support for particular domains. Patients in the control group completed the quality-of-life questionnaire before treatment and 12 months later. Other measures included satisfaction with treatment and unmet care needs.Quality-of-life scores were significantly higher for the experimental group 12 months posttreatment, despite the presence of ceiling effects in the measure. The effect size for this group difference was higher when only results of participants with a low initial score were analyzed.
 Reese et al., 2009 (58)RCTStudy 1: 74 clients at a U.S. university counseling center; study 2: 74 clients receiving individual therapy at a U.S. graduate training clinic for a marriage and family therapy master’s programORSStudy 1: clients were randomly assigned to an experimental (feedback) or control group. Clients in the control group were given the ORS at intake and the end of treatment. Responses were not analyzed by the therapist, nor were any scores made available to the therapist. In the feedback condition, clients completed the ORS at the beginning of each session and the SRS toward the end of each session. ORS graphs were generated as feedback, and general guidelines were available on how the therapist could proceed, although this was not monitored or managed. Study 2: unlike study 1, therapists rather than clients were randomly allocated to either feedback or no-feedback groups. Another difference was that clients in the control group completed the ORS at the beginning of each session. However, results were not seen by the therapists in the control condition.In both studies, clients in the experimental (feedback) group had significantly larger gains in ORS scores than clients in the control group. No significant differences in number of sessions attended were noted.
 Reese et al., 2010 (59)RCT46 heterosexual couples receiving couples therapy at a U.S. graduate training clinic (master’s program) for marriage and family therapy ORSThe study was intended as a replication of Anker et al. (55) with a U.S. sample. Couples were randomly assigned to an experimental (feedback) or control (treatment as usual) condition. All clients completed the ORS at the start of each session and the SRS at the end of each session. The feedback group received ORS graphs as feedback, and general guidelines were available on how the therapist could proceed, although this was not monitored or managed.Couples in the experimental (feedback) group made significantly greater and faster gains in ORS scores than clients in the control group.
 Schmidt et al., 2006 (60)RCT61 patients with bulimia nervosa or eating disorder not otherwise specified at a U.K. specialist eating disorder unit received guided self-help CBT6-item Short Evaluation of Eating Disorders (SEED), a self-rated measure of severity of anorexia and bulimia symptoms; 14-item Hospital Anxiety and Depression Scale, a self-rated assessment of anxiety and depression symptomsPatients were randomly assigned to an experimental (feedback) or control (no feedback) group. Patients in the feedback group received a personalized letter after initial assessment, including feedback from physical examination and blood tests. A symptom feedback form was completed collaboratively by the patient and therapist halfway through treatment, and patients also received an end-of-treatment feedback letter from their therapist. All patients completed all PROMs before allocation to groups and at the end of treatment and the SEED only at 6-month follow-up. Throughout treatment, patients in the feedback group received computerized PROM feedback every 2 weeks. Patients in the control group completed the same number of computerized assessments during treatment but did not receive any of the feedback. Feedback in the experimental group was also guided by an outcome-monitoring and feedback system, providing automated feedback about progress.Feedback did not have an effect on treatment uptake or dropout. Feedback resulted in significantly greater improvements on scores for dietary restriction but not on scores for bingeing, vomiting, or exercise.
 Simon et al., 2012 (61)RCT370 adults seeking psychotherapy services at a U.S. hospital-based outpatient clinicOQ-45Clients were randomly assigned to an experimental (feedback) or control (no feedback) condition. All clients completed the OQ-45 before each session. CST was used for clients who were “not on track” in the feedback condition; the tool provided therapists with, for example, decision trees for problem solving, treatment suggestions, and progress alerts and tools to deal with patients who were “not on track.” Therapists were instructed to present the PROM feedback to their clients, although this was not monitored.OQ-45 scores in the feedback group improved significantly more than those in the no-feedback control group (small effect size). The mean number of sessions was not significantly different between groups.
 Simon et al., 2013 (62)RCT133 adults seeking inpatient treatment at a U.S. eating disorder hospital OQ-45The procedure was identical to that of Simon et al. (61). The purpose was to extend investigations of the effect of PROM feedback to a new client population.PROM scores in the feedback group improved significantly more than those in the no-feedback control group (small effect size). Body mass index increased in both conditions, with no significant group differences.
 Slade et al., 2008 (31)Quasi-experimental; random assignment to 1 of 2 feedback types, with a comparison group from archival data1,101 adult clients in a U.S. university counseling center, compared with archival data from 2,818 clients under no-feedback and feedback conditions in the same clinic (30,46,47,49). Data reported only for patients considered “not on track”OQ-45Clients were randomly assigned to 1 of 2 conditions (feedback to both therapist and client or feedback to therapist only). Archival data from the same clinic and therapists allowed comparisons with no-feedback conditions and delayed feedback conditions. Unlike previous studies in the same clinic in which feedback was delayed by 1 week (30,46,47,49), this study used an electronic feedback system that provided instant PROM feedback. In the therapist-only feedback condition, therapists were encouraged to use feedback in treatment, but their reactions to PROM feedback were not managed or monitored. In the client-therapist feedback condition, clients also received written feedback messages, and those identified as not progressing were encouraged to discuss personal concerns about their progress and potential treatment modifications. CST feedback and decision trees were also provided to clients and therapists for clients considered “not on track.” The focus was only on patients “not on track.”No significant differences were noted between the 2 treatment conditions, but these groups showed significant improvements compared with the no-feedback group. Immediate electronic feedback did not lead to significantly larger gains in outcome scores. Clients in the no-feedback condition received significantly more treatment sessions.

ᵃ Category 1, PROMs used with no feedback provided to the clinician or patient; category 2, PROM results reported back to the clinician; category 3, PROM results reported back to the clinician and client; category 4, PROM results reported back to the clinician and client, with opportunities created for discussion; category 5, PROM results reported back to the clinician and client, with a formal procedure in which a discussion of PROMs can affect subsequent treatment


Lambert and colleagues authored ten of the articles listed in Table 1 (8,30,31,39,42,46,47,49,61,62), and all of these used the OQ-45 (9). The OQ-45 was used in four additional studies (32,45,51,54), making it the most frequently used PROM. The second most frequently used PROM was the four-item Outcome Rating Scale (ORS) (63). This measure, derived from the OQ-45, was used in four of the studies listed in Table 1 (55,56,58,59).

Category 1 functions as a baseline in the typology presented in the box above. Only two articles belonged to this category (41,42), largely because the scoping strategy outlined above searched for articles that reported on the use of PROM feedback. Although articles in this category by definition cannot provide any information on the effectiveness of PROM feedback, these two articles serve as exemplars of procedures in which PROMs are used but no feedback is provided to the clinician or client.

All category 2 studies purported to investigate the effects on patient outcomes of PROM feedback to clinicians. Six of these were randomized controlled trials (39,43–46,48), and the remaining two were quasi-experimental designs with close resemblance to the design of the other six studies (47,49). Table 2 summarizes, for each study, the reported effect of PROM feedback on PROM scores and on treatment duration. Two studies reported significant positive effects (43,44), and the remaining studies reported significantly larger improvements only for clients considered “not on track” or “at risk” (39,45–47,49) or no effect (48). Effect sizes were generally small or medium. In four of the studies that reported data on treatment duration (46–49), feedback was associated with significantly longer treatment for not-on-track clients; in three of these studies (46,48,49), feedback was also associated with significantly shorter treatment duration for on-track clients. One study reported no effect on treatment duration (39).

Table 2 Reported effects of feedback to patients of their scores on patient-reported outcome measures (PROMs) in studies identified by a scoping study

Study | Effect of feedback on PROM scoresᵃ | Treatment length
Category 2
 Berking et al., 2006 (43) | Significant (d=.47–.50) | Not reported
 Bickman et al., 2011 (44) | Significant (d=.18) | Not reported
 de Jong et al., 2012 (45) | Significant positive effect only for “not-on-track” patients and when therapists reported use of feedback | Not reported
 Lambert et al., 2001 (46) | Significant for “not-on-track” clients (d=.44); not significant for “on-track” clients | Feedback associated with significantly longer treatment for “not-on-track” clients and significantly fewer days for “on-track” clients
 Lambert et al., 2002 (47) | Significant for “not-on-track” clients (d=.40); not significant for “on-track” clients | Feedback associated with significantly longer treatment for “not-on-track” clients
 Lutz et al., 2012 (48) | Not significant | Feedback associated with significantly shorter treatment; “not-on-track” patients received longer treatment and “on-track” patients less treatment
 Probst et al., 2013 (39) | Significant for “at-risk” patients (d=.54); not significant for “on-track” patients (40) | Not significant
 Whipple et al., 2003 (49) | Significant for “not-on-track” clients (d=.70 and d=.28); not significant for “on-track” clients | Feedback associated with significantly longer treatment for “not-on-track” clients and significantly fewer days for “on-track” clients
Category 3
 Cheyne and Kinn, 2001 (50) | Not significant | No difference in number of appointments
 de Jong et al., 2014 (32) | Not significant | Not significant
 Hansson et al., 2013 (51) | Not significant | No difference in number of clinic visits
 Slade et al., 2006 (52) | Not significant | Feedback associated with significantly reduced inpatient days
Category 4
 Newnham et al., 2010 (53) | Significant only for clients “not on track” and only for some of the measures | Not applicable (10-day program)
 Puschner et al., 2009 (54) | Not significant | Not reported
Category 5
 Anker et al., 2009 (55) | Significant (d=.50) | Not reported
 Harmon et al., 2007 (30) | Both categories 2 and 5 significantly more improved than category 1 (d=.23 and d=.33, respectively); not significant for category 2 versus category 5 | Feedback associated with significantly longer treatment for “not-on-track” clients
 Hawkins et al., 2004 (8) | Category 5 significantly more improved than both categories 2 and 1 (η²=.02 and η²=.04, respectively); categories 2 and 5 combined significantly more improved than category 1 (η²=.02) | Not significant
 Murphy et al., 2012 (56) | Significant for only a subgroup of the sample | Not significant
 Priebe et al., 2007 (57) | Significant (d=.20 or d=.43) only for participants with low initial PROM scores | Not reported
 Reese et al., 2009 (58) | Significant (η²=.07 and η²=.10) | Not significant
 Reese et al., 2010 (59) | Significant (d=.81) | Not reported
 Schmidt et al., 2006 (60) | Significant for only some measures | Not significant
 Simon et al., 2012 (61) | Significant (η²=.02) | Not significant
 Simon et al., 2013 (62) | Significant (d=.30) | Not significant
 Slade et al., 2008 (31) | Categories 2 and 5 both significantly more improved than category 1 (d=.35 and d=.48, respectively); not significant for category 2 versus category 5 | Significantly more treatment sessions for category 1 (control group)

ᵃ The following conventional cutoff values were used to interpret effect sizes: small, d>.20 and η²>.01; medium, d>.50 and η²>.06; large, d>.80 and η²>.14.
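For readers interpreting the effect size columns of Table 2, the two indices follow their conventional definitions, which the reviewed articles are not assumed to have modified (individual studies may have computed variants, such as pre-post or adjusted effect sizes): Cohen's d expresses a between-group difference in pooled standard deviation units, and η² expresses the proportion of outcome variance attributable to group membership. As a brief reminder of the standard formulas:

\[
d = \frac{\bar{x}_{\text{feedback}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
\eta^{2} = \frac{SS_{\text{effect}}}{SS_{\text{total}}}
\]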


All four category 3 studies (32,50–52) were randomized controlled trials, and none reported a significant effect of PROM feedback to clinicians and patients compared with category 1 control conditions. One of the two category 4 studies reported a significant effect for only a subgroup of the sample and on some measures only (53), and the other category 4 study did not obtain a significant effect (54). However, although discussion of feedback was encouraged in that study (54), the authors reported that actual clinician-patient conversations about PROM feedback were rare.

Of the 11 studies in category 5, nine reported a significant effect of structured PROM feedback discussions (8,30,31,55,57–59,61,62). Two studies obtained partial effects (56,60)—namely, significant results for only a subgroup in the sample or for only some of the outcome measures. Effect sizes were generally either small or medium.

Category 5 generally contained studies with more complex designs, such as multiple experimental groups. Three studies compared the effects of category 5 feedback with the effects of category 2 and category 1 feedback (8,30,31). In all three studies, feedback resulted in significantly greater improvements in PROM scores compared with category 1. However, two studies did not find a significant difference between the effect of category 2 and category 5 feedback (30,31), whereas one did (8).

Harmon and colleagues (30) reported significantly longer treatment duration for not-on-track clients, and Slade and colleagues (31) found that clients in the control condition required significantly more treatment sessions than clients in the feedback conditions. These two studies were also the only ones that used quasi-experimental designs in category 5. The other nine studies were randomized controlled trials, and of the six that reported data on treatment duration (8,56,58,60–62), none found a significant effect of PROM feedback on treatment duration.

Discussion

This scoping study mapped previous research studies in mental health according to levels of intensity of PROM feedback use: no feedback (category 1), clinician-only feedback (category 2), feedback to clinicians and patients (category 3), encouragement of mutual PROM discussion (category 4), and availability of formalized mechanisms to guide such discussion (category 5). Previous systematic reviews concluded that evidence is lacking about whether PROM feedback to health care professionals improves outcomes, as illustrated by Boyce and Browne’s (7) review of systematic reviews. In their own systematic review, Boyce and Browne reported that only one of the 16 studies found a positive effect of PROM feedback, and six other studies found partial effects. Our review of the mental health literature indicated that of the 25 studies that provided information on the effectiveness of PROM feedback (categories 2 to 5), 11 reported significant effects with generally small to medium effect sizes, eight reported partial effects, and six reported no effects. Of the 11 studies in category 5, nine found significant effects and two found partial effects, indicating that formalized clinician-patient PROM feedback was most strongly associated with improved outcomes. Compared with categories 2 to 4, category 5 had a significantly higher proportion of studies reporting a statistically significant partial or full effect of feedback versus no effect (χ²=6.20, df=1, p<.05) and a significantly higher proportion of studies reporting a statistically significant full effect versus only a partial or no significant effect (χ²=11.40, df=1, p<.01).
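The two χ² values reported above can be reconstructed from the study counts given in this review (categories 2 to 4: 2 full, 6 partial, and 6 null results; category 5: 9 full, 2 partial, and 0 null results). The sketch below shows one way to reproduce them; it assumes a Pearson χ² test without continuity correction and uses scipy purely as an illustrative tool, since the article does not state how the statistics were computed.

```python
# Sketch reproducing the two chi-square comparisons reported in the Discussion.
# Counts are taken from the review's own tallies (categories 2-4 vs. category 5);
# the choice of scipy and of a Pearson test without Yates' correction are
# assumptions, not details stated in the article.
from scipy.stats import chi2_contingency

# Comparison 1: any (partial or full) significant effect vs. no effect
any_effect = [[8, 6],    # categories 2-4: 2 full + 6 partial = 8 with effect, 6 without
              [11, 0]]   # category 5: 9 full + 2 partial = 11 with effect, 0 without

# Comparison 2: full significant effect vs. partial or no effect
full_effect = [[2, 12],  # categories 2-4: 2 full, 12 partial or null
               [9, 2]]   # category 5: 9 full, 2 partial or null

for label, table in (("partial or full effect vs. none", any_effect),
                     ("full effect vs. partial or none", full_effect)):
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# Prints approximately chi2 = 6.20 (df=1, p < .05) and chi2 = 11.40 (df=1, p < .01),
# matching the values reported above.
```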

The likelihood of reporting significant effects, however, did not increase in a linear fashion with feedback levels. Two of the category 2 studies found a significant effect, and five found a partial effect, whereas none of the category 3 studies and only one of the two category 4 studies reported a partial effect. Two studies that examined both category 2 and category 5 experimental conditions did not find a significant difference between outcomes of these two conditions in a sample of clients at a university counseling center (30,31). In a study of hospital outpatients, Hawkins and colleagues (8), in contrast, reported improved outcomes for patients in category 5 compared with those in category 2, which could indicate that clinician-patient feedback may be more effective than clinician-only feedback only in specific settings.

With the exception of one category 1 study (42), the studies associated with the research program of Lambert and colleagues were either in category 2 or 5, and all these studies used the OQ-45. The OQ-45 can be used in conjunction with its associated clinical support tools. Previous studies applied clinical support tools with not-on-track patients, which resulted in better treatment outcomes compared with use of patient progress feedback with the OQ-45 only (64). Only one study applied clinical support tools also for patients on track to recovery and found that this did not substantially enhance outcomes (40). Our typology presents a unidimensional outline of intensity of PROM feedback use with clients. Within each category, additional variables were associated with positive therapeutic outcomes, thus creating variability of results within each category of feedback intensity. A formalized structure maximizes the likelihood that feedback is discussed with clients, which appears to drive the beneficial results of PROM use in studies included in category 5. Other aspects of procedural formalization may also be relevant, such as presence of computerized support tools (64), frequency of feedback (44), and whether PROMs are discussed among clinicians (65).

The lack of a feedback effect in studies included in categories 3 and 4 is somewhat surprising but could be related to procedural variations. Newnham and colleagues (53) speculated about whether their delivery of feedback during group therapy may have been qualitatively different from feedback delivered during individual client-clinician interactions. Therapists’ commitment to using PROMs was also found to be related to effectiveness of feedback (45), and the lack of a feedback effect in the other category 4 study (54) may thus be linked to the reportedly low frequency of therapist-initiated PROM discussions in that study. Finally, the feedback effects in the studies by Lambert and colleagues included in category 2 were largely related to clients considered “not on track” (64). With the exception of two studies (32,53), studies in categories 3 and 4 did not report analyses by subgroups, which may have revealed some partial feedback effects.

The ORS was the second most frequently used PROM. Three studies reported significant effects of category 5 feedback (55,58,59), and one study reported partial effects (56). The ORS is rarely administered on its own; it is typically used together with the Session Rating Scale (SRS) (66), which assesses the therapeutic alliance between client and clinician. Of the four studies that used the ORS, only one did so without also using the SRS (56). The fact that this study noted only a partial effect may indicate that elements in addition to PROM feedback are responsible for positive therapeutic outcomes.

Feedback is an integral part of meta-therapeutic dialogue, which, in addition to PROMs, often includes assessment of client needs and preferences and of the therapeutic alliance (67). Although the effects of PROM feedback may be difficult to disentangle from the effects of other aspects of such dialogue-directed approaches, qualitative reports explicitly point to positive experiences of PROM feedback. Cheyne and Kinn (50) did not obtain a significant effect in their study involving category 3 PROM feedback, which may have resulted from their small sample. In another article, however, these authors reported extensively on counselors’ positive observations during discussions of PROM scores (36). Counselors found that the Schedule for the Evaluation of Individual Quality of Life (68) functioned well as an aid to client reflection and as a means of enhancing the therapeutic alliance. Similarly, Sundet (35) reported that completing items on the ORS may trigger very specific reactions, enhancing client-therapist dialogue by initiating, directing, or focusing conversations.

Because of the lack of uniform terms to describe the approach of providing or discussing PROM feedback, the scoping method was chosen to map the field and inform our typology. Most of the articles were not obtained through database searches but through extensive iterative searches of citations and reference lists, manual searches, and searches for specific authors. However, because of the tendency of the scoping method to focus on breadth rather than depth, some relevant articles may have been missed. Unlike previous reviews (7), our search was not limited to articles published in English. Two German-language articles were included (43,48); however, articles in languages other than English and German may have been missed.

Allocation of some articles to the five categories was difficult because of unclear or incomplete reporting. In addition, category allocation was based on reported procedures and not on how PROM feedback actually occurred. Studies in the lower categories may have been de facto studies of higher categories if therapists frequently discussed PROM feedback with their clients. Similarly, studies of higher categories may have been de facto studies of lower categories, as in the case of Puschner and colleagues (54), who reported that clinician-patient discussions rarely occurred despite being planned.

This literature search identified a number of studies that used the Clinical Outcomes in Routine Evaluation instruments (69). These category 1 articles were not included because the inclusion criteria did not extend to articles that reported results from primary care settings. Future reviews may analyze the extensive literature on primary care by using the typology presented here.

Conclusions

This scoping study reviewed studies that reported on the effects of PROM feedback in mental health settings, and we propose a procedural typology of intensity of PROM feedback use. Unlike previous reviews that reported minimal effects of PROM feedback, this review, which synthesized results on the basis of a procedural typology, showed that the availability of formalized guidelines for clinician-patient discussion of PROM feedback was most strongly associated with improved therapeutic outcomes. Certainly, other variables, such as the presence of computerized support tools (64) or the frequency of feedback (44), are also related to positive therapeutic outcomes, and these can be integrated into the typology presented here as factors that affect variability of results within each category of feedback intensity.

Use of PROMs supports patient-centered care (6) because it recognizes patients as consumers who should participate actively in plans and decisions about treatment options. Qualitative reports support the use of PROM discussion, for example, for enhancing clinician-patient communication and for providing clients with mechanisms for reflective practice (36,70). However, because therapeutic approaches to discussing PROM feedback with clients tend to occur in conjunction with a general emphasis on the therapeutic alliance and meta-therapeutic dialogue, future work is required to isolate the effects of PROM feedback from such procedural correlates.

The authors are with the Auckland University of Technology, Auckland, New Zealand. Dr. Krägeloh, Prof. Billington, and Prof. Siegert are with the Department of Psychology and the Centre for Person Centred Research, and Mr. Czuba and Prof. Kersten are with the Centre for Person Centred Research (e-mail: ).

This study was funded by a contestable grant from the Faculty of Health and Environmental Sciences, Auckland University of Technology.

The authors report no financial relationships with commercial interests.

References

1 Bobbitt BL, Cate RA, Beardsley SD, et al.: Quality improvement and outcomes in the future of professional psychology: opportunities and challenges. Professional Psychology, Research and Practice 43:551–559, 2012

2 Dawson J, Doll H, Fitzpatrick R, et al.: The routine use of patient reported outcome measures in healthcare settings. BMJ 340:c186, 2010

3 Greenhalgh J, Long AF, Flynn R: The use of patient reported outcome measures in routine clinical practice: lack of impact or lack of theory? Social Science and Medicine 60:833–843, 2005

4 Marshall S, Haywood K, Fitzpatrick R: Impact of patient-reported outcome measures on routine practice: a structured review. Journal of Evaluation in Clinical Practice 12:559–568, 2006

5 Valderas JM, Kotzeva A, Espallargues M, et al.: The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Quality of Life Research 17:179–193, 2008

6 Greenhalgh J: The applications of PROs in clinical practice: what are they, do they work, and why? Quality of Life Research 18:115–123, 2009

7 Boyce MB, Browne JP: Does providing feedback on patient-reported outcomes to healthcare professionals result in better outcomes for patients? A systematic review. Quality of Life Research 22:2265–2278, 2013

8 Hawkins EJ, Lambert MJ, Vermeersch DA, et al.: The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research 14:308–327, 2004

9 Lambert MJ, Burlingame GM, Umphress V, et al.: The reliability and validity of the Outcome Questionnaire. Clinical Psychology and Psychotherapy 3:249–258, 1996

10 Anthony WA: Recovery from mental illness: the guiding vision of the mental health system in the 1990s. Psychosocial Rehabilitation Journal 16:11–23, 1993

11 Deegan PE: Recovery as a self-directed process of healing and transformation. Occupational Therapy in Mental Health 17:5–21, 2002

12 Arksey H, O’Malley L: Scoping studies: towards a methodological framework. International Journal of Social Research Methodology 8:19–32, 2005

13 Levac D, Colquhoun H, O’Brien KK: Scoping studies: advancing the methodology. Implementation Science 5:69, 2010

14 Davis K, Drey N, Gould D: What are scoping studies? A review of the nursing literature. International Journal of Nursing Studies 46:1386–1400, 2009

15 Antunes B, Harding R, Higginson IJ: Implementing patient-reported outcome measures in palliative care clinical practice: a systematic review of facilitators and barriers. Palliative Medicine 28:158–175, 2014

16 Wilczynski NL, Haynes RB: Optimal search strategies for identifying mental health content in MEDLINE: an analytic survey. Annals of General Psychiatry 5:4, 2006

17 Burton L-J, Tyson S, McGovern A: Staff perceptions of using outcome measures in stroke rehabilitation. Disability and Rehabilitation 35:828–834, 2013

18 Coombs T, Stapley K, Pirkis J: The multiple uses of routine mental health outcome measures in Australia and New Zealand: experiences from the field. Australasian Psychiatry 19:247–253, 2011

19 Delaffon V, Anwar Z, Noushad F, et al.: Use of Health of the Nation Outcome Scales in psychiatry. Advances in Psychiatric Treatment 18:173–179, 2012

20 Greenhalgh J, Flynn R, Long AF, et al.: Tacit and encoded knowledge in the use of standardised outcome measures in multidisciplinary team decision making: a case study of in-patient neurorehabilitation. Social Science and Medicine 67:183–194, 2008

21 Hatfield DR, Ogles BM: The use of outcome measures by psychologists in clinical practice. Professional Psychology, Research and Practice 35:485–491, 2004

22 James M, Kehoe R: Using the Health of the Nation Outcome Scales in clinical practice. Psychiatric Bulletin 23:536–538, 1999

23 Patterson P, Matthey S, Baker M: Using mental health outcome measures in everyday clinical practice. Australasian Psychiatry 14:133–136, 2006

24 Prabhu R, Oakley Browne M: The use of the Health of the Nation Outcome Scale in an outreach rehabilitation program. Australasian Psychiatry 16:195–199, 2008

25 Rey JM, Grayson D, Mojarrad T, et al.: Changes in the rate of diagnosis of major depression in adolescents following routine use of a depression rating scale. Australian and New Zealand Journal of Psychiatry 36:229–233, 2002

26 Skinner A, Turner-Stokes L: The use of standardized outcome measures in rehabilitation centres in the UK. Clinical Rehabilitation 20:609–615, 2006

27 Stevens AM, Gwilliam B, A’hern R, et al.: Experience in the use of the palliative care outcome scale. Supportive Care in Cancer 13:1027–1034, 2005

28 Tavabie JA, Tavabie OD: Improving care in depression: qualitative study investigating the effects of using a mental health questionnaire. Quality in Primary Care 17:251–261, 2009

29 Sprangers MAG, Hall P, Morisky DE, et al.: Using patient-reported measurement to pave the path towards personalized medicine. Quality of Life Research 22:2631–2637, 2013

30 Harmon SC, Lambert MJ, Smart DM, et al.: Enhancing outcome for potential treatment failures: therapist-client feedback and clinical support tools. Psychotherapy Research 17:379–392, 2007

31 Slade K, Lambert MJ, Harmon SC, et al.: Improving psychotherapy outcome: the use of immediate electronic feedback and revised clinical support tools. Clinical Psychology and Psychotherapy 15:287–303, 2008

32 de Jong K, Timman R, Hakkaart-van Roijen L, et al.: The effect of outcome monitoring feedback to clinicians and patients in short and long-term psychotherapy: a randomized clinical trial. Psychotherapy Research 24:629–639, 2014

33 Asay TP, Lambert MJ, Gregersen AT, et al.: Using patient-focused research in evaluating treatment outcome in private practice. Journal of Clinical Psychology 58:1213–1225, 2002

34 Oades L, Deane F, Crowe T, et al.: Collaborative recovery: an integrative model for working with individuals who experience chronic and recurring mental illness. Australasian Psychiatry 13:279–284, 2005

35 Sundet R: Postmodern-oriented practices and patient-focused research: possibilities and hazards. Australian and New Zealand Journal of Family Therapy 33:299–308, 2012

36 Cheyne A, Kinn S: Counsellors' perspectives on the use of the Schedule for the Evaluation of Individual Quality of Life (SEIQoL) in an alcohol counselling setting. British Journal of Guidance and Counselling 29:35–46, 2001

37 Lambert MJ, Whipple JL, Bishop MJ, et al.: Comparison of empirically-derived and rationally-derived methods for identifying patients at risk for treatment failure. Clinical Psychology and Psychotherapy 9:149–164, 2002

38 Halford WK, Hayes S, Christensen A, et al.: Toward making progress feedback an effective common factor in couple therapy. Behavior Therapy 43:49–60, 2012

39 Probst T, Lambert MJ, Loew TH, et al.: Feedback on patient progress and clinical support tools for therapists: improved outcome for patients at risk of treatment failure in psychosomatic in-patient therapy under the conditions of routine practice. Journal of Psychosomatic Research 75:255–261, 2013

40 Probst T, Lambert MJ, Dahlbender RW, et al.: Providing patient progress feedback and clinical support tools to therapists: is the therapeutic process of patients on-track to recovery enhanced in psychosomatic in-patient therapy under the conditions of routine practice? Journal of Psychosomatic Research 76:477–484, 2014

41 Christensen A, Atkins DC, Berns S, et al.: Traditional versus integrative behavioral couple therapy for significantly and chronically distressed married couples. Journal of Consulting and Clinical Psychology 72:176–191, 2004

42 Hannan C, Lambert MJ, Harmon C, et al.: A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology 61:155–163, 2005

43 Berking M, Orth U, Lutz W: How effective is systematic feedback of treatment progress to the therapist? An empirical study in a cognitive-behavioural-oriented inpatient setting [in German]. Zeitschrift für Klinische Psychologie und Psychotherapie 35:21–29, 2006

44 Bickman L, Kelley SD, Breda C, et al.: Effects of routine feedback to clinicians on mental health outcomes of youths: results of a randomized trial. Psychiatric Services 62:1423–1429, 2011

45 de Jong K, van Sluis P, Nugter MA, et al.: Understanding the differential impact of outcome monitoring: therapist variables that moderate feedback effects in a randomized clinical trial. Psychotherapy Research 22:464–474, 2012

46 Lambert MJ, Whipple JL, Smart DW, et al.: The effects of providing therapists with feedback on patient progress during psychotherapy: are outcomes enhanced? Psychotherapy Research 11:49–68, 2001

47 Lambert MJ, Whipple JL, Vermeersch DA, et al.: Enhancing psychotherapy outcomes via providing feedback on client progress: a replication. Clinical Psychology and Psychotherapy 9:91–103, 2002

48 Lutz W, Wittmann WW, Böhnke JR, et al.: Results from the pilot project of the Techniker Krankenkasse (TK) (quality monitoring in outpatient psychotherapy: the evaluators’ perspective) [in German]. Psychotherapie, Psychosomatik, Medizinische Psychologie 62:413–417, 2012

49 Whipple JL, Lambert MJ, Vermeersch DA, et al.: Improving the effects of psychotherapy: the use of early identification of treatment failure and problem-solving strategies in routine practice. Journal of Counseling Psychology 50:59–68, 2003

50 Cheyne A, Kinn S: A pilot study for a randomised controlled trial of the use of the Schedule for the Evaluation of Individual Quality of Life (SEIQoL) in an alcohol counselling setting. Addiction Research and Theory 9:165–178, 2001

51 Hansson H, Rundberg J, Österling A, et al.: Intervention with feedback using Outcome Questionnaire 45 (OQ-45) in a Swedish psychiatric outpatient population: a randomized controlled trial. Nordic Journal of Psychiatry 67:274–281, 2013

52 Slade M, McCrone P, Kuipers E, et al.: Use of standardised outcome measures in adult mental health services: randomised controlled trial. British Journal of Psychiatry 189:330–336, 2006

53 Newnham EA, Hooke GR, Page AC: Progress monitoring and feedback in psychiatric care reduces depressive symptoms. Journal of Affective Disorders 127:139–146, 2010

54 Puschner B, Schöfer D, Knaup C, et al.: Outcome management in in-patient psychiatric care. Acta Psychiatrica Scandinavica 120:308–319, 2009

55 Anker MG, Duncan BL, Sparks JA: Using client feedback to improve couple therapy outcomes: a randomized clinical trial in a naturalistic setting. Journal of Consulting and Clinical Psychology 77:693–704, 2009

56 Murphy KP, Rashleigh CM, Timulak L: The relationship between progress feedback and therapeutic outcome in student counselling: a randomised control trial. Counselling Psychology Quarterly 25:1–18, 2012

57 Priebe S, McCabe R, Bullenkamp J, et al.: Structured patient-clinician communication and 1-year outcome in community mental healthcare: cluster randomised controlled trial. British Journal of Psychiatry 191:420–426, 2007

58 Reese RJ, Norsworthy LA, Rowlands SR: Does a continuous feedback system improve psychotherapy outcome? Psychotherapy 46:418–431, 2009

59 Reese RJ, Toland MD, Slone NC, et al.: Effect of client feedback on couple psychotherapy outcomes. Psychotherapy 47:616–630, 2010

60 Schmidt U, Landau S, Pombo-Carril MG, et al.: Does personalized feedback improve the outcome of cognitive-behavioural guided self-care in bulimia nervosa? A preliminary randomized controlled trial. British Journal of Clinical Psychology 45:111–121, 2006

61 Simon W, Lambert MJ, Harris MW, et al.: Providing patient progress information and clinical support tools to therapists: effects on patients at risk of treatment failure. Psychotherapy Research 22:638–647, 2012

62 Simon W, Lambert MJ, Busath G, et al.: Effects of providing patient progress feedback and clinical support tools to psychotherapists in an inpatient eating disorders treatment program: a randomized controlled study. Psychotherapy Research 23:287–300, 2013

63 Miller SD, Duncan BL, Brown J, et al.: The Outcome Rating Scale: a preliminary study of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy 2:91–100, 2003

64 Shimokawa K, Lambert MJ, Smart DW: Enhancing treatment outcome of patients at risk of treatment failure: meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology 78:298–311, 2010

65 Reese RJ, Usher EL, Bowman DC, et al.: Using client feedback in psychotherapy training: an analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology 3:157–168, 2009

66 Duncan BL, Miller SD, Sparks JA, et al.: The Session Rating Scale: preliminary psychometric properties of a “working” alliance measure. Journal of Brief Therapy 3:3–12, 2003

67 Bowens M, Cooper M: Development of a client feedback tool: a qualitative study of therapists’ experiences of using the Therapy Personalisation Forms. European Journal of Psychotherapy and Counselling 14:47–62, 2012

68 Joyce CRB, Hickey A, McGee HM, et al.: A theory-based method for the evaluation of individual quality of life: the SEIQoL. Quality of Life Research 12:275–280, 2003

69 Evans C, Mellor-Clark J, Margison F, et al.: CORE: Clinical Outcomes in Routine Evaluation. Journal of Mental Health 9:247–255, 2000

70 Sundet R: Collaboration: family and therapist perspectives of helpful therapy. Journal of Marital and Family Therapy 37:236–249, 2011