A series of high-quality research studies has examined case management (1,2). Overall, these studies show case management to be equal or superior to standard care. However, the relative advantage conferred by case management has not been found consistently over time, and several British studies in the past five years have found no benefits (3,4,5).
A problem in interpreting these studies is a lack of certainty about what case management involves (6). The functions and workings of intensive case managers in the United Kingdom need to be better defined. This study aimed to identify a set of categories with which to describe the clinical work practice of intensive case managers. Many studies have looked at the effectiveness of case management and variants of this approach to care, which are referred to as intensive case management and assertive community treatment. Studies of the relative effectiveness of different approaches have been done without a consensus about what each approach entails. To describe further the similarities and differences in treatment models, we sought to create categories to define the different routine activities of care workers in different treatment programs.
Defining models is invariably a difficult and complex task (7), particularly for social programs. Once a model is defined, its components must be operationalized if model fidelity is to be measured. In operationalizing the components of a model, it is essential to record what actually happens. However, the reduction of complex, multidisciplinary practice into a unified practice model is far from straightforward. Team members from differing professional backgrounds hold a range of beliefs about current health care practice. Client groups have become increasingly powerful in shaping service models. The perspectives of managers and planners may differ further still. The involvement of practitioners in producing a set of categories to define practice is essential, and their commitment and cooperation are vital if a workable set of practice categories is to be produced and data collected.
This paper reports the essential first step of developing categories for describing actual clinical practice. The understanding of practice that can be obtained using these categories is an essential foundation on which detailed measurement of model fidelity may be grounded.
St. George's Mental Health Services in South London, United Kingdom, established an experimental intensive case manager service in 1994 (8). The new service comprised a network of eight experienced mental health professionals, including two occupational therapists and six mental health nurses. These workers were attached to four local community mental health teams, each of which had two psychiatrists and three or four community mental health nurses. The teams also had input from occupational therapists and clinical psychologists.
The service model is based on the one described by Stein and Test (9). Caseloads are restricted to 12 clients to enable case managers to have frequent contact with clients and caregivers in their homes and neighborhoods. The care program approach is used for all clients (10). Clients are involved to varying degrees in drafting and reviewing their care plans with the community mental health teams to which the intensive case managers are attached. A client is assigned to one team member, but the team has overall responsibility for a client. The intensive case manager service was evaluated as part of a four-site randomized trial that involved 708 subjects with a diagnosis of psychosis (11).
Limited caseloads enabled the intensive case managers to depart from existing working practices and to cross traditional professional boundaries, which highlighted the lack of a shared language for describing their work. Innovative working practices risked remaining hidden if staff activity was recorded in vague or inappropriate terms. The intensive case managers decided to use a computerized care programming system to manage their caseloads, which reinforced the need for a detailed set of acceptable and understandable activity categories. A Delphi process was chosen to identify a set of clinician-generated categories that could be used to classify their common interventions.
A well-executed Delphi process (12,13) provides an effective structure for group communication. It uses controlled feedback to measure group consensus while allowing repeated cycles of review. The method is well suited to learning and clarifying the insights of busy clinicians and has been used to clarify the essential components of care in schizophrenia (14).
A three-round conventional Delphi method (13) was used in this study. The Delphi process was administered by a nonclinical researcher (the first author) familiar with the case managers' work but with no responsibility for it. All eight intensive case managers participated. Each round was based on a questionnaire distributed by the researcher, who remained in the room to explain and guide the process.
In an anonymous brainstorming questionnaire, participants were asked to suggest practice categories that could be used to tailor their care programming software. Each participant listed eight or more main categories of clients' needs. A total of 78 responses were obtained, which the researcher reduced to 38 separate categories by removing exact duplicates.
In the second round, a list of the 38 categories suggested in the first round was presented, and participants were asked to rate their relative importance on a scale of 1 to 5, with 1 indicating essential; 2, very important; 3, important; 4, less important; and 5, unimportant. Participants were informed that the aim of the exercise was to arrive at a set of eight or so categories to describe the most common and important client needs and interventions to address those needs. Although additional categories were permitted, none were proposed.
In the third round, a list of the categories with both the participants' own ratings and the group's median ratings was presented. Participants were asked to re-rate each category in the light of this new information. They were asked for comments in cases in which their final rating differed by more than 2 points from the group median. The results from the Delphi process were then presented in a structured discussion group.
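The third-round feedback step can be sketched in code: each participant sees the group's median rating alongside their own and is asked to comment when the two differ by more than 2 points. This is a minimal illustration only; the ratings below are invented, not the study's data.

```python
from statistics import median

def third_round_feedback(own_ratings, all_ratings, flag_gap=2):
    """For each category, pair a participant's own rating with the group
    median and flag it for comment if it differs by more than flag_gap."""
    feedback = {}
    for category, own in own_ratings.items():
        group_median = median(all_ratings[category])
        feedback[category] = {
            "own": own,
            "median": group_median,
            "comment_requested": abs(own - group_median) > flag_gap,
        }
    return feedback

# Hypothetical second-round ratings from eight participants
# (scale: 1 = essential ... 5 = unimportant); illustrative only.
all_ratings = {
    "housing": [1, 1, 2, 1, 2, 1, 1, 2],
    "daily living skills": [2, 1, 2, 3, 2, 2, 4, 2],
}
own = {"housing": 1, "daily living skills": 5}

fb = third_round_feedback(own, all_ratings)
# "housing" is close to the median; "daily living skills" (own 5 vs. median 2)
# exceeds the 2-point gap, so a comment would be requested.
```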
Five of the eight intensive case managers took part in a semistructured discussion exercise. They were given result forms containing the 38 categories, all comments made, and each participant's own final rating for each category as well as the group's final median rating. The authors explained that the purpose of the discussion exercise was to reduce the 38 categories to a comprehensive and mutually exclusive list of ten items. The participants were divided into two groups, each of which used the Delphi results to suggest a set of eight categories describing their clinical practice; combining the two groups' suggestions produced 13 distinct categories.
Working together, the groups then used amalgamations of similar categories or expansion of existing categories to reduce the number of categories to ten, while retaining their comprehensiveness and mutual exclusiveness. Each suggestion reported to the group was discussed at length and in detail to clarify the practices it included. Although the researchers structured the thorough discussion of participants' clinical experiences, the decisions were made entirely by the participants.
A check for clinical adequacy was performed a week after the discussion group. All eight original participants were given a copy of the ten categories together with their operational definitions and detailed examples of the activities included under each category. Each was asked, "Are there any areas of work or clients' needs that you address that are not covered by these categories?"
The degree of consensus about the importance of each component was assessed on completion of the third Delphi round by noting the number of participants whose final rating was within one point of the group median.
In round 2, participants used the entire range of possible scores when rating the 38 components; the resulting median scores ranged from 1, essential, to 4, less important. A high degree of consensus was reached in participants' final Delphi ratings. Thirty-one of the 38 components (82 percent) received final ratings from at least seven of the eight participants that fell within 1 point of the median score. For 17 of these components (45 percent), all eight participants' ratings fell within 1 point of the median score.
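The consensus measure used above — the number of participants whose final rating falls within 1 point of the group median — can be sketched as follows. The category names and ratings are hypothetical examples, not the study's actual data.

```python
from statistics import median

def consensus_count(ratings, tolerance=1):
    """Number of raters whose rating is within `tolerance` of the group median."""
    m = median(ratings)
    return sum(1 for r in ratings if abs(r - m) <= tolerance)

# Invented final-round ratings from eight participants for two candidate
# categories (scale: 1 = essential ... 5 = unimportant); illustrative only.
ratings_by_category = {
    "housing": [1, 1, 2, 1, 2, 1, 1, 2],      # all 8 within 1 point of median
    "medication": [1, 2, 1, 3, 1, 2, 5, 1],    # two outliers fall outside
}

for name, ratings in ratings_by_category.items():
    n = consensus_count(ratings)
    print(f"{name}: median {median(ratings)}, {n} of {len(ratings)} within 1 point")
```

In the study's terms, a category would count toward the 82 percent figure when at least seven of the eight ratings fall within 1 point of the median.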
A final set of ten categories was agreed to in the discussion group. They were housing, finance, daily living skills, criminal justice system, occupation and leisure, engagement, physical health, caregivers and significant others, specific mental health intervention or assessment, and medication.
The group also agreed on definitions of activities in each category. For example, the occupation and leisure category was defined as "Organizing, planning, or encouraging daytime structure and leisure activities; accompanying the client to mainstream leisure activities; accompanying the client to the day center; vocational planning and assistance; and helping the client build relationships." Medication was defined as "Administering depot medications, arranging adjustment or review of medication, monitoring compliance with medication, educating or negotiating with the client to enhance compliance, formally assessing side effects, and supplying medication." No participants indicated any concerns with the completeness of the categories during the adequacy check.
The Delphi approach allowed clinicians a free hand in suggesting practice-based categories and encouraged them to think broadly. It provided a structure for reducing the 78 suggestions elicited (38 after exact duplicates were removed) to a comprehensive and mutually exclusive set of ten areas of intervention to which all agreed.
Clinicians are uniquely placed to define explanatory practice categories that accommodate the perspectives of service users, staff, and managers. Such clinician-generated categories draw on clinicians' expertise and suffer less in practice from the problems associated with externally imposed categories. Practitioners do not have to hammer the round pegs of their complex clinical interventions into the square holes of often arbitrary, incomplete, and overlapping administrative categories. Categories based on clinical practice are more likely to encompass and accommodate the precise nature of that work. Consequently, the additional effort required for a monitoring exercise is minimized, and the reliability of the data collected is increased.
The case managers report that these categories provide a helpful structure to guide their care programming. Such benefits illustrate the importance of practice models that are derived from, and that match, day-to-day good clinical practice.
Our process produced a set of ten categories that have been incorporated in a helpful and easy-to-use form. Although the categories are potentially limited in that they are based on the practices of one team, every team member participated in the process, and the categories were found to be acceptable to staff when used to gather research data about clinical practice both by our participating team and at three other collaborating research sites. The categories have also made it possible to compare the activities of our intensive case managers with those of four U.S. teams in New Hampshire, whose high fidelity to assertive community treatment principles has been confirmed (15). This approach has great potential both for monitoring and evaluating the "black box" (16) of current practice and as a means for facilitating model replication.
The Delphi-based method used here represents an effective, straightforward, and time-efficient way of obtaining a workable consensus about a complex issue at the interface of clinical theory and practice. The Delphi exercise itself enabled early achievement of a group consensus because the views of each participant were equally weighted and contributions were uninhibited by group dynamics. Dominant personalities were "neutralized" by a structure that valued each contribution equally.
The semistructured discussion moved forward from this consensus to achieve a meaningful, workable set of practice-based categories. The detailed discussions revolved around participants' specific clinical experiences with real patients, which enabled the practice-based categories to be divided into operationalized subcategories. Several participants commented favorably on the productivity of this exercise. We would recommend this approach as an effective counterbalance to much of the top-down style of current information gathering.
Mr. Fiander is a research fellow and Dr. Burns is professor of community psychiatry in the department of general psychiatry at St. George's Hospital Medical School, Cranmer Terrace, Tooting, London SW17 0RE, England. Send correspondence to Dr. Burns (e-mail, firstname.lastname@example.org).