Betty J. Blythe
Single-system design is one of several names for a research methodology that allows a social work practitioner to systematically track his or her progress with a client or client unit. With increasingly rigorous applications of this methodology, practitioners can also gain knowledge about effective social work interventions, although this is a less common use of the approach.
Data collected across time allow the worker to examine the client's response to intervention. If possible, data are collected before social work intervention begins to provide an indication of the baseline level of the problem, and then one or more interventions are implemented. Continuous data collection allows the practitioner to examine the client's response to treatment and, if necessary, make modifications to the treatment plan.
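The basic A-B logic described above (a baseline phase followed by an intervention phase, with continuous measurement) can be sketched in a few lines of code. This is purely illustrative: the function name and the sample data are hypothetical, not drawn from the literature, and real practice evaluation would involve graphing and clinical judgment rather than a single summary statistic.

```python
from statistics import mean

def summarize_ab_phases(baseline, intervention):
    """Summarize a simple A-B single-system design.

    baseline: measurements taken before the intervention begins (phase A)
    intervention: measurements taken once it is under way (phase B)
    Returns the mean level in each phase and the change between them.
    """
    a_mean = mean(baseline)
    b_mean = mean(intervention)
    return {
        "baseline_mean": a_mean,
        "intervention_mean": b_mean,
        "change": b_mean - a_mean,  # negative = problem level dropped
    }

# Hypothetical data: weekly counts of a target problem behavior.
summary = summarize_ab_phases(baseline=[8, 7, 9, 8], intervention=[6, 5, 4, 3])
print(summary)
```

Here a drop from a baseline mean of 8.0 to an intervention-phase mean of 4.5 would suggest the problem behavior is declining, though the data pattern, not the means alone, should drive any practice decision.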
Single-system design offers an alternative to the unsystematic case study as a means of documenting and evaluating a client's progress (Kazdin, 1982). Other names for this methodology include time-series research, N = 1 research, idiographic research, single-organism research, single-case design, and single-subject design. Note that the “system” can be a client or a client unit.
The idea of evaluating practice with single-system design is a relatively new one in social work. In the early 1970s distinguished leaders in social work noted that the profession needed to address issues related to accountability (Briar, 1973; Newman & Turem, 1974). Briar, in particular, called on social work practitioners to evaluate whether their interventions were helping clients.
The single-system design was essentially borrowed from psychology, where it was being applied in controlled research, often in laboratory settings or in classrooms (Campbell & Stanley, 1966; Hersen & Barlow, 1976). Psychologists studying the effects of behavioral interventions on a small number of subjects (often only one subject) frequently applied this methodology in their research. For instance, O'Brien and Azrin (1973) examined the effects of different methods of inviting family members to visit three inpatients on a mental health hospital ward. Mills, Agras, Barlow, and Mills (1973) used single-system design methods to study a behavioral intervention known as response prevention with five individuals who engaged in compulsive behaviors such as hand washing. These two examples are typical of many such studies in that they evaluated highly specified interventions aimed at changing discrete behaviors, in environments that afforded considerable control over extraneous factors.
At that time, psychologists were also interested in the idea of using single-system design in clinical settings. Again, they were primarily testing well-specified, circumscribed behavioral interventions. These were the models that were available when social work practitioners began to discuss evaluating direct practice with single-system designs in the 1970s.
In retrospect, it is not surprising that early efforts to evaluate practice with single-system designs led practitioners to voice some concerns about the appropriateness of this approach. On the basis of the standards of research in behavioral psychology, students and practitioners were taught that more-rigorous designs were more desirable without really knowing whether such designs could be implemented in routine social work practice. Unfortunately, this meant that practice was made to fit the confines of rigorous research designs and was not allowed to evolve naturally. For example, the withdrawal design, which was thought to be rigorous in demonstrating that the intervention was responsible for the observed data patterns, called for withdrawing the intervention for a period of time. This practice, however, might be unethical or impossible in practice. Similarly, students were taught that observational measures (whose reliability could be determined) or standardized measures with known reliability were preferable. However, some practitioners argued that certain client problems or situations are not easily measured by such means and may be better measured with tailor-made tools. Furthermore, the more rigorous application of single-subject design called for equal phase lengths. Thus, changes in the treatment plan were constrained by the intended length of phases, a practice that is not always in line with the treatment plan or the needs of the client.
Concerns about Validity
Over time social work practitioners came to realize that attempts to force practice and decisions about practice to fit the demands of a particular research design or measurement strategy distorted practice. As they became more flexible in their teaching of single-system designs, however, some methodologists became concerned about the extent to which research designs are adapted to fit the realities of practice, about the type of designs being used, and about the measurement tools being used in some instances. Were the findings valid and reliable? Were they generalizable? Ultimately, practitioners realized that the answers to these questions depend on the goals of the practitioner. Is the practitioner using single-system design to monitor a client's progress, inform practice decisions, and systematize her or his own thinking? Or is the practitioner attempting to document the effectiveness of a particular intervention and explore the extent to which this intervention might be effective in addressing the same concerns with other clients? If the latter is the goal, then methodological issues clearly must be examined carefully. Fortunately, most practitioners are more concerned with monitoring client progress and gathering information to inform their decision making, and most scholars and teachers are more aware of that goal as they teach and write about practice evaluation with single-system methods (Proctor, 1990).
ADVANTAGES OF SINGLE-SYSTEM DESIGNS
If practitioners accept that their objective in using single-system designs is to monitor clients' progress and to inform their own practice decisions, how can these methods be helpful? Perhaps the most significant contribution of single-system designs is to provide a framework that guides the practitioner through the assessment and specification of a client's goals, implementation of interventions, and termination and follow-up. Although these are nothing more than the phases of practice, applying the components of single-system designs to these phases provides a degree of structure and some additional tools to aid in the process.
In the assessment phase, the information typically gathered by a social worker is enhanced by enlisting some of the measurement strategies suggested by single-system design methodology (Blythe & Tripodi, 1989). For example, obtaining multiple types of information from multiple sources helps to reduce bias and yields a more-accurate description of the client's situation. Applying principles of reliability to interviews can result in more-accurate information. The process of selecting and specifying measurement devices also helps the client and practitioner understand more fully all aspects of the presenting problem. Visually presenting the assessment data in a graph or other format, such as an ecomap, may focus the attention and increase the understanding of all the parties involved (Mattaini, 1993).
Goals and Intervention Plans
Perhaps the greatest potential for single-system designs to enhance practice is in the area of establishing goals and intervention plans, particularly with so-called multiproblem clients (Ivanoff, Blythe, & Tripodi, 1994). Because targets for intervention (that is, a client's goals) must be selected and carefully specified, goal setting receives more attention than it often does in routine practice. Goals must be clearly articulated so that appropriate dependent measures can be identified. Moreover, practitioners are less likely to lose sight of important goals when clients meet with a seemingly inevitable series of crises. Although it sometimes is necessary to rethink and revise goals, the framework imposed by single-system designs makes it more likely that the practitioner will make this decision in a thoughtful manner. Finally, the clear identification of dependent and independent (treatment) variables required by single-system designs reduces the chance that practitioners will confuse the two. Without such a structure, practitioners sometimes write treatment goals that are nothing more than a restatement of what they intend to teach clients as part of the intervention.
By examining the data patterns resulting from the ongoing collection of assessment data, the practitioner has systematic information to help him or her make decisions about treatment plans as well as about continuing, discontinuing, or revising intervention strategies (Berlin & Marsh, 1993; Rosen, 1992). Ideally, there are multiple sources of information that reflect the complexity of most clients' goals. Critical incidents that may explain or be associated with extreme data points or patterns can be graphed and analyzed. When all goals have been sufficiently met, the decision is likely to be termination. Finally, follow-up is facilitated when assessment measurement strategies that were used during the initial and ongoing assessment are applied to determine whether gains have been maintained.
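One simple way a practitioner might operationalize a termination decision from such data patterns is a rule of thumb comparing recent intervention-phase observations with the baseline level. The sketch below is a hypothetical decision aid, not a published procedure; it assumes a problem measure on which lower scores mean improvement, and the cutoff of three consecutive improved observations is an arbitrary choice for illustration.

```python
from statistics import mean

def goal_met(baseline, intervention, k=3):
    """Illustrative decision aid for an A-B design.

    Treats the goal as met when the last k intervention-phase
    observations all fall below the baseline mean (assuming lower
    scores indicate improvement). Real decisions would also weigh
    trends, critical incidents, and clinical judgment.
    """
    threshold = mean(baseline)          # baseline level of the problem
    recent = intervention[-k:]          # most recent k observations
    return len(recent) == k and all(x < threshold for x in recent)

# Hypothetical weekly problem-behavior counts.
print(goal_met([8, 7, 9, 8], [7, 6, 4, 3, 3]))  # → True
```

A rule like this systematizes the practitioner's reading of the graph but should supplement, never replace, discussion with the client about whether the goal has genuinely been reached.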
Another advantage, from a treatment perspective, is that applying single-case methodology in practice underscores the practitioner's responsibility for goal attainment. The practitioner is actually monitoring his or her ability to help clients achieve goals. At the same time, the client is involved in identifying target problems and goals as assessment is carried out and in evaluating progress as treatment plans are implemented. This type of collegial relationship promotes the client's engagement and involvement in treatment (Blythe, 1990).
Debates about Single-System Designs
Despite the enthusiasm about the advantages of applying single-system designs in practice, several debates about this approach have taken place. These debates can be divided into three general areas, although there is overlap in the arguments. Although some of this discussion suggests that practice simply is not improved by single-system designs and other quantitative research tools, most arguments contend that other research methods are better suited for practice.
Distortions in Practice
The first area of debate relates to the charge that single-system methods distort practice (Kagle, 1982; Thomas, 1978). To be sure, when treatment plans are dictated by the need to withhold, delay, or present treatments at certain times or for certain intervals, practice is distorted. Early teaching and writing on single-system designs presented the methodology as rigid and as making some demands on practice. Indeed, scholars such as Thomas and Kagle helped sharpen the profession's thinking about implementing single-system research methods in practice. Unfortunately, the more recent opponents of single-system design do not appear to acknowledge or perhaps understand that the methodology is now viewed as highly flexible and as unfolding as practice decisions are made. In the current view, implementing single-case methods and applying other research tools in practice do not distort practice because it is the practice actions and needs that dictate research designs and the selection of research methods and tools (Collins, Kayser, & Platt, 1994; Corcoran, 1993; Gingerich, 1990; Proctor, 1990).
The remaining two areas of debate about implementing single-system designs in practice take issue with the methodology and suggest that other methods or worldviews are more appropriate. The arguments are based on slightly different grounds.
Male voice versus female voice.
The first, posed by Davis (1985), asserts that implementing single-system methods in practice is to speak in the “male voice.” Davis's premise is that the components of empirical clinical practice, such as monitoring a client's progress with quantitative indicators and objective instruments, are incompatible with the female view of the world. Davis argued that the female voice should be heard and valued in discussions of practice, especially those related to evaluating practice.
Quantitative versus qualitative methodology.
At the risk of oversimplification, the other set of debates about implementing single-system methods in practice contends that practice is much too complex to be examined in cause–effect terms and with quantitative measures and single-system designs (Heineman, 1981; Ruckdeschel & Farris, 1981; Witkin, 1992). For the most part, the alternatives suggested by these critics for integrating research and social work practice are vague. For example, Witkin suggested that such integration should “include naturalistic, interpretive, and critical approaches” (p. 267).
The suggestions of these critics typically involve qualitative research methodology. Notably, some scholars have described specific means of applying qualitative methods to practice, sometimes together with single-system methods (Millstein, 1993; Rodwell, 1990). Much work remains, however, to move this line of thinking from the realm of suggestion to that of concrete proposals with procedural descriptions, case examples, and teaching materials.
Early in the profession's efforts to integrate practice and research, some innovative curriculum designs were developed to bring together practice, research, and field components (Downs & Robertson, 1983; Gottlieb & Richey, 1980). Anecdotal data suggest that, rather than being bolstered and improved, these innovations have been diminished, partly because of other curriculum demands and faculty workload concerns (Blythe & Rodgers, 1993). Renewed attention and energy must be directed toward identifying creative, realistic, and flexible methods for preparing social workers to apply single-system methods to their practice. In particular, special attention must be paid to the means of involving field education in this challenge, so that students have actual experiences in evaluating their practice with single-system methods.
These proposals that other methods of research inquiry can enhance practice must be fully articulated before any suggestions can actually be implemented. Practitioners' experience in applying single-system research methods to practice is instructive. Clearly, these new initiatives will take considerable time to develop, refine, and disseminate. Ideally, the proponents of these innovations will identify and emphasize the congruency and complementarity with other forms of integrating research and practice, rather than emphasize the differences.
Finally, the profession must continue to develop institutional supports for practitioners who are attempting to integrate research and practice. Computer hardware and software technology, administrative data management systems, and peer review are just a few examples of the ways in which agency administrators can structure supports for these undertakings (Bronson & Blythe, 1987; Mattaini, 1993; Mutschler & Jayaratne, 1993; Nurius & Hudson, 1993).
The social work profession has come a long way in its understanding of how to integrate research methods, especially single-system designs, into practice. It is time to move beyond the debates and the rhetoric. Practitioners must work together to advance their understanding of and ability to apply this technology, as well as other research methods, in their work with clients. Clearly, social work practice and clients will benefit from these endeavors.
REFERENCES
Berlin, S. B., & Marsh, J. C. (1993). Informing practice decisions. New York: Macmillan.
Blythe, B. J. (1990). Applying practice research methods in intensive family preservation services. In J. K. Whittaker, J. Kinney, E. M. Tracy, & C. Booth (Eds.), Reaching high-risk families: Intensive family preservation in human services (pp. 147–164). New York: Aldine de Gruyter.
Blythe, B. J., & Rodgers, A. Y. (1993). Evaluating our own practice: Past, present, and future trends. Journal of Social Service Research, 18, 101–119.
Blythe, B. J., & Tripodi, T. (1989). Measurement in direct social work practice. Newbury Park, CA: Sage Publications.
Briar, S. (1973). Effective social work intervention in direct practice: Implications for education. In Facing the challenge: Plenary session papers from the 19th Annual Program Meeting. New York: Council on Social Work Education.
Bronson, D. E., & Blythe, B. J. (1987). Computer support for single-case evaluation of practice. Social Work Research & Abstracts, 23(3), 10–13.
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Collins, P. M., Kayser, K., & Platt, S. (1994). Conjoint marital therapy: A practitioner's approach to single-system evaluation. Families in Society, 75, 131–141.
Corcoran, K. J. (1993). Practice evaluation: Problems and promises of single-system designs in clinical practice. Journal of Social Service Research, 18, 147–159.
Davis, L. V. (1985). Female and male voices in social work. Social Work, 30, 106–113.
Downs, W. R., & Robertson, J. F. (1983). Preserving the social work perspective in the research sequence: State of the art and program models for the 1980s. Journal of Education for Social Work, 19(1), 87–95.
Gingerich, W. (1990). Rethinking single-case evaluation. In L. Videka-Sherman & W. J. Reid (Eds.), Advances in clinical social work research (pp. 11–24). Silver Spring, MD: NASW Press.
Gottlieb, N., & Richey, C. A. (1980). Education of human service practitioners for clinical evaluation. In R. W. Weinbach & A. Rubin (Eds.), Teaching social work research: Alternative programs and strategies (pp. 3–12). New York: Council on Social Work Education.
Heineman, M. (1981). The obsolete scientific imperative in social work research. Social Service Review, 55, 371–396.
Hersen, M., & Barlow, D. H. (1976). Single case experimental designs: Strategies for studying behavior change. New York: Pergamon Press.
Ivanoff, A., Blythe, B. J., & Tripodi, T. (1994). Involuntary clients in social work practice: A research-based approach. New York: Aldine de Gruyter.
Kagle, J. D. (1982). Using single-subject measures in practice decisions: Systematic documentation or distortion? Arete, 7, 1–9.
Kazdin, A. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.
Mattaini, M. A. (1993). More than a thousand words: Graphics for clinical practice. Washington, DC: NASW Press.
Mills, H. L., Agras, S., Barlow, D. H., & Mills, J. R. (1973). Compulsive rituals treated by response prevention. Archives of General Psychiatry, 28, 524–529.
Millstein, K. H. (1993). Building knowledge from the study of cases: A reflective model for practitioner self-evaluation. Journal of Teaching in Social Work, 8, 255–279.
Mutschler, E., & Jayaratne, S. (1993). Integration of information technology and single-system designs. Journal of Social Service Research, 18, 121–145.
Newman, E., & Turem, J. (1974). The crisis of accountability. Social Work, 19, 5–16.
Nurius, P. S., & Hudson, W. W. (1993). Human services practice, evaluation, and computers: A practical guide for today and beyond. Pacific Grove, CA: Brooks/Cole.
O'Brien, F., & Azrin, N. H. (1973). Interaction-priming: A method of reinstating patient-family relationships. Behaviour Research & Therapy, 11, 133–136.
Proctor, E. K. (1990). Evaluating clinical practice: Issues of purpose and design. Social Work Research & Abstracts, 26(1), 32–40.
Rodwell, M. K. (1990, March). Naturalistic inquiry: The research link to social work practice? Paper presented at the 36th Annual Program Meeting of the Council on Social Work Education, Reno, NV.
Rosen, A. (1992). Facilitating clinical decision-making and evaluation. Families in Society, 73, 522–530.
Ruckdeschel, R. A., & Farris, B. E. (1981). Assessing practice: A critical look at single-case design. Social Casework, 62, 413–419.
Thomas, E. J. (1978). Research and service in single-case experimentation: Conflicts and choices. Social Work Research & Abstracts, 14(4), 20–31.
Witkin, S. L. (1992). Should undergraduate and graduate social work students be taught to conduct empirically-based practice? Journal of Social Work Education, 28, 265–268.
FURTHER READING
Alter, C., & Evans, W. (1990). Evaluating your practice: A guide to self-assessment. New York: Springer.
Bloom, M., Fischer, J., & Orme, J. (1995). Evaluating practice: Guidelines for the accountable professional (2nd ed.). Needham Heights, MA: Allyn & Bacon.
Fischer, J., & Corcoran, K. (1994). Measures for clinical practice. New York: Free Press.
Betty J. Blythe, PhD, is professor, Boston College, Graduate School of Social Work, Chestnut Hill, MA 02167.
For further information see
Clinical Social Work; Direct Practice Overview; Experimental and Quasi-Experimental Design; Goal Setting and Intervention Planning; Person-in-Environment; Recording; Research Overview; Social Work Education; Social Work Practice: Theoretical Base.
Key Words: social work practice; social work research