Intervention Research

Jack Rothman

Intervention research entails the empirical study of professional intervention behavior in the human services. It may involve acquiring knowledge about the process and context of intervention, or it may focus on creating or enhancing the fundamental methods and tools of intervention. It is an emergent area of research that is not as highly developed as more-established methodologies. Rothman and Thomas's (1994) book, Intervention Research, which drew together the writings and experiences of the major researchers in the field, is used as a primary resource for this entry.

FACETS OF INTERVENTION RESEARCH

There are three facets of intervention research: knowledge development, knowledge utilization, and design and development (Thomas & Rothman, 1994). Knowledge development overlaps with social and behavioral research in general and applied research in particular. Its purpose, however, is more specialized or distinct in that it seeks to acquire knowledge that is practical and instrumental and closely related to the problems of intervention. It can provide concepts and theories that contribute to design and development, which is at the heart of intervention research. Such knowledge may concern the characteristics of specific clients, the practices of certain types of agencies, or behavioral patterns in individual families with troubled children. Gathering such information can be highly useful later, when the social worker is shaping strategies of intervention. Periodicals such as the Journal of Social Service Research and Research on Social Work Practice publish numerous studies of this nature.

Knowledge utilization involves the purposive transformation of theory and data from social science research into a form that has strong implications for intervention, both for practice and policy. The journal Knowledge: Creation, Diffusion, Utilization reflects this approach and has been a forum for those who are engaged in applying social science knowledge to practical ends. Social workers have been actively involved in this area (Grasso & Epstein, 1992), as have professionals from a wide range of other disciplines (see Glaser, Abelson, & Garrison, 1983).

Design and development is geared specifically to the production of intervention technology. In using design and development, researchers seek and use a systematic methodology for constructing effective operational tools of practice. Because of its crucial place in bringing into being new practice technology, design and development is the focus of the discussion presented here.

HISTORICAL CURRENTS

According to Fawcett et al. (1994), five interrelated research streams have fed into design and development. These streams include the experimental social innovation paradigm of Fairweather (1967), which uses quasi-experimental designs to evaluate the effects of social programs; social research and development approaches (Rothman, 1980), which draw on the developmental engineering model of the physical sciences; developmental research (Thomas, 1984), which uses applied research and empirically based practice models; model development research (Paine, Bellamy, & Wilcox, 1984), which emphasizes the movement from innovation to standard practice; and behavioral community research (Fawcett, 1990), which is based on behavioral analysis and community psychology. These largely independent but synergistic strands of activity have been consolidated in an integrated paradigm (Thomas & Rothman, 1994). A description of the structure of this paradigm provides a useful overview of design and development, including its major components and the scholars who have contributed directly and indirectly to them.

PHASES OF DESIGN AND DEVELOPMENT

The integrated formulation comprises six phases that are sequential and cumulative, one building on the other. The phases and some key activities in each area are as follows:

Problem analysis and project planning: identifying and clarifying a human problem of concern and potential interventive approaches for dealing with it; determining the feasibility of the project in terms of technical, human, and social considerations; and preparing an initial project plan that considers staffing, funding, study methods and phases, and so forth (Alkin, 1985; Fairweather & Tornatzky, 1977; Fawcett, 1991; Haug, 1971; Sayles & Chandler, 1971; Seekins & Fawcett, 1987; Twain, 1975).

Information gathering and synthesis: identifying existing relevant information (empirical research, practice experience, innovative programs, and the like), delineating sources of information and devising retrieval procedures, gathering original data to supplement and specify existing information, and synthesizing the overall information base (Cook & Leviton, 1980; Cooper, 1984; Fischer, 1990; Jackson, 1978; Light & Pillemer, 1982; Nurius & Yeaton, 1987; Rothman, 1974; Slavin, 1986).

Design: determining who will participate in the design process, using the acquired information base and disciplined creativity to shape potential interventive solutions, selecting a site for pilot development work, and choosing a high-potential solution strategy or model for intervention and procedures (Havelock, 1969; Mullen, 1978, 1988; Rothman, 1978; Thomas, 1978a; Weiss, 1977; Zaltman, 1979).

Early development and pilot testing: devising a plan for the trial use of the selected intervention in the pilot test site; formulating appropriate procedures and instruments; conducting the pilot test; and, on the basis of field-generated data, refining the intervention design and retesting it. This latter activity may require several test–refine–retest cycles (Jain, 1989; Mocniak & Hegarty, 1989; Riecken & Boruch, 1974; Rosen & Mutschler, 1982; Rossi & Freeman, 1989; Whang, Fletcher, & Fawcett, 1982).

Evaluation and advanced development: establishing a plan for more extensive and controlled field testing of the intervention (using a broad sample of clients, types of agencies, staff competencies, and so on), selecting an evaluation approach and appropriate methods and procedures and pilot testing the evaluation plan, conducting a systematic evaluation of the intervention in its developed form, and making final refinements to the intervention in preparation for its use in standard practice (Billingsley, White, & Munson, 1980; Fairweather, 1967; Gottman & Markman, 1978; McMahon, 1987; Reid, 1983; Thomas, 1985; Tripodi, 1983).

Dissemination: determining potential users of the intervention, assessing their needs and interests, and delineating channels for communicating with them; preparing and packaging the intervention in user-ready form; developing a dissemination plan that includes the media and the message to be used; pilot testing the plan in a "test market"; revising it as appropriate; and promoting its wide-scale diffusion to intended users and beneficiaries (Backer, Liberman, & Kuehnel, 1986; Balcazar, Seekins, Fawcett, & Hopkins, 1989; Emerson & Emerson, 1987; Fisher, 1983; Greenblatt, 1983; Perry & Furukawa, 1986; Rice & Rogers, 1980; Rogers & Shoemaker, 1971; Rothman, Teresa, Kay, & Morningstar, 1983).

Although the phases are linear and progressive, they overlap to some degree and have feedback loops (Fawcett et al., 1994). For example, in the design phase, one must take into account the sites to be used in pilot and development work, and in the pilot-testing phase, one shapes and reshapes the design to fit real-world circumstances. At any phase, data and experience can have implications for one or more previous phases. For example, the experience gained in development can dictate that further information gathering is necessary to reformulate the design.

A PROBLEM-SOLVING PROCESS

Design and development constitutes a comprehensive, long-term, and complex framework for constructing effective human services interventions. But the formulation can also be used in a more segmented or selective way simply to design an intervention or to diffuse an innovative practice. According to Rooney (1994), the full-scale format is appropriate

when the model developer is highly skilled, and has adequate resources, institutional support and time. There is room, however, for other [researchers] who use the paradigm more selectively to work on particular problems at particular times, while conducting other research using other methodologies. (p. 356)

This flexibility is especially beneficial for researchers who are not intensively involved in intervention research.

Outputs

Design and development produces two interlocked outputs: a practical device to be used in interventive practice and research findings related to the practice phenomenon (Reid, 1987). Although the distinctive feature of design and development is the creation of procedures for reliable practice, research procedures are used in each phase, and systematic data are acquired. Potentially useful social and behavioral knowledge continuously emerges. A critical question in the field has been, "What should be the relative balance ... between research and evaluation on the one hand and development on the other?" (Borg, 1969, p. 5). When the work is carried out in universities, it is essential for the careers of researchers that milestone academic products materialize along the various segments of the continuum, and it is feasible for them to do so when the project's plan allots time for this work.

Not a Different Research Technology

Design and development is a new research paradigm, but not one that involves a different research technology. The paradigm is complex and multifaceted, requiring the use of a series of interconnected research tasks and methods, most of which are generally used by researchers (Thomas & Rothman, 1994). The early phases may draw on methods of needs assessment and meta-analysis. The middle phases rely on field-experiment procedures and evaluation research. The final phase entails the methods of diffusion studies and market research. Projects may require the use of survey methods, case studies, model-development procedures, operations research, or quasi-experimental methods. Standard techniques, including the construction of questionnaires, interviewing, observation, and data analysis, are all grist for the intervention researcher's mill. The distinctiveness of the methodology of design and development is the sequential joining of these diverse research techniques to form a holistic pattern that is unique for each project. It is the aim of the activity and the manner in which research tools are configured that give design and development its particular character.

It is appropriate to conceive of design and development as a problem-solving process whose aim is to seek an interventive solution to some type of personal or social deficiency or defect (Andreasen & Hein, 1987). But the problem-solving procedures are consistently and firmly embedded in research techniques and perspectives. Such research-rooted problem solving entails a slower, more deliberative, rational, and controlled approach than is typical or even possible in the practice environment where operational problems arise. This programmatic, mission-oriented perspective makes design and development highly relevant to the world of practice.

METHODOLOGICAL DEVELOPMENTS

This section describes a few of the many conceptual and methodological developments in the field that are of relevance to this discussion. First, Thomas (1985) proposed the notion of developmental validity, which provides a basis for determining when an innovation is adequately advanced. The criteria include the extent to which the intervention meets intended objectives and the intervention's reliability in doing so. The methods of developmental testing that Thomas introduced are proceduralization, developmental logs, critical incidents, and failure analysis. Second, in addition to the critical incident approach, Reid (1994) used a strategy involving case studies and aggregated data. According to Reid,

An intervention is first piloted and shaped through single case studies. When a sufficient number of these studies has been accumulated, their data are aggregated and further analyzed. ... The case study/aggregation cycle can then be repeated or more rigorous designs—e.g., controlled single case or group experiments—can be conducted to provide more definitive results on the effectiveness of the intervention. (p. 245)

Third, in the area of design, Mullen (1983) formulated procedures by which practitioners can construct "personal intervention models." His method includes the fashioning of a generalization about the problem condition, specifying the limiting factors of the generalization, indicating the quality of evidence, devising a derived action guideline, and spelling out design articulations for differential application. Fourth, Thomas (1985) offered criteria for establishing design validity that provide a basis for examining the degree to which the design concept may be incomplete or incorrect.

Fifth, Fawcett (1991) and Fawcett et al. (1994) focused on developing procedures that clients themselves may use to assess their needs, determine goals, and set the "agenda" for community problem solving.

Sixth, Whittaker, Tracy, Overstreet, Mooradian, and Kapp (1994) evolved a procedure for an agency staff's intensive participation and collaboration in the design of a social-network intervention for the agency's use.

Finally, Rothman, Damron-Rodriguez, and Shenassa (1994) devised a procedure for "systematic research synthesis" to aid in gathering and assembling information. It is a form of meta-analysis that is conceptual, rather than quantitative. The approach allows for aggregating a wide range of types of studies (experimental, quasi-experimental, ethnographic, correlational, and so forth) in the research-synthesis process, and it is geared specifically to serving the purposes of design and development.

These efforts and others have been advancing the potential of intervention research in social work and related fields. Although they are encouraging, as with most scientific movements, there has been no revolutionary shift in paradigms, and progress has been slow and incremental.

ORGANIZATIONAL AND STAFFING FACTORS

Because of their characteristics, design and development projects require certain organizational features and staffing patterns. Intervention tools created through this process must be in a form that can be readily implemented by some designated set of ultimate users. For this reason, any project must make provision for locating an application environment, ordinarily a social agency, that is similar to the environment in which the tools will finally be used (Haug, 1971; Twain, 1975). For pilot testing and early development, a more artificial or controlled program or laboratory setting may be appropriate, but advanced development ordinarily entails implementation of the intervention design in a natural agency context to ensure that the design will be feasible and effective in the real world.

This requirement brings into play certain features that are akin to program evaluation and field experimentation, including obtaining access to agency sites, forming appropriate working relationships, gaining the cooperation of agency supervisors and staff, and generally harmonizing the inconsistent goals and operational modalities of researchers and practitioners (Rossi, 1977). The arrangement involves an interinstitutional relationship between a school and an agency that, according to Hasenfeld and Furman (1994), is best analyzed through the lens of interorganizational exchange theory. Factors that ought to be taken into account in selecting sites for development and in fostering collaborative relationships are the compatibility of organizational cultures and motives for the endeavor, a favorable balance of power between the parties, and the existence of appropriate linking mechanisms. Reaching a firm and early understanding and establishing contractual administrative agreements are recommended steps (Fairweather & Tornatzky, 1977).

Carrying out a design and development project usually requires a large, diverse staff (Fairweather, 1967). People with different specializations and abilities, from those who are competent in research roles to those who can communicate with and keenly understand the problems of practitioners, are needed to perform the wide variety of tasks within any phase and across phases of a project. In addition, supplementary technical or administrative staff are often involved in preparing practice manuals, supervising field operations, and the like. Ordinarily a team, rather than a single investigator, is necessary.

Staff members have to be able to collaborate with professionals on the project team and with those in field agencies. The requisite skills for this kind of work include the ability to function in complex, ambiguous situations; to be flexible; to respond quickly to requests; and to listen and communicate as well as to be sensitive to the requirements for implementing programs (Shadish, Cook, & Leviton, 1991). Professional and technical competencies are essential, but without staff with these human relations capabilities, projects may flounder or abort.

PROBLEMS AND LIMITATIONS

Rooney (1994) pointed to two important constraints on design and development: the complexity of the approach and the difficulties in gaining academic rewards for practicing it. The complexity issue relates to the breadth of activities and skills needed to conduct the research and to the extended time line required to complete an entire project. Complexity is compounded by the relative recency of the paradigm and hence its underdeveloped status. In other words, within all phases and in their interdigitation, there are areas that need to be refined.

For example, because of insufficient experience with development as a coherent methodology, it is not clear how much weight should be given to research or to the production of specific intervention tools in the process (Thomas, 1978b). In relation to evaluation, there is a conflict between holding the intervention (the independent variable) steady to test outcomes meticulously and the developmental task of modifying the intervention in process to maximize (rather than simply to assess) its potency. In traditional evaluation, a tension exists in the concept of the evaluator as developer (Perloff, 1979), whereas in design and development the two roles are combined. The aim of the traditional separation of evaluation and development is to protect scientific integrity. However, it has led to a gap between practice and research and to inefficiencies in producing empirically based intervention technology, a central problem that design and development is obligated to confront directly. Patti (1981) emphasized the longitudinal elements of complexity, believing that the long time line discourages and complicates the use of this framework. He advocated disaggregating the phases to bring greater simplicity and increased feasibility to the work.

The matter of academic status is more practical, having to do with recognition, promotion, and tenure in universities. Because intervention research and design and development are mission oriented, and universities strongly emphasize basic science, it is generally believed that those who engage in this research will not acquire strong professional credentials for advancement. The kinds of manuals and practice aids that design and development can generate will be highly valued by professionals in agencies, but not much appreciated by colleagues in academia. The need to find ways to balance the research and product-development components of design and development is highlighted by this lack of professional appreciation.

Two other issues are the ability to gain faculty support for doctoral students who wish to engage in model development research for their dissertations (Reid, 1979) and to obtain funding from government agencies and foundations for this type of research, which is not yet in the mainstream.

INTERVENTION RESEARCH IN CONTEXT

Design and development as a mode of intervention research is a distinct and, in some ways, innovative formulation. It provides a kind of road map for the serious researcher–practitioner who is committed to using the methods of social science to serve practice and policy in a direct and concrete way. Its emphasis is on research for practice, rather than research on practice. Thus, this approach holds the promise of providing a more orderly and predictable means of shaping innovative intervention tools that are both relevant and effective. It is unmatched as a form of research for social work because it uniquely serves to create the service technology that is fundamental to fulfilling the mandate placed on the profession by society.

REFERENCES

Alkin, M. C. (1985). A guide for evaluation decision makers. Beverly Hills, CA: Sage Publications.

Andreasen, M., & Hein, L. (1987). Integrated product development. Berlin: Springer-Verlag.

Backer, T. E., Liberman, R. P., & Kuehnel, T. G. (1986). Dissemination and adoption of innovative psychosocial interventions. Journal of Consulting and Clinical Psychology, 54, 111–118.

Balcazar, F. E., Seekins, T., Fawcett, S. B., & Hopkins, B. L. (1989). Empowering people with physical disabilities through advocacy skills training. American Journal of Community Psychology, 18, 281–296.

Billingsley, F., White, O. R., & Munson, R. (1980). Procedural reliability: A rationale and an example. Behavioral Assessment, 3, 229–243.

Borg, W. (1969). The balance between educational research and development: A question of strategy. Educational Technology, 5(7), 5–11.

Cook, T. D., & Leviton, L. C. (1980). Reviewing the literature: A comparison of traditional methods with meta-analysis. Journal of Personality, 48, 449–472.

Cooper, H. M. (1984). The integrative research review: A systematic approach. Beverly Hills, CA: Sage Publications.

Emerson, E., & Emerson, C. (1987). Barriers to the effective implementation of habilitative behavioral programs in an institutional setting. Mental Retardation, 25, 101–106.

Fairweather, G. W. (1967). Methods for experimental social innovation. New York: John Wiley & Sons.

Fairweather, G. W., & Tornatzky, L. G. (1977). Experimental methods for social policy research. Oxford, England: Pergamon Press.

Fawcett, S. B. (1990). Some emerging standards for community research and action. In P. Tolan, D. Keys, F. Chertok, & L. E. Jason (Eds.), Researching community psychology: Integrating theories and methodologies (pp. 64–75). Washington, DC: American Psychological Association.

Fawcett, S. B. (1991). Some values guiding community research and action. Journal of Applied Behavior Analysis, 10, 739–746.

Fawcett, S. B., Suarez-Balcazar, Y., Balcazar, F. E., White, G. W., Paine, A. L., Blanchard, K. A., & Embree, M. G. (1994). Conducting intervention research: The design and development process. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 25–49). Binghamton, NY: Haworth Press.

Fischer, J. (1990). Problems and issues in meta-analysis. In L. Videka-Sherman & W. J. Reid (Eds.), Advances in clinical social work research (pp. 297–325). Silver Spring, MD: NASW Press.

Fisher, D. (1983). The going gets tough when we descend from the ivory tower. Analysis and Intervention in Developmental Disabilities, 3, 249–256.

Glaser, E. M., Abelson, H. H., & Garrison, K. N. (1983). Putting knowledge to use. San Francisco: Jossey-Bass.

Gottman, J. M., & Markman, H. J. (1978). Experimental designs in psychotherapy research. In S. L. Garfield & A. E. Bergin (Eds.), Handbook of psychotherapy and behavior change (2nd ed.). New York: John Wiley & Sons.

Grasso, A., & Epstein, I. (Eds.). (1992). Research utilization in the social services: Innovation for practice and administration. Binghamton, NY: Haworth Press.

Greenblatt, M. (1983). Some principles guiding institutional change. Analysis and Intervention in Developmental Disabilities, 3, 257–259.

Hasenfeld, Y., & Furman, W. (1994). Intervention research as an interorganizational exchange. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 297–311). Binghamton, NY: Haworth Press.

Haug, M. R. (1971). Notes on the art of research management. In R. O'Toole (Ed.), The organization, management, and tactics of social research (pp. 198–210). Cambridge, MA: Schenkman.

Havelock, R. G. (1969). Planning for innovation through dissemination and utilization of knowledge. Ann Arbor: University of Michigan, Institute for Social Research, Center for Research on Utilization of Scientific Knowledge.

Jackson, G. B. (1978). Methods for reviewing and integrating research in the social sciences. Washington, DC: National Science Foundation.

Jain, P. S. (1989). Monitoring of rural development programmes. Evaluation and Program Planning, 12, 171–177.

Light, R. J., & Pillemer, D. B. (1982). Numbers and narrative: Combining their strengths in research review. Harvard Educational Review, 52, 1–26.

McMahon, P. M. (1987). Shifts in intervention procedures: A problem in evaluating human service interventions. Social Work Research & Abstracts, 23(4), 13–18.

Mocniak, N. L., & Hegarty, T. W. (1989). Evaluating a pilot program and designing it, too. Evaluation and Program Planning, 12, 291–293.

Mullen, E. J. (1978). The construction of personal models for effective practice: A method for utilizing research findings to guide social interventions. Journal of Social Service Research, 2(1), 45–63.

Mullen, E. J. (1983). Personal practice models in clinical social work. In A. Rosenblatt & D. Waldfogel (Eds.), Handbook of clinical social work. San Francisco: Jossey-Bass.

Mullen, E. J. (1988). Using research and theory in social work practice. In R. M. Grinnell, Jr. (Ed.), Social work research and evaluation. Itasca, IL: F. E. Peacock.

Nurius, P. S., & Yeaton, W. H. (1987). Research synthesis reviews: An illustrated critique of "hidden" judgments, choices and compromises. Clinical Psychology Review, 7, 695–714.

Paine, S. C., Bellamy, G. T., & Wilcox, B. (1984). Human services that work: From innovation to standard practice. Baltimore: Paul H. Brookes.

Patti, R. J. (1981). The prospects for social R & D: An essay review. Social Work Research & Abstracts, 17(2), 38–44.

Perloff, R. (Ed.). (1979). Evaluator interventions: Pros and cons. Beverly Hills, CA: Sage Publications.

Perry, M. A., & Furukawa, M. J. (1986). Modeling methods. In F. H. Kanfer & A. P. Goldstein (Eds.), Helping people change (3rd ed.) New York: Pergamon Press.

Reid, W. J. (1983). Developing intervention methods through experimental designs. In A. Rosenblatt & D. Waldfogel (Eds.), Handbook of clinical social work (pp. 650–673). San Francisco: Jossey-Bass.

Reid, W. J. (1979). The model development dissertation. Journal of Social Service Research, 3, 215–225.

Reid, W. J. (1987). Research in social work. In A. Minahan (Ed.-in-Chief), Encyclopedia of social work (18th ed., Vol. 2, pp. 474–487). Silver Spring, MD: National Association of Social Workers.

Reid, W. J. (1994). Field testing and data gathering of innovative practice interventions in early development. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 245–261). Binghamton, NY: Haworth Press.

Rice, R. E., & Rogers, E. M. (1980). Reinvention in the innovation process. Knowledge: Creation, Diffusion, Utilization, 1, 499–513.

Riecken, H. W., & Boruch, R. F. (1974). Social experimentation: A method for planning and evaluating social intervention. New York: Academic Press.

Rogers, E. M., & Shoemaker, F. F. (1971). Communication of innovations: A cross-cultural approach (2nd ed.). New York: Free Press.

Rooney, R. (1994). Disseminating intervention research in academic settings: A view from social work. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 353–369). Binghamton, NY: Haworth Press.

Rosen, A., & Mutschler, E. (1982). Correspondence between the planned and subsequent use of interventions in treatment. Social Work Research & Abstracts, 18(2), 28–34.

Rossi, P. H. (1977). Boobytraps and pitfalls in evaluation of social action programs. In F. G. Caro (Ed.), Readings in evaluation research (pp. 239–248). New York: Russell Sage Foundation.

Rossi, P. H., & Freeman, H. E. (1989). Evaluation: A systematic approach. Newbury Park, CA: Sage Publications.

Rothman, J. (1974). Planning and organizing for social change. New York: Columbia University Press.

Rothman, J. (1978). Conversion and design in the research utilization process. Journal of Social Service Research, 2, 117–131.

Rothman, J. (1980). Social R & D: Research and development in the human services. Englewood Cliffs, NJ: Prentice Hall.

Rothman, J., Damron-Rodriguez, J., & Shenassa, E. (1994). Systematic research synthesis: Conceptual integration methods of meta-analysis. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 133–154). Binghamton, NY: Haworth Press.

Rothman, J., Teresa, J., Kay, T., & Morningstar, G. C. (1983). Marketing human service innovations. Beverly Hills, CA: Sage Publications.

Rothman, J., & Thomas, E. (Eds.). (1994). Intervention research: Design and development for human services. Binghamton, NY: Haworth Press.

Sayles, L. R., & Chandler, M. K. (1971). Managing large systems. New York: Harper & Row.

Seekins, T., & Fawcett, S. B. (1987). Effects of a poverty client's agenda on resource allocations by community decisionmakers. American Journal of Community Psychology, 15, 305–320.

Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation. Newbury Park, CA: Sage Publications.

Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews. Educational Researcher, 15, 5–11.

Thomas, E. J. (1978a). Generating innovation in social work: The paradigm of developmental research. Journal of Social Service Research, 2, 95–115.

Thomas, E. J. (1978b). Research and service in single-case experimentation: Conflicts and choices. Social Work Research & Abstracts, 14(4), 20–31.

Thomas, E. J. (1984). Designing interventions for the helping profession. Beverly Hills, CA: Sage Publications.

Thomas, E. J. (1985). Design and development validity and related concepts in developmental research. Social Work Research & Abstracts, 21(2), 50–55.

Thomas, E. J., & Rothman, J. (1994). An integrative perspective on intervention research. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 3–20). Binghamton, NY: Haworth Press.

Tripodi, T. (1983). Evaluating research for social workers. Englewood Cliffs, NJ: Prentice Hall.

Twain, D. (1975). Developing and implementing a research strategy. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research (Vol. 1, pp. 27–52). Beverly Hills, CA: Sage Publications.

Weiss, C. H. (1977). Using social research in public policy making. Lexington, MA: Lexington Books.

Whang, P. L., Fletcher, R. K., & Fawcett, S. B. (1982). Training counseling skills: An experimental analysis and social validation. Journal of Applied Behavior Analysis, 15, 325–334.

Whittaker, J., Tracy, E. M., Overstreet, E., Mooradian, J., & Kapp, S. (1994). Intervention design for practice: Enhancing social supports for high-risk youth and families. In J. Rothman & E. Thomas (Eds.), Intervention research: Design and development for human service (pp. 195–208). Binghamton, NY: Haworth Press.

Zaltman, G. (1979). Knowledge utilization as planned social change. Knowledge: Creation, Diffusion, Utilization, 1, 82–105.

Jack Rothman, PhD, is professor, University of California at Los Angeles, School of Social Work, 405 Hilgard Avenue, Los Angeles, CA 90024.

For further information see

Ecological Perspective; Epistemology; Experimental and Quasi-Experimental Design; Goal Setting and Intervention Planning; Meta-analysis; Policy Analysis; Program Evaluation; Research Overview; Social Work Practice: Theoretical Base.

Key Words

developmental research, evaluation, intervention analysis, program design