Ethical Issues in Research
David F. Gillespie

Ethics emerge from value conflicts. In research, these conflicts are expressed in many ways: individuals' rights to privacy versus the undesirability of manipulation, openness and replication versus confidentiality, future welfare versus immediate relief, and others. Each decision made in research involves a potential compromise of one value for another. Researchers must try to minimize risks to participants, colleagues, and society while attempting to maximize the quality of information they produce.

Research ethics are codes or guidelines that help reconcile value conflicts. Although ethical codes provide direction, the decisions made in research must be reached by considering the specific alternatives available. The choices made in each case weigh the potential contribution of the research against the potential risks to the participants. Weighing these alternatives is essentially subjective, entails matters of degree rather than kind, and involves a comparison between the experiences required in the research and those expected in everyday life.

Ethical codes in research stipulate areas of responsibility to participants (subjects, clients, respondents), to colleagues and professional associations, and to sponsoring agencies, the public at large, or society. More discussion has been devoted to the ethics involving participants than to the ethics involving the other groups. The amount of attention given to participants derives historically from biomedical experimentation in which subjects have been exposed to serious risks and irremediable harm. The emphasis on participants in biomedical ethical codes and government regulations has been extended to all types of research. This situation has led to misunderstanding and debate because it blurs the distinctions between biomedical experimentation on human subjects and social science.

Social work researchers and other social scientists, partly because of developments in the medical sciences, have begun to address the particular ethical issues that arise in their work. These issues concern the risks encountered in social research and the steps taken to reduce them. Debates over particular studies illustrate alternative resolutions of moral dilemmas. Key studies such as Milgram's (1963) research on obedience to authority, Humphreys's (1970) research on homosexuals, and the Project Camelot study aimed at preventing revolutions (Horowitz, 1967) have helped clarify the issues and suggest resolutions.

RISKS IN RESEARCH

There are three areas of risk in social research. First, participants may be harmed as a result of their involvement. The potential harms include death or injury, stress, guilt, reduction in self-respect or self-esteem, unfair treatment, withheld benefits, and minor discomfort. Second, professional relationships and the knowledge base may be damaged. These risks include falsification of data, plagiarism, abuse of confidentiality, and deliberate violation of regulations. Third, problems for the community or society may result. Societal risks involve the effect of cultural values and beliefs on the knowledge produced and the impact of that knowledge on society. In the following sections, ethical issues are discussed for these three areas of risk.

Potential Harm to Participants
Although documented cases of death in social research are extremely rare, they do exist (Appell, 1974; Warwick, 1982). Ethical dilemmas may be experienced over the use of placebo therapy when it appears that untreated patients with acquired immune deficiency syndrome (AIDS) will die (Volberding & Abrams, 1985). In the case of terminally ill patients, the treatment may be perceived as their last hope (Vanderpool & Weiss, 1987). Cases of physical abuse or injury are also rare; however, Zimbardo, Haney, Banks, and Jaffe's (1973) simulation of a prison environment was prematurely terminated because volunteer prisoners suffered physical and psychological abuse from participants role-playing as guards.

Psychological harm is more frequently reported. In a study of obedience to authority, Milgram (1963) reported that “subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh. These were characteristic rather than exceptional responses to the experiment” (p. 375). Anxiety, depression, and other emotional disturbances also have been reported (Milgram, 1974). Subjects in Humphreys's study of male homosexual activity in public rest rooms revealed apprehensions about being identified.

Another kind of psychological harm involves feelings of guilt. Studies with the potential to create guilt include those that have participants administer electric shocks to fellow participants, beat prisoners while role-playing as guards, fail to help those in obvious need, steal money from a charity box, or participate in illegal acts brought on by experimental entrapment. Murray's (1980) study of helping behavior reported that “virtually every subject who had not responded showed some anxiety to the experimental condition of watching the experimenter drop to the floor after receiving an apparently severe electrical shock” (p. 12).

Safeguarding confidentiality becomes especially difficult in research on people with AIDS (Weitz, 1987) both because of the public fear of the disease and because of discrimination against high-risk populations (Mayer, 1985; Volberding & Abrams, 1985). Fears about relinquishing confidentiality may prevent individuals from participating (Mayer, 1985). Confidentiality is particularly sensitive when the risk to the individual is weighed against the risk to others and to public health (Volberding & Abrams, 1985). Researchers may experience ethical dilemmas in considering their obligation to protect the identity of participants while also considering the implications of participants' human immunodeficiency virus (HIV) status for potential sexual partners or even for members of the research team who may be at some risk of infection. Many claim that prejudice against high-risk groups such as homosexuals and intravenous drug users has slowed progress and reduced needed funding in research on AIDS (Panem, 1985).

Social research has the potential to reduce self-esteem or self-respect. Walster (1965) used a dating ruse to manipulate female participants' feelings of self-worth by giving them a personality test with fixed results. Although the women were debriefed after the study, Diener and Crandall (1978) suggested that participants who received the negative personality reports probably felt bad, and those who received the positive reports may have been angry or embarrassed at having been fooled and perhaps disappointed that the report was not genuine. The dating ruse may have been even more harmful because dating anxiety and low self-esteem due to a lack of dates are frequently mentioned as problems among college students (Jaremko, 1984).

Relationships with others may be damaged through research. Experimental team-building efforts in organizations can disrupt subsequent relationships on the job. When superiors, peers, and subordinates openly exchange feelings and opinions, resentments may linger. Similarly, individuals who volunteer for sex research may find their morality questioned. People serving as informants in studies that become controversial may be sanctioned and even ostracized.

Participants in research may suffer career liabilities and other kinds of economic harm. Economic harm occurs when participants earn less money or pay more for things as a result of their involvement. Some participants in public policy experiments may receive less public aid than others, or they may receive unemployment compensation, whereas others do not (Nagel, 1990).

Legal sanctions represent another kind of potential harm. It is not unusual for information about illegal behavior to be collected in research. Except for special protection from subpoena, which can be obtained in certain cases by petition, social research has no legal shield, and because leaks can occur despite safeguards, there is real potential for harm to some participants. Nagel (1990) conducted a study in which juvenile delinquents were randomly assigned either to one year of probation or to a one-year jail term. The results showed that the probation group had better subsequent records. It can be argued that those assigned to the jail term were harmed even if useful scientific knowledge was gained.

Feelings of irritation, frustration, or discomfort represent minor psychological harm. Respondents can become irritated when an interviewer is abrasive. Frustration or annoyance may result when questionnaires include too many questions or irrelevant questions. Experiments sometimes involve minor physical discomfort, and subjects may be treated in a demeaning or mildly insulting manner.

Potential Damage to Professional Relationships and the Knowledge Base
Damage to professional relationships occurs when standards of professional behavior are violated. Violations include falsification of data, plagiarism, abuse of confidentiality, and deliberate violation of regulations. Knowledge development depends on accurate and careful data collection, thorough analyses, and unbiased reports. Falsification of data ranges from total fabrication to selective reporting and can be directly harmful to individuals in clinical research.

Sir Cyril Burt claimed to have measured the IQs of more than 50 sets of identical twins separated early in life. A high correlation between the IQs of separated twins suggested a genetic basis for intelligence. After Burt's death, evidence revealed that the data had been faked: coauthors were fictional characters, data were reported for twins who had never been tested, and the same set of results was used in more than one publication, showing perfect consistency. Eysenck (1977) and Jensen (1977) offered some defense of Burt.

There is evidence that at least mild levels of falsification occur quite often. Azrin, Holz, Ulrich, and Goldiamond (1961) reported that 15 of 16 graduate students indicated successful completion of a study that was impossible to complete. Sometimes good intentions are simply undermined by boredom or the tedium of observation. Hyman, Cobb, Feldman, Hart, and Stember (1954) reported a study in which one-fourth of the interviewers fabricated most of the data they collected and all of the interviewers “fudged” some of the data. Smith, Wheeler, and Diener (1975) noted that research assistants view the alteration of data as similar to cheating on classroom tests, a fairly common practice. In laboratory studies and field surveys, researchers can distort findings by the way they give instructions or ask questions. By emphasizing particular words or through body language, an interviewer may bias the information collected.

Research may be carried out with complete objectivity only to have the findings misreported. Findings may be “adjusted” to fit expectations. Several studies may be completed, with only the one supporting a preferred theory being published. McNemar (1960) noted that findings are sometimes discarded as “bad data” when they fail to support hypotheses. A widespread practice is the calculation of numerous statistical tests with only those achieving significance being reported. Wolins (1962) requested original data from 37 studies, and researchers in 21 of these studies replied that the data had been misplaced or destroyed. Of the seven published studies reanalyzed by Wolins, three revealed errors large enough to alter the conclusions drawn from the original data analysis.
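
The statistical hazard in this last practice is easy to quantify. The short sketch below (the significance level and test counts are purely illustrative, not figures drawn from any study cited here) shows how quickly the chance of at least one spurious "significant" finding grows when many independent tests are run and only the significant ones are reported:

    # Chance of at least one false positive among k independent
    # significance tests, each run at level alpha, when the null
    # hypothesis is in fact true for every test.
    alpha = 0.05  # illustrative significance level
    for k in (1, 5, 10, 20):
        p_spurious = 1 - (1 - alpha) ** k
        print(f"{k:2d} tests: P(at least one false positive) = {p_spurious:.2f}")
    # Prints 0.05, 0.23, 0.40, and 0.64: with 20 tests, reporting
    # only the "significant" results will turn up something well over
    # half the time even when there is nothing to find.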

Plagiarism is rare because it is usually apparent in the more actively pursued areas of research. Inadequate citation and crediting of others' work is a more pervasive problem. The pressure on academics to publish has led some to demand that their names be included on manuscripts written by their graduate students or to omit collaborating graduate students from authorship. Fields (1984) reported that the Ethics Committee of the American Psychological Association has received an increasing number of complaints and inquiries from students who believe that “they are being exploited in the process of grinding out publications for academic psychologists” (p. 7).

Taking advantage of privileged information or violating confidentiality is a problem that is difficult to detect. Researchers openly discuss their ideas and share them through proposals. Proposals are reviewed by colleagues, university committees and administrators, and any number of outside researchers and practitioners serving on review panels. Articles prepared for professional journals are also subjected to a review process before publication. There are many opportunities for abuse of confidentiality.

Potential for Societal Harm
The research designs and procedures used, topics studied, and results produced can promote harm as well as provide benefit to society. It is curious that researchers list the potential benefits of their study to society while limiting their attention to individual participants when considering the potential harms. The use of deception in research undermines trust by undercutting the expectation that what another person says will be true. Similarly, the willingness of one person to help another in times of need or distress is compromised by studies that use deception to learn about helping behavior.

The ethical balance between knowledge gained and the potential for social harm has only recently shifted in favor of greater restrictions on the use of deception. Researchers have conducted studies in which stooges fall and release fake blood to test people's reactions to medical emergencies (Piliavin & Piliavin, 1972). Gay (1973) wrote that a number of students did nothing when they saw a student shoot someone because they thought it was a psychology experiment. Reviews indicate that between 19 percent and 44 percent of the research published in psychology includes direct lying to participants (Menges, 1973).

Social research can raise questions about the legitimacy of institutions. Vaughan (1967) reported on the federal hearings of the Wichita jury study in which a senator asked one of the researchers, “Now, do you not realize that to snoop on a jury, and record what they say, does violence to every reason for which we have secret deliberations of a jury?” Similar issues have been raised about the harmful results of biased public opinion polls on the legitimacy of political elections. These biased polls are designed to provide support for a particular candidate or position. The net effect may undermine the legitimacy of the election process and the trustworthiness of the people running for office.

Research may harm special populations such as religious groups, racial or ethnic minorities, and individuals considered deviant. A focus on a category of people as “deviant” or “disadvantaged” highlights the problems of these people. Data may create or reinforce the conditions of victimization or scapegoating. Moynihan (1965) wrote that the Negro family suffered from instability, a propensity to produce illegitimate children, and a matriarchal structure that resulted in harmful effects on children, especially boys. Ryan (1967) disagreed with Moynihan's interpretation, stating that “it draws dangerously inexact conclusions from weak and insufficient data, encourages a new form of subtle racism that might be termed ‘savage discovery,’ and seduces the reader into believing that it is not racism and discrimination, but the weaknesses and defects of the Negro himself that accounts for the present status of inequality between Negro and white” (p. 463).

Turnbull, Blue-Banning, Behr, and Kerns (1986) reported that families with developmentally disabled children were insulted when they learned of researchers' claims that they were dysfunctional. A group of families organized to protest that researchers neglected to report on positive characteristics of families with developmentally disabled children. They also complained that research was intended to advance researchers, that it was not made available to the families who should be the ones to benefit from it, and that it was not helpful to them. The authors recommended that researchers form partnerships with families being studied and with the organizations that represent them (for example, the Association for Retarded Citizens).

The assumptions underlying interpretations and conclusions drawn in research harden into concrete descriptions of who is responsible for existing conditions. The creation of stereotyped images is easy to find in research. Catholics have been portrayed as less hard working than Protestants, as being at a disadvantage because of conformity to an authoritarian church, and as having too many children, which makes it difficult to provide their children with the higher education necessary to compete effectively in contemporary society (Lenski, 1963). Groups may protest a stereotype, as did Lopreato (1967) and others in regard to the image of Italians constructed by Banfield (1958). The weaker the group, however, the less likely it is to protest or even to be aware that there is something to protest.

Research may provide information to the more powerful groups in a society that helps maintain their advantage. Project Camelot, a project designed to prevent revolutions, was criticized because it was believed that it would have undermined the political sovereignty of the countries studied. Some believed that the data produced would be fed to the Central Intelligence Agency to control certain affairs in the countries involved. Others drew less pernicious scenarios but still believed that the information would favor the United States in its relationship with these countries. It is important to note that Project Camelot was canceled because of its politically threatening nature rather than as a result of a lack of scientific merit, which provides another illustration of how research may be used at the discretion of those in power.

Rushton's (1990) controversial research in the field of sociobiology has been used to support racist ideology and has led to policies that blame the victim and suggest that society need not take action to overcome the effects of historical discrimination. Fairchild (1991) argued that Rushton's research is based on biased theoretical assumptions and flawed empirical interpretations. Of course, it is equally important to note that research has benefited the poor as well (Lampman, 1985; Macdonald & Sawhill, 1978; Wolf, 1984).

MINIMIZING RISK

Individual researchers, professional associations, and the government have taken steps to minimize the risks encountered in research. Most efforts focus on protecting participants. The ratio of risks to benefits is assessed, informed consent is required, research designs and procedures are built to minimize the potential for harm to occur, participants are screened, pilot studies are conducted, procedures are designed to assess and deal with potential harms, and proposals are reviewed by others. Each precaution minimizes risk, and, taken together, they offer good prospects for reducing harm to an occasional occurrence.

Participant Protections
Risk–benefit assessment. Constructing a risk–benefit ratio for a proposed study is one way to assess its ethical soundness. If the expected benefits exceed the expected risks, the study is presumed ethical. The risk–benefit precaution is a modern version of the end justifying the means. It has its most direct application when those exposed to the risks also receive the benefits. The ratio is more difficult to justify when the participants are subjected to potential harm and the benefits are directed to other individuals or to society.

Risk–benefit ratios have at least four shortcomings. First, it is impossible to predict the benefits and risks of a particular study before its execution. Studies are conducted because the outcome is unknown. Second, the possible risks and benefits are relative, subjectively assessed, and difficult to measure. Third, calculating the ratio is complicated when society benefits at the risk of individual participants. When participants receive no direct benefits, it is imperative that they fully understand and accept whatever risks may exist. Finally, there is a conflict of interest in that the risk–benefit ratio is constructed by the researcher who undoubtedly believes that the research is worth doing.

Even though risk–benefit ratios are insufficient to determine ethical soundness, they provide a useful first step. It may be impossible for a risk–benefit ratio to justify a study, but it can provide enough information to warrant canceling a study.
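
Stated in its simplest quantitative form, the precaution compares expected values. The sketch below is a minimal illustration, assuming (purely for the sake of example) that harms and benefits can be scored on a common scale and assigned subjective probabilities; every number in it is a hypothetical guess, which is exactly the subjectivity problem described above:

    # A minimal expected-value reading of a risk-benefit assessment.
    # All probabilities and scores are hypothetical and subjective;
    # the ratio is built from estimates the researcher makes before
    # the study is run.
    risks = [         # (probability, severity on an arbitrary 0-10 scale)
        (0.10, 3.0),  # e.g., transient distress during interviews
        (0.01, 8.0),  # e.g., a breach of confidentiality
    ]
    benefits = [      # (probability, value on the same 0-10 scale)
        (0.60, 4.0),  # e.g., findings useful to practice
    ]
    expected_risk = sum(p * severity for p, severity in risks)
    expected_benefit = sum(p * value for p, value in benefits)
    print(f"expected risk    = {expected_risk:.2f}")     # 0.38
    print(f"expected benefit = {expected_benefit:.2f}")  # 2.40
    if expected_benefit <= expected_risk:
        print("the ratio argues for canceling the study")

Even in this toy form, the calculation behaves as suggested above: a single high-severity risk term dominates the comparison unless its probability can honestly be called negligible, so the ratio is better suited to flagging studies that should be canceled than to certifying studies as ethical.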

Informed consent. Informed consent refers to an individual's willingness to participate in a study. Individuals who provide informed consent have been made aware of the design and procedures with enough detail to exercise a rational decision to participate. The provision of informed consent also includes the knowledge that participation is voluntary and that participants can withdraw from the study at any time.

The more dangerous the study, of course, the more important it becomes to obtain informed consent. Informed consent becomes absolutely essential when participants are to be exposed to serious risks or required to suspend their individual rights, as in hypnosis research or drug use studies. Many respondents believe that they do not have a genuine choice, and they may believe that access to services will be blocked unless they participate or that they will receive better treatment if they do. Many people with AIDS, for example, do not want to jeopardize access to the few services they can receive (Weitz, 1987). Prisoners may fear that if they refuse consent they will be penalized by prison officials.

Although informed consent is often desirable and sometimes essential, it is not sufficient by itself to ethically justify a study. Even when researchers make careful efforts to obtain informed consent, participants may maintain misconceptions about the study, believing that the researcher will provide them with the best treatment available (Appelbaum, Roth, Lidz, Benson, & Winslade, 1987). Appelbaum et al. recommended supplementing the researchers' disclosures. For example, a neutral informant can review risks and benefits with participants before they provide informed consent.

Protective research design. Another precaution involves constructing research designs and procedures in such a way as to achieve study goals while maximizing the protection of participants. This process involves estimating the likelihood and severity of harmful effects, the probable duration of these effects, the extent to which any harm incurred can be reversed, the probability of early detection of harm, and the relationship between potential harm from participation in the study and the risks of everyday life. Studies are more ethically justified when the probability of harm is low, when only minor harm is expected, when the harm will be short term if it does occur, when harmful effects can be reversed, when measures for early detection are installed, and when the risks are no greater than one could expect in the normal affairs of one's life.

When the achievement of study goals unavoidably entails risk, researchers can turn to natural field studies so that the research itself does not promote the risks of harm. For example, social work researchers seeking to study the effects of malnutrition can work in areas of poverty where children are experiencing malnutrition (Klein, Habicht, & Yarbrough, 1973).

Screening. Screening provides investigators with an opportunity to select for study only those individuals who show a high tolerance for the potential risks. Zimbardo et al. (1973) administered personality tests to volunteers and selected only those who scored within a normal range. This precaution was believed to reduce the risks involved with the role-playing requirements of the study. Mayer (1985) and Weitz (1987) recommended providing information, counseling, and support services to participants in research on AIDS. Support services are especially important for individuals who obtain information about their HIV status as a result of participating in a study. In excluding individuals from a study, it is important not to convey an impression of deviance, abnormality, or unacceptability.
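
As a concrete illustration of this kind of screening, the sketch below retains only volunteers whose score falls within one standard deviation of the sample mean; the scores, identifiers, and cutoff are hypothetical and are not taken from Zimbardo et al. or any other study cited here:

    # Hypothetical screening step: keep only volunteers whose test
    # score lies within one standard deviation of the sample mean.
    from statistics import mean, stdev

    scores = {"v01": 52, "v02": 71, "v03": 48, "v04": 95, "v05": 55, "v06": 20}
    m = mean(scores.values())
    s = stdev(scores.values())
    eligible = [vid for vid, score in scores.items() if abs(score - m) <= s]
    print(eligible)  # v04 and v06 fall outside the range and are excluded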

Pilot studies. When the potential harms are uncertain, a useful precaution involves a pilot study with follow-up diagnostic interviews to assess effects and request advice from participants. Pilot studies typically increase both the scientific rigor of the study and the protections for participants. Including procedures to detect harm and deal with it represents an important precaution in any study in which the potential for harm can be imagined. Many studies, especially experiments, routinely debrief participants to assess and deal with any negative effects. In experiments using treatments that may have possible side effects, arrangements are often made with a clinical practitioner to accept referrals.

Outside proposal review. Requesting others to review research proposals is a helpful precaution in minimizing risks. Different people with various positions and perspectives may identify potential harms and perceive safeguards that the researcher overlooked or could not have anticipated. Of course, for many years proposals for funded research have been required to be reviewed by institutional review boards. Barber (1967) recommended that all research be reviewed by committees that have representatives from outside the profession.

There is little evidence to suggest long-term harm from participation in social research. Neither Milgram (1964) nor Ring, Wallston, and Corey (1970) found any long-term harmful effects from their shock–obedience studies. Zimbardo et al. (1973) also found no subsequent negative effects, even though they prematurely terminated their study because “volunteer prisoners suffered physical and psychological abuse hour after hour for days, while volunteer guards were exposed to the new self-knowledge that they enjoyed being powerful and had abused this power to make other human beings suffer” (p. 243). Clark and Word (1974), in a debriefing conducted after their study of altruism, found that one participant out of the 68 involved continued to be upset but that, after six months, no subjects reported negative aftereffects. Campbell, Sanderson, and Laverty (1964) had difficulty in reversing a conditioned fear, but the evidence leans strongly toward few, if any, long-term negative effects of carefully and ethically constructed studies.

Professional Codes
Two characteristics of professional codes help to clarify how such codes minimize risks in research. First, professional codes have been developed inductively from the research experiences of professionals. Because the work experience of professionals offers wide variety and because reasonable people differ in the solutions they offer for particular problems, the standards reflected in ethical codes of professional associations tend to be abstract and relative to particular circumstances. Second, professional codes place strong emphasis on researchers' responsibility for their research. These two characteristics distinguish professional codes from laws and government regulations as a means of protecting individuals and society from unnecessary harm.

The two characteristic features of professional codes—abstract relativity and researcher responsibility—are illustrated in the NASW Code of Ethics (NASW, 1994):

Scholarship and Research
The social worker engaged in study and research should be guided by the conventions of scholarly inquiry.
1. The social worker engaged in research should consider carefully its possible consequences for human beings.
2. The social worker engaged in research should ascertain that the consent of participants in the research is voluntary and informed, without any implied deprivation or penalty for refusal to participate, and with due regard for participants' privacy and dignity.
3. The social worker engaged in research should protect participants from unwarranted physical or mental discomfort, distress, harm, danger, or deprivation.
4. The social worker who engages in the evaluation of services or cases should discuss them only for professional purposes and only with persons directly and professionally concerned with them.
5. Information obtained about participants in research should be treated as confidential.
6. The social worker should take credit only for work actually done in connection with scholarly and research endeavors and credit contributions made by others. (p. 4)

Professional codes tend not to specify penalties for violation of principles. The major penalty involves expulsion from the association. Half of the 24 codes reviewed by Reynolds (1975) included a reference to expulsion. Interestingly, none of the codes reviewed by Reynolds specified any benefits for those who complied with the codes. The low level of penalties and rewards suggests that professional codes may not be taken very seriously. The threat of expulsion may carry little significance. It is possible, for example, to achieve a respectable career without being a member of NASW or any other professional association. Professional codes appear to exist as a necessary but not sufficient condition of professional self-regulation.

Government Regulations
Government regulations, like state and federal laws, are designed to protect or advance the interests of society and its individuals. Researchers are required to take certain precautions or are prevented from certain activities on the grounds that failure to follow the law or regulations increases the risk of harm to an individual or to society. Regulations tend to be absolute in their requirements.

Government regulations are characterized by what Fletcher (1966) has called “legalism,” the assumption that “principles, codified in rules, are not merely guidelines or maxims to illuminate the situation; they are directives to be followed” (p. 18).
Government regulations do not invite an evaluation of their moral correctness; they are written to be obeyed (Alexander, 1986). Moreover, obeying a law or regulation does not require individuals to consider underlying ethical issues or moral principles. It is assumed that moral principles were evaluated by those who wrote the law or regulation and that observing the law will result in ethically acceptable behavior (Dokecki & Zaner, 1986).

Laws and regulations are fallible. The important difference between regulations and professional codes lies in the response of their adherents to a law that requires an ethical violation. The professional may violate the law and pay the price, whereas the legalist must obey the law and try to change it. Principle 1.02 of the APA Ethical Principles states that “If psychologists' ethical responsibilities conflict with the law, psychologists make known their commitment to the Ethics Code and take steps to resolve the conflict in a responsible manner” (1992, p. 1600). The term “resolve” is not defined; thus, under certain circumstances, it might be reasonable to go to jail for contempt of court rather than release confidential information.

The development of federal regulations concerning the welfare of human subjects involved in research and the controversies surrounding the regulations have been described by McCarthy (1981) and Gray (1982). Most of the controversy has stemmed from extensions of policies or rulings generated from abuses in biomedical research to social research or similar extensions from specifically funded research projects to all projects under the auspices of institutions receiving federal funding. The debates continued over a 15-year period from 1966 to 1981, when the current U.S. Department of Health and Human Services (DHHS) regulations were adopted. These regulations, carved out of the controversy and written with the benefit of extensive input from researchers, were designed to settle the debate without compromising the protection of human subjects.

The regulations apply only to research conducted or funded by DHHS. But individuals who receive funding for research from DHHS must provide a statement showing how the institution they work for protects the rights and welfare of human subjects. The regulations do not require prior approval by an institutional review board for research not funded by DHHS. Institutions are urged, however, to use institutional review boards and other appropriate procedures to protect human subjects in all research conducted under their auspices. Such boards are given wide latitude for the research they review. The regulations specify particular categories of research as being exempt: research conducted in established or commonly accepted educational settings; research involving the anonymous use of educational tests; research involving anonymous survey or interview procedures, except in instances in which risk of criminal or civil liability exists or the research deals with sensitive behavior such as illegal conduct, drug use, or sexual behavior; research involving the observation of public behavior, with the same exceptions noted under survey and interview studies; and research involving the anonymous collection or study of publicly available data, documents, records, pathological specimens, or diagnostic specimens (Federal Register, 1981). These exemptions have expanded to cover research and demonstration projects involving social security programs (Federal Register, 1983).

The current regulations appear to clarify earlier points of confusion and contention. They indicate that no research involving human subjects should be exempt from federal regulation solely on the basis of belonging to a particular discipline. They also acknowledge that much, if not most, social research involves no more risk than that ordinarily encountered in daily life and that the requirements of participation are sufficiently understood by the public to reduce the need for federal regulations. As a result, research falling in the exempted categories is not required to be reviewed and approved by an institutional review board.

Many federal and state departments have policies regarding the protection of human subjects. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research summarized the policies of federal agencies in a report written in 1978. Most governmental regulations are modeled after those issued by DHHS, whose agency rulings have been the predominant form of government regulation.

Laws passed to govern social research are neither very restrictive nor very specific. Professionals resist control by laws because laws tend to blur particular circumstances in specific cases. And there is a fear that resolving ethical issues through legal statutes will compromise the quality of knowledge development by exaggerating the potential for risk in social research.

Government regulation should remain relatively stable for a number of years. It is possible that the revolutionary changes in information technology will inspire renewed attempts to promote legalism in government regulation. But today's researchers have learned much from the issues and debates over the past 30 years, and their responsive and sensitive awareness of ethical issues probably will work to maintain a balance among self-imposed constraints, guidance from professional associations, and government regulation.

REFERENCES
Alexander, D. (1986). Decision making for research involving persons with severe mental retardation: Guidance from the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. In P. R. Dokecki & R. M. Zaner (Eds.), Ethics of dealing with persons with severe handicaps: Toward a research agenda (pp. 39–52). Baltimore: Paul H. Brookes.
American Psychological Association. (1992). Ethical principles of psychologists and code of conduct. American Psychologist, 47(12), 1597–1611.
Appelbaum, P. S., Roth, L. H., Lidz, C. W., Benson, P., & Winslade, W. (1987). False hopes and best data: Consent to research and the therapeutic misconception. Hastings Center Report, 17(2), 20–24.
Appell, G. N. (1974). Basic issues in the dilemmas and ethical conflicts in anthropological inquiry. Module, 19, 1–28.
Azrin, N. H., Holz, W., Ulrich, R., & Goldiamond, I. (1961). The control of the content of conversation through reinforcement. Journal of the Experimental Analysis of Behavior, 4, 25–30.
Banfield, E. (1958). The moral basis of a backward society. New York: Free Press.
Barber, B. (1967). Experimenting with humans. Public Interest, 6, 91–102.
Campbell, D., Sanderson, R. E., & Laverty, S. G. (1964). Characteristics of a conditioned response in human subjects during extinction trials following a single traumatic conditioning trial. Journal of Abnormal and Social Psychology, 68, 627–639.
Clark, R. D., & Word, L. E. (1974). Where is the apathetic bystander? Situational characteristics of the emergency. Journal of Personality and Social Psychology, 29, 279–287.
Diener, E., & Crandall, R. (1978). Ethics in social and behavioral research. Chicago: University of Chicago Press.
Dokecki, P. R., & Zaner, R. M. (Eds.). (1986). Ethics of dealing with persons with severe handicaps: Toward a research agenda. Baltimore: Paul H. Brookes.
Eysenck, H. (1977). The case of Sir Cyril Burt: On fraud and prejudice in a scientific controversy. Encounter, 48, 19–24.
Fairchild, H. H. (1991). Scientific racism: The cloak of objectivity. Journal of Social Issues, 47(3), 101–115.
Federal Register. (1981, January 26). 46(16), 8386–8392.
Federal Register. (1983, March 4). 48(9), 9269–9270.
Fields, C. M. (1984). Professors' demands for credit as co-authors of students' research projects may be rising. Chronicle of Higher Education, 7, 10.
Fletcher, J. (1966). Situation ethics. Philadelphia: Westminster Press.
Gay, C. (1973, November 30). A man collapsed outside a UW building. Others ignore him. What would you do? University of Washington Daily, pp. 14–15.
Gray, B. H. (1982). The regulatory context of social and behavioral research. In T. L. Beauchamp, R. R. Faden, R. J. Wallace, Jr., & L. Walters (Eds.), Ethical issues in social science research (pp. 327–344). Baltimore: Johns Hopkins University Press.
Horowitz, I. L. (Ed.). (1967). The rise and fall of Project Camelot. Cambridge, MA: MIT Press.
Humphreys, L. (1970). Tearoom trade: Impersonal sex in public places. Chicago: Aldine.
Hyman, H. H., Cobb, W. J., Feldman, J. J., Hart, C. W., & Stember, C. W. (1954). Interviewing in social research. Chicago: University of Chicago Press.
Jaremko, M. E. (1984). Stress inoculation training for social anxiety, with emphasis on dating anxiety. In D. Meichenbaum & M. E. Jaremko (Eds.), Stress reduction and prevention (pp. 419–485). New York: Plenum Press.
Jensen, A. R. (1977). Did Sir Cyril Burt fake his research on heritability of intelligence? Phi Delta Kappan, 58, 471–492.
Klein, R. E., Habicht, J. P., & Yarbrough, C. (1973). Some methodological field studies on nutrition and intelligence. In D. J. Kallen (Ed.), Nutrition, development and social behavior (pp. 64–83). Washington, DC: U.S. Government Printing Office.
Lampman, R. J. (1985). Social welfare spending: Accounting for changes from 1950–1978. San Diego: Academic Press.
Lenski, G. (1963). The religious factor: A sociologist's inquiry. Garden City, NY: Doubleday.
Lopreato, J. (1967). Peasants no more. San Francisco: Chandler.
Macdonald, M., & Sawhill, I. V. (1978).  Welfare policy and the family. Public Policy, 26(1), 89–119.
Mayer, K. H. (1985). The epidemiological investigation of AIDS. Hastings Center Report, 15(4), 12–15.
McCarthy, C. R. (1981). The development of federal regulations for social science research. In A. J. Kimmel (Ed.), Ethics of human subject research (pp. 31–39). San Francisco: Jossey-Bass.
McNemar, Q. (1960). At random: Sense and nonsense. American Psychologist, 15, 295–300.
Menges, R. J. (1973). Openness and honesty versus coercion and deception in psychological research. American Psychologist, 28, 1030–1034.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378.
Milgram, S. (1964). Issues in the study of obedience: A reply to Baumrind. American Psychologist, 19, 848–852.
Milgram, S. (1974). Obedience to authority. New York: Harper & Row.
Moynihan, D. P. (1965). The Negro family: The case for national action. Washington, DC: Office of Planning and Research, U.S. Department of Labor.
Murray, T. (1980). Learning to deceive. Hastings Center Report, 10(2), 1–16.
Nagel, S. S. (1990). Professional ethics in policy evaluation: Ends and methods. Policy Studies Journal, 19(1), 221–234.
National Association of Social Workers. (1994). NASW code of ethics. Washington, DC: Author.
Panem, S. (1985). AIDS: Public policy and biomedical research. Hastings Center Report, 15(4), 23–26.
Piliavin, J. A., & Piliavin, I. M. (1972). Effect of blood on reactions to a victim. Journal of Personality and Social Psychology, 23, 353–361.
Reynolds, P. D. (1975). Ethics and status: Value dilemmas in the professional conduct of social science. International Social Science Journal, 27(4), 563–611.
Ring, K., Wallston, K., & Corey, M. (1970). Mode of debriefing as a factor affecting subjective reaction to a Milgram-type obedience experiment and ethical inquiry. Representative Research in Social Psychology, 1, 67–88.
Rushton, J. P. (1990). Sir Francis Galton, epigenetic rules, genetic similarity theory, and human life-history analysis. Journal of Personality, 58(1), 117–140.
Ryan, W. (1967). Savage discovery: The Moynihan report. In L. Rainwater & W. L. Yancey (Eds.), The Moynihan Report and the politics of controversy (pp. 453–478). Cambridge, MA: MIT Press.
Smith, R. E., Wheeler, G., & Diener, E. (1975). Faith without works: Jesus people, resistance to temptation, and altruism. Journal of Applied Social Psychology, 5(4), 320–330.
Turnbull, A. P., Blue-Banning, M., Behr, S., & Kerns, G. (1986). Family research and intervention: A value and ethical examination. In P. R. Dokecki & R. M. Zaner (Eds.), Ethics of dealing with persons with severe handicaps: Toward a research agenda (pp. 119–140). Baltimore: Paul H. Brookes.
Vanderpool, H. Y., & Weiss, G. B. (1987). False data and last hopes: Enrolling ineligible patients in clinical trials. Hastings Center Report, 17(2), 16–19.
Vaughan, T. R. (1967). Governmental intervention in social research: Political and ethical dimensions in the Wichita jury recordings. In G. Sjoberg (Ed.), Ethics, politics, and social research (pp. 50–77). Cambridge, MA: Schenkman.
Volberding, P., & Abrams, D. (1985). Clinical care and research in AIDS. Hastings Center Report, 15(4), 16–18.
Walster, E. (1965). The effect of self-esteem on romantic liking. Journal of Experimental Social Psychology, 1, 184–197.
Warwick, D. P. (1982). Types of harm in social research. In T. L. Beauchamp, R. R. Faden, R. J. Wallace, Jr., & L. Walters (Eds.), Ethical issues in social science research (pp. 101–124). Baltimore: Johns Hopkins University Press.
Weitz, R. (1987). The interview as legacy: A social scientist confronts AIDS. Hastings Center Report, 17(3), 21–23.
Wolf, D. A. (1984). Changes in household size and composition due to financial incentives. Journal of Human Resources, 19(1), 87–101.
Wolins, L. (1962). Responsibility for raw data. American Psychologist, 17, 657–658.
Zimbardo, P. G., Haney, C., Banks, W. C., & Jaffe, D. (1973). The mind is a formidable jailer: The Pirandellian prison. New York Times Magazine, 122(8), 38–60.

FURTHER READING
American Psychological Association. (1985). Standards for educational and psychological testing. Washington, DC: Author.
Bermant, G., Kelman, H. C., & Warwick, D. P. (1978). The ethics of social intervention. Washington, DC: Hemisphere.
Carroll, J. D., & Knerr, C. R. (1976, Fall). The APSA Confidentiality in Social Science Research Project: A final report. P.S., pp. 416–419.
Kimmel, A. J. (1988). Ethics and values in applied social research. Newbury Park, CA: Sage Publications.
Lee, R. M. (1993). Doing research on sensitive topics. Newbury Park, CA: Sage Publications.
Mathews, M. C. (1988). Strategic intervention in organizations: Resolving ethical dilemmas. Newbury Park, CA: Sage Publications.
Sieber, J. E. (1992). Planning ethically responsible research: A guide for students and internal review boards. Newbury Park, CA: Sage Publications.

David F. Gillespie, PhD, is professor, Washington University, George Warren Brown School of Social Work, One Brookings Drive, St. Louis, MO 63130.

For further information see
Agency-Based Research; Bioethical Issues; Epistemology; Ethics and Values; Intervention Research; Licensing, Regulation, and Certification; Patient Rights; Professional Liability and Malpractice; Qualitative Research; Research Overview; Survey Research.

Key Words
ethics
professional liability
social work research

Reader’s Guide
Ethics
The following entries contain information on this general topic:
Bioethical Issues
Civil Rights
End-of-Life Decisions
Ethical Issues in Research
Ethics and Values
Human Rights
NASW Code of Ethics (Appendix 1)
Patient Rights
Professional Conduct
Professional Liability and Malpractice