
The Ethics of Clinical Research

First published Fri Jan 30, 2009

Clinical research attempts to address a relatively straightforward and extremely important challenge: how do medical practitioners determine whether a potential new intervention represents an advance over current methods, whether it would avoid harms currently incurred, whether it would save lives currently lost? Clinicians may one day be able to answer these questions using computer models, thereby avoiding reliance on clinical research and the risks it entails. Until that day, clinical researchers test potential new medical interventions in the laboratory, and often in animals. While these methods can provide important information and, in the case of animal research, raise important ethical issues of their own, potential new interventions eventually must be tested in humans. Interventions which work miracles in test tubes and rats often leave humans untouched, or worse off.

Those who are the first to undergo a potential new medical intervention invariably face some, possibly serious, risks, no matter how much prior testing has occurred in the laboratory and in animals. One might attempt to locate these initial human evaluations, and the attendant risks, within the clinical setting, offering potential new interventions to patients who want to try them. This approach, which has the virtue of evaluating new interventions in the process of trying to help individual patients, poses enormous scientific and practical problems. On the practical side, who would be willing to manufacture a new intervention without first knowing whether it works? What dose should be used? How often should the new drug be taken? Most importantly, this approach might not yield reliable evidence that a new treatment is harmful until hundreds, perhaps thousands, of people have been harmed.

Clinical research is designed to address these concerns by systematically exposing a small group of individuals, sometimes very sick ones, to potential new treatments. These treatments may provide some benefit, but often they do nothing, or worse, make those who take them sicker. The process of evaluating new medical interventions also typically requires procedures, such as blood draws, lumbar punctures, and skin biopsies, which are necessary for scientific purposes but offer essentially no chance of clinical benefit. I will refer to research interventions and procedures which do not offer those who undergo them a compensating potential for clinical benefit as ‘non-beneficial’ research interventions and procedures. Clinical trials to evaluate potential new treatments often rely on numerous prior, non-beneficial studies. These might include phase 1 trials with normal volunteers to evaluate the toxicity of the treatment, and even more preliminary trials to characterize non-affected individuals and determine the pathophysiology of the disease under study.

By relying on non-beneficial procedures and studies, clinical research poses, in a practical and vital form, one of the most fundamental concerns in moral theory: when is it acceptable to expose some to risks of harm for the benefit of others? Most theorists assume that the answer to this question cannot be as simple as “whenever individuals consent.” Even assuming valid informed consent, something which, the data reveal, is decidedly easier to describe than to obtain, the question remains of when it is ethically acceptable for investigators to invite individuals to participate in clinical research and actively expose them to risks in that context for the benefit of others. The present entry focuses on this question and canvasses the most prominent attempts to answer it. It thus largely ignores the range of interesting and important ethical issues that arise in the conduct of clinical research: how it should be reviewed and approved, who may conduct it, how it should be conducted, and whether and under what circumstances it is acceptable to enroll in clinical research those who cannot give informed consent.


1. What is clinical research?

Human subjects research is research which involves humans, as opposed to animals, atoms, or asteroids, as the object of study. A study to evaluate whether subjects prefer 100 dollars or a 1% chance of 10,000 dollars (two options with the same expected value of 100 dollars) constitutes human subjects research. Clinical research, as discussed here, refers to the subset of human subjects research that focuses on improving human health and well-being, typically by identifying better methods to treat, cure or prevent illness. Defining the focus of clinical research as that of improving health and well-being by treating, curing and preventing illness is intended to bracket the tangential question of whether research on enhancements qualifies as clinical research. Such research has the potential to improve well-being, allowing us to live longer and better, without identifying methods to address illness.

While clinical medicine is enormously better than it was 100 or even 50 years ago, there remain many diseases against which current clinical medicine offers an inadequate response. To name just a few: many cancers remain incurable; chronic diseases, chief among them heart disease and stroke, kill millions each year; and there currently are no effective treatments for Alzheimer's disease. The social value of clinical research lies in its ability to collect information that might be useful for identifying improved methods to treat these conditions. Yet it is the rare clinical research study which definitively establishes that a particular method is effective and safe for treating, curing or preventing some illness. The success of specific research studies more commonly lies in gathering the information needed to inform future studies. Prior to establishing the efficacy of an experimental treatment for a given condition, researchers typically need to identify the cause of the condition, possible mechanisms for treating it, a safe and effective dose of the treatment in question, and ways of testing whether the drug is having an effect on the disease.

The future-oriented aspect of clinical research is worth emphasizing. The fundamental ethical concern raised by clinical research is whether and when it can be acceptable to expose some to risks and burdens for the benefit of others. The answer to this question depends crucially on the others in question, and on their relationship to those being exposed to the risks. It is one thing to expose a consenting adult to risks to save the health or life of an identified and present other, particularly when the two individuals are first-degree relatives. It is another thing, or seems to many to be another thing, to expose consenting individuals to risks to help unknown, unidentified, and possibly future others.

Almost no one objects to operating on a healthy, consenting adult to obtain a kidney that might save an ailing sibling, even though the operation poses some risk of serious harm to the donor. Greater concern is raised by attempts to take a kidney from a healthy, consenting adult and give it to an unidentified individual. Even greater ethical concern arises as the path from risk exposure to benefit becomes longer and more tenuous. Many clinical research studies expose subjects to risks in order to collect generalizable information which, if combined with the results of other, as yet non-existent studies, may eventually yield an intervention that benefits future patients, assuming the appropriate regulatory authorities approve it and some company or group chooses to manufacture it. The potential benefits of clinical research may thus be realized someday, but the risks and burdens are clear and present. Even research studies which offer a compensating potential for clinical benefit, for example those which involve a previously validated intervention, tend to rely on non-beneficial procedures: extra scans, additional visits, and blood draws.

While the ethics of clinical research focuses largely on identifying the conditions under which it is acceptable to expose some individuals to risks and burdens for the benefit of others, clinical research also raises the question of when it is acceptable to ask individuals to contribute to answering the scientific question posed by a given study (Jonas 1969). The frequent neglect of this issue may trace to an impoverished analysis of subjects' interests. Individuals undoubtedly have an interest in avoiding the kinds of physical harms they face in clinical research. But individuals' interests are also implicated, and sometimes thwarted, when they contribute to particular projects, activities and goals. The interests of an individual who fundamentally opposes cloning may be set back substantially if she contributes to a research study that identifies improved methods to clone human beings. A comprehensive analysis of the ethics of clinical research should recognize and protect these interests as well.

Attempts to determine when it is acceptable to conduct clinical research have been significantly influenced by its history: by how it has been conducted and, in particular, by how it has been misconducted. Significant abuses litter the history of clinical research (Lederer 1995; Beecher 1966). To understand the current state of the ethics of clinical research, it will be useful to consider these origins.

2. A Brief History

Modern clinical research may have begun on the 20th of May, 1747, aboard the HMS Salisbury. James Lind, a surgeon in the Royal Navy serving on the Salisbury, was acquainted, through both service and reading, with the costs scurvy was exacting on British sailors. Lind was particularly influenced by reports of Admiral Lord Anson's circumnavigatory expedition of 1740 (Anson 1748). Anson's crews suffered appalling losses to scurvy, including one ship which lost almost half its crew to the disease.

A number of common treatments for scurvy were in use at the time, including cider, elixir of vitriol, vinegar, and sea-water. Lind, skeptical of several of them, designed a study to test whether he was right. He chose 12 sailors from among the 30 or 40 of the Salisbury's crew who were suffering from scurvy at the time, and divided them into six groups of two. Lind assigned one of the prevailing treatments to each group, including two lucky subjects who received two oranges and one lemon each day. Within a week those two were nearly healthy again, while the health of the other subjects had declined significantly.

Although Lind observed a dramatic treatment effect from citrus rations, his findings were largely ignored for decades, leading to uncounted and unnecessary deaths and underscoring the importance of combining clinical research with clinical implementation. The Royal Navy did not adopt citrus rations until 1795 (Sutton 2003), at which point scurvy disappeared from its ships. The story of scurvy also tragically highlights the challenge, present to this day, of disseminating research results. In his classic account of life at sea, Two Years Before the Mast, Richard Henry Dana (1914) describes American sailors in 1835 still debating the proper treatment for scurvy, and dying for the lack of it.

Lind's experiment is regarded as perhaps the first modern clinical trial because he attempted to address one of the primary challenges facing those who evaluate medical treatments: how does one show that the comparative results of two or more treatments are a result of the treatments themselves, and not of the patients who received them, or of other differences in their environment or diet? How could Lind be confident that the improvement in the two patients was the result of the oranges and lemons, and not of the fact that he happened to give this particular treatment to the two patients who were going to get better anyway? Lind tried to address this question by beginning with patients who were as similar as possible. He carefully chose the 12 subjects for his experiment from a much larger pool of ailing sailors; he also tried to ensure that all 12 received the same rations each day, apart from the treatments provided as part of his study.

Lind's experiment, despite controlling for a number of factors, did not exclude the possibility that his own choices of which sailors got which treatment influenced the outcome. More recent experiments, including the first modern randomized controlled trial, of streptomycin for tuberculosis in 1948 (D'Arcy Hart 1999), address this concern by assigning treatments to patients using a random selection process. By randomly assigning patients to treatment groups, such studies ushered in the modern era of controlled clinical trials. And, by taking the choice of which treatment a given patient receives out of the hands of the treating clinician, these trials underscore and, some argue, exacerbate the ethical concerns raised by clinical research (Hellman and Hellman 1991).

Clinical research often is judged by the extent to which the treatment of research subjects diverges from standard clinical care (Miller & Weijer 2006; Rothman 2000). A foundational principle of clinical medicine is the importance of individual judgment: a physician who decided which treatments her patients receive by flipping a coin would be guilty of malpractice, yet a clinical investigator who uses essentially the same method is relying on the gold standard for ensuring the scientific validity of clinical trials. Strictly speaking, randomized clinical trials do not use truly random treatment assignment, for the simple reason that random assignment has the potential to yield significant differences between treatment groups (Albert and Borkowf 2002); at the extreme, a purely random process might assign all the subjects in a trial to the same treatment. Recognizing possibilities of this sort, randomized clinical trials instead rely on a number of complex allocation strategies, including block assignment, stratification and minimization (Spilker 1991), designed to make the treatment groups as (relevantly) similar as possible. While these processes are not random in the strict sense, they are random in the sense that matters here: they do not assign interventions based on a judgment of which intervention would be best for the individual participants who receive them. The best scientific methods, then, appear to undermine patients' medical care and thereby seem to sacrifice the interests of some, often sick and vulnerable, patients for the benefit of future patients.
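
To make the contrast concrete, here is a minimal sketch of permuted-block assignment, one of the allocation strategies just mentioned. The sketch is illustrative only, with hypothetical function names and defaults; real trials layer stratification and other refinements on top of it:

    import random

    def permuted_block_assignments(n_subjects, block_size=4, arms=("A", "B")):
        # Each block contains an equal number of slots for every arm, so
        # group sizes can never drift far apart, unlike assignment by
        # independent coin flips, which could in principle place every
        # subject in the same arm.
        assert block_size % len(arms) == 0, "block must divide evenly among arms"
        assignments = []
        while len(assignments) < n_subjects:
            block = list(arms) * (block_size // len(arms))
            random.shuffle(block)  # random order within the block only
            assignments.extend(block)
        return assignments[:n_subjects]

    # For example, permuted_block_assignments(10) might return
    # ['B', 'A', 'A', 'B', 'A', 'B', 'B', 'A', 'A', 'B']

The allocation is balanced rather than strictly random, yet nothing in it consults the medical interests of the individual who fills a given slot, which is the feature that raises the ethical concern described above.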

3. Guidelines

Perhaps the most prominent guidelines for clinical research, the Nuremberg Code, grew out of the court's judgment on the atrocities committed by Nazi doctors during World War II (Grodin & Annas 1996; Shuster 1997). The Nuremberg Code (1947) is often regarded as the first set of formal guidelines for clinical research, an ironic claim on two counts. First, there is some debate over whether the Nuremberg Code was intended to apply generally to clinical research or whether, as a legal ruling in a specific trial, it was intended to address only the cases before the court (Katz 1996). Second, the Nuremberg Code is not in fact the first set of research guidelines; the Germans themselves had developed systematic guidelines in 1931 (Vollmann & Winau 1996). These guidelines were still legally in force at the time of the Nazi atrocities and clearly prohibited a great deal of what the Nazi doctors did.

The Nuremberg Code was largely ignored by practicing researchers and, by the end of the 1950s, a wide consensus had developed that it was inadequate to the ethics of clinical research. Representatives of the World Medical Association began meeting in the early 1960s to develop guidelines which would become known as the Declaration of Helsinki and would, it was hoped, address the perceived shortcomings of the Nuremberg Code (Goodyear, Krleza-Jeric, & Lemmens 2007). Specifically, the Nuremberg Code did not include a requirement that clinical research receive independent ethics review and approval. In addition, the first and longest principle of the Nuremberg Code states that informed consent is “essential” to ethical clinical research (Nuremberg Military Tribunal 1947). While this requirement seems at first plausible, it appears to preclude clinical research with individuals who cannot consent.

One could simply insist that informed consent of the subject is necessary to ethical clinical research and accept the opportunity costs thus incurred. The framers of the Declaration of Helsinki hoped to avoid these costs. They recognized that insisting on informed consent as a necessary condition for all clinical research would preclude a good deal of research designed to find better ways to treat dementia and conditions affecting children, as well as research in emergency situations. Regarding consent as necessary precludes such research even when it poses only minimal risks or offers subjects a compensating potential for important clinical benefit.

The Declaration of Helsinki (World Medical Organization 1996) allows individuals who cannot consent to be enrolled in clinical research based on the permission of the subject's representative. The U.S. federal regulations governing clinical research take the same approach. These regulations are not laws in the strict sense of being passed by Congress and applying to all research conducted on U.S. soil. Instead, they are administrative laws which attach to clinical research at its beginning and its end: research that is conducted using U.S. federal monies, for instance research funded by the NIH or involving NIH researchers, must follow the U.S. regulations (Department of Health and Human Services 2005), and research submitted for approval to the U.S. FDA must have been conducted according to the FDA regulations, which, with a few exceptions, are essentially the same. Although many countries now have their own national regulations (Brody 1998), the U.S. regulations continue to exert enormous influence around the world because so much clinical research is conducted using U.S. federal money and U.S. federal investigators.

The U.S. regulations, like many others, place no clear limits on the risks to which competent and consenting adults may be exposed. In contrast, strict limits are placed on the level of research risk to which those unable to consent may be exposed, particularly children. In the case of pediatric research, the standard process for review and approval is limited to studies that offer a ‘prospect of direct benefit’ and research that poses minimal risk or a minor increase over minimal risk. Studies that do not qualify in one of these categories must be reviewed by an expert panel and approved by a high government official (the Secretary of the relevant department). While these regulations allow for important flexibility, they do not, at least in principle, establish a ceiling on the risks to which pediatric research subjects may be exposed for the benefit of others.
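
The pediatric review pathway just described can be schematized as follows. This is a hypothetical simplification for illustration only (cf. 45 CFR 46.404–407); the actual regulations attach further conditions to each category, which are omitted here:

    def pediatric_review_path(minimal_risk,
                              minor_increase_over_minimal,
                              prospect_of_direct_benefit):
        # Standard review suffices for the three categories named in the
        # text; anything else is escalated beyond the local review board.
        if minimal_risk or prospect_of_direct_benefit or minor_increase_over_minimal:
            return "standard review and approval"
        return "expert panel review and approval by a high government official"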

4. Clinical Research and Clinical Care

The history of clinical research underscores and reinforces the close relationship between clinical research and clinical care. Clinical research often is conducted with patients and often is conducted by physicians. As mentioned, the ethics of clinical research tends to focus on whether and to what extent the treatment of research subjects diverges from the norms of clinical care. One school of thought attempts to justify clinical research by arguing that it should not diverge from the norms of clinical care in the sense that subjects should not be denied any beneficial treatment that is available in the clinical setting and should not be exposed to any risks that are not present in the clinical setting.

This view generally is defended on the basis of one of two arguments. Some proponents (Rothman 2000) argue that it is implied by the kind of treatment that patients, understood as individuals who have a condition or illness needing treatment, are owed: such individuals are owed treatment that promotes, or at least is consistent with, their medical interests. Others (Miller & Weijer 2006) argue that the norms of clinical research derive largely from the obligations that bear on physicians and other clinicians. These commentators argue that it is unacceptable for a physician to participate in, or even support the participation of one of her patients in, a clinical trial unless that trial is consistent with the patient's medical interests. To do less is to provide substandard medical treatment and to violate one's obligations as a clinician. Critics of this approach distinguish between the ethics of clinical research and the ethics of clinical care, arguing that it is inappropriate to assume that investigators are subject to the claims and obligations which apply to physicians, despite the fact that those who conduct clinical research often are physicians (Miller and Brody 2007).

The claim that the treatment of research subjects must be consistent with good clinical care has been applied most prominently to the ethics of randomized clinical trials (Hellman & Hellman 1991). Randomized trials determine which treatment a given research subject receives based on a random process, not based on clinical judgment of which treatment would be best for that patient. Because this aspect of clinical research represents a clear departure from the practice of clinical medicine it appears to sacrifice the interests of subjects in order to collect scientific information.

Many commentators (Freedman 1987) argue that randomization is acceptable only when the study in question satisfies what has come to be known as ‘clinical equipoise.’ Clinical equipoise obtains when, for the population of patients from which subjects will be selected, the available evidence does not provide reason to favor one of the treatments being used over the others. In addition, it must be the case that there are no treatments available outside the trial that are better than those used in the trial. Satisfaction of these conditions seems to imply that the interests of research subjects will not be undermined in the service of collecting scientific information. If the available data do not favor any of the treatments being used, randomizing subjects seems as good a process as any for choosing which treatment they receive.

Critics argue that even when clinical equipoise obtains for the population of patients, the specific circumstances of individual patients within that population may imply that one of the treatments under investigation is better for them (Gifford 2007). A specific patient may have reduced liver function which places her at greater risk of harm if she receives the treatment that is metabolized by the liver. And some patients may have personal preferences which incline them toward one treatment rather than another (e.g., they may prefer a one-time riskier procedure to multiple, lower-risk procedures which pose the same collective risk). Current debate focuses on whether randomized clinical trials can take these possibilities into account in a way that is consistent with the norms of clinical medicine.

Even if clinical equipoise can be used to justify at least some randomized clinical trials, a significant problem remains: clinical equipoise cannot be used to justify all of the important types of clinical research that are regularly undertaken. The primary challenge for the claim that clinical research must be consistent with the norms of clinical medicine is that certain studies and procedures which are crucial to the identification and development of improved methods for protecting and advancing health and well-being are clearly inconsistent with individual subjects' medical interests. This concern arises, for example, for the non-beneficial studies needed to determine what dose of a potential new treatment to use.

This evaluation requires assessment of what dose is both safe for humans and likely to be effective. To make this determination, investigators often conduct brief studies in which a few subjects receive single doses of the treatment. These studies, called pharmacokinetic and pharmacodynamic studies, offer subjects essentially no chance of medical benefit and pose at least some risks, and to that extent are inconsistent with the subjects' medical interests. Even when investigators are in a position to conduct a study that satisfies clinical equipoise, they typically need to include some non-beneficial procedures, such as additional blood draws, to evaluate the drugs being tested. Such studies may be in subjects' medical interests in the sense that the overall risk-benefit ratio the study offers them is at least as favorable as the available alternatives. However, this type of study-level evaluation masks the fact that the study includes individual interventions which are contrary to the subjects' medical interests, and contrary to the norms of clinical medicine.

Some commentators attempt to justify these studies and procedures by distinguishing between ‘therapeutic’ and ‘non-therapeutic’ research. Proponents claim that the demand of consistency with subjects' medical interests applies only to therapeutic research; non-therapeutic research studies and procedures may diverge from these norms to a certain extent, provided subjects' medical interests are not significantly compromised. The distinction between therapeutic and non-therapeutic research is sometimes drawn in terms of the design of the studies in question, sometimes in terms of the intentions of the investigators: studies designed to benefit subjects, or conducted by investigators who intend to benefit subjects, are therapeutic; studies designed to collect generalizable knowledge, or conducted by investigators who intend to do so, are non-therapeutic.

The problem with the distinction so defined is that research itself typically is defined as a practice designed to collect generalizable knowledge, conducted by investigators who intend to achieve this end (Levine 1988). It follows that all research qualifies as non-therapeutic. Conversely, most investigators intend to benefit their subjects in some way: perhaps they design the study so that it provides subjects with clinically useful findings, or they provide minor care not required for research purposes, or referrals to colleagues. Even if one can make good on the distinction in theory, these practices appear to render it irrelevant to the practice of clinical research. More importantly, it is not clear why investigators' responsibilities to patients, or patients' claims on investigators, should vary as a function of this distinction. It is not clear how one might defend the claim that investigators are allowed to expose patients to some risks for the benefit of others, but only in the context of research that is not designed to benefit the subjects.

To take one possibility, it is not clear that this view can be defended by appeal to physicians' role responsibilities. A prima facie plausible view holds that physicians' role responsibilities apply to all encounters between physicians and patients who need medical treatment. This view would imply that physicians may not compromise patients' medical interests when conducting therapeutic studies, but it also seems to prohibit non-therapeutic research procedures with patients. Alternatively, one might argue that physicians' role responsibilities apply only in the context of clinical care and so do not apply in the context of clinical research at all. This articulation yields a more plausible view, but does not support the use of the therapeutic/non-therapeutic distinction. It provides no reason to think that physicians' obligations differ based on the type of research in question.

The claim that clinical research must satisfy the norms of clinical medicine does have this strong virtue: it provides a clear method to protect individual research subjects and to reassure the public that they are being so protected. If research subjects must be treated consistently with their medical interests, we can be reasonably confident that improvements in clinical medicine will not be won at the expense of exploiting them. Most accounts of the ethics of clinical research now recognize the limitations of this approach and struggle to ensure that research subjects are not exposed to excessive risks in the context of research (Emanuel, Wendler, & Grady 2000; CIOMS 2002). Dismissal of the distinction between therapeutic and non-therapeutic research thus yields an increase in both conceptual clarity and concern regarding the potential for exploitation.

Clinical researchers, trained first as physicians to act in the best interests of the patient in front of them, often struggle with the process of exposing some patients to risky procedures for the benefit of others. It is one thing for philosophers to insist, no matter how accurately, that research subjects are not patients and need not be treated according to the norms of clinical medicine. It is another thing for clinical researchers to regard research subjects who are suffering from disease and illness as anything other than patients. These clinical instincts, while understandable and laudable, have the potential to obscure the true nature of clinical research, as investigators and subjects alike try to convince themselves that clinical research involves nothing more than the provision of clinical care.

5. Minimal Risks

The fundamental ethical concern raised by clinical research is that research subjects are exposed to risks for the benefit of others, raising a concern of exploitation. Understood in this way, and recognizing the need to conduct non-beneficial research procedures and studies, many commentators and guidelines maintain that clinical research is acceptable provided the net risks to which subjects are exposed are sufficiently low. The challenge, currently faced by many in clinical research, is to identify a standard for what constitutes a sufficiently low risk in this context, and a reliable way to implement it.

Some argue that the risks of clinical research qualify as sufficiently low when they are ‘negligible’, understood as risks that do not pose any chance of serious harm (Nicholson 1986). Researchers who ask children a few questions for research purposes may expose them to risks no more worrisome than that of being mildly upset for a few minutes. It seems not implausible that exploitation requires some risk or realization of serious harm, implying that this study at least raises no concerns regarding exploitation. Or one might argue that the possible harms posed by this study are so insignificant that the potential for exploitation does not constitute a serious ethical concern.

Despite the theoretical plausibility of these views, very few actual studies satisfy the negligible risk standard. Even routine procedures that are widely accepted in pediatric research, such as single blood draws, pose some, typically very low, risk of more than negligible harm. Others (Kopelman 2000; Resnik 2005) define risks as sufficiently low or ‘minimal’ when they do not exceed the risks individuals face during routine examinations. The concern with this standard is that the risks of routine medical procedures for healthy individuals are extremely low, with the result that the standard prohibits a good deal of clinical research, including studies that seem intuitively acceptable. This approach faces the additional problem that, as the techniques of clinical medicine become safer and less invasive, an increasing number of procedures used in non-beneficial research would be deemed excessively risky.

Many guidelines (U.S. Department of Health and Human Services 2005; Australian National Health and Medical Research Council 1999) and commentators take the view that non-beneficial research is ethically acceptable as long as the risks do not exceed those subjects face in daily life. The strength of this claim is supposed to derive from the fact that such research does not increase the risks to which subjects are exposed. However, its intuitive appeal often traces to a common attitude regarding the risks of daily life. Many of those involved in clinical research implicitly assume that the minimal risk standard is essentially equivalent to the negligible risk standard: if the risks of research are no greater than the risks individuals face in daily life, the thought goes, then the research poses no risk of serious harm. As an attitude toward many of the risks we face in daily life, this view makes sense. We could not get through our daily lives if we were conscious of all the risks we face; crossing the street poses more risks than one can catalog, much less process readily. When these risks are sufficiently low, psychologically healthy individuals place them in the background, so to speak, ignoring them unless the circumstances provide reason for special concern (e.g., one hears a siren, or sees a large gap in the pavement).

Paul Ramsey reports that, during the National Commission's deliberations on pediatric research, members often used the terms ‘minimal’ and ‘negligible’ risk in a way that seemed to imply that they were willing to allow minimal risk research, even with children, on the grounds that such research posed no risk of serious harm (Ramsey 1978). The members then went on to argue that an additional ethical requirement for such research is a guarantee of compensation for any serious research injuries. This approach to minimal risk pediatric research nicely highlights the somewhat confused attitudes we often have toward risks, especially the risks of daily life.

We go about our daily lives as though very low risks are not going to occur, effectively treating low probability events as zero probability events. To this extent, I suspect, we are not Bayesians about the risks of daily life. We treat some possible harms as impossible for the purposes of getting through the day. This attitude, crucial to living our lives, does not imply that there are no serious risks in daily life. The fact that our attitude toward the risks of everyday life is justified by its ability to help us to get through the day undermines its ability to provide an ethical justification for exposing research subjects to the same risks in the context of non-beneficial research (Ross & Nelson 2006).

First, the extent to which we ignore the risks of daily life is not a fully rational process. In many cases, our attitude regarding risks is a function of features of the situation that are not correlated directly with the risk level, such as our perceived level of control and our familiarity with the activity (Tversky & Kahneman 1974, 1981; Slovic 1987; Weinstein 1989). Second, to the extent that the process of ignoring some risks is rational, it is a process of determining which risks are worth our attention. Some risks are so low that they are not worth attending to: consideration of them would cost us more than the expected value of being aware of them in the first place.

To some extent, then, our attitudes in this regard are based on a rational cost/benefit analysis. To that extent, they do not provide an ethical argument for exposing research subjects to risks for the benefit of others. The fact that the costs to an individual of paying attention to a given risk in daily life are greater than the benefits to that individual does not seem to have any relevance for what risks we may expose her to for the benefit of others. Finally, there is a chance of serious harm from many of the activities of daily life. This reveals that the ‘risks of daily life’ standard does not preclude the chance of some subjects experiencing serious harm. Indeed, one could put the point much more strongly: probabilities being what they are, the risks of daily life standard implies that if we conduct enough minimal risk research, eventually a few subjects will die and scores will suffer permanent disability.
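
To see roughly why, with figures that are purely hypothetical: if each subject faces a small probability p of serious harm, then across N independently exposed subjects the probability that at least one suffers serious harm is

    P(\text{at least one serious harm}) = 1 - (1 - p)^{N} \approx 1 - e^{-Np},

so a daily-life-sized risk of p = 1/100,000 per subject implies, across N = 500,000 subjects, a greater than 99% chance that at least one subject is seriously harmed.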

As suggested above, a more plausible line of argument would be to defend clinical research that poses minimal risks on the grounds that it does not increase the risks to which subjects are exposed. It seems plausible to assume that at any given time an individual will either be participating in research or involved in the activities of daily life. But, by assumption, the risks of the two activities are essentially equivalent, implying that enrollment in the study, as opposed to allowing the subject to continue the activities of daily life, does not increase the risks to which she is exposed. The problem with this argument is that the risks of research often are additive rather than substitutive.

Participation in a study might require the subject to drive to the clinic for a research visit. The present defense succeeds to the extent that this trip replaces another trip in the car, or some similarly risky activity in which the subject would otherwise have been involved. In practice, this often is not the case: the subject may simply put off the car trip to the mall until after the research visit, in which case her risk of serious injury from car trips may be doubled as a result of her participation in research. Moreover, we accept many risks in daily life because the relevant activities offer those who pursue them a chance of personal benefit. We allow children to take the bus because we assume that the benefits of receiving an education justify the risks. The fact that we accept these risks given the potential benefits provides no reason to think that the same risks, or even the same level of risk, would be acceptable in the context of an activity, such as a non-beneficial research study, which offers no chance of medical benefit.

6. A Libertarian Analysis

One possibility, albeit one rarely pursued in the research ethics literature, would be to adopt a libertarian justification for enrolling individuals in non-beneficial clinical research. On this approach, investigators would be allowed to conduct any research they wanted provided they obtained the free and informed consent of the subjects they propose to enroll. It is worth noting that essentially all current regulations are inconsistent with this approach. Most regulations, beginning with the first Declaration of Helsinki (World Medical Organization 1996), allow investigators to conduct research on human subjects only when it has been approved by an independent group charged with ensuring that the study is ethically acceptable. Most regulations place further limitations on the types of research that independent ethics committees may approve: the committees must find that the research has important social value and that the risks have been minimized, restrictions which limit the types of research in which even competent adults may agree to participate.

One might regard these limitations as betraying the paternalism embedded in most approaches to the ethics of clinical research. Although the charge of paternalism often carries some degree of condemnation, there is a strong history of what is regarded as appropriate paternalism in clinical research (Miller & Wertheimer 2007); this too may have evolved from clinical medicine. Alternatively, defenders may regard these limitations as more akin to soft paternalism (Feinberg 1986; see also the entry on paternalism). There are good reasons, and significant empirical data, to question how often the preconditions for the libertarian approach are realized in practice. Research subjects, sometimes because they are ill and vulnerable, other times for unclear reasons, often fail to understand clinical research sufficiently to make their own informed decisions regarding whether to enroll (Flory and Emanuel 2004). Thus, one may regard many of the regulations on clinical research not as inconsistent with the libertarian ideal, but as starting from that ideal and recognizing that potential research subjects often fail to attain it.

To briefly illustrate the challenges faced by those who hope to address the moral concerns posed by clinical research by obtaining valid informed consent: there is wide agreement that valid consent for enrollment in randomized clinical trials requires individuals to understand the process of randomization, that is, to understand that the treatment they will receive, if they participate in the study, will be determined by a process which does not take into account which of the treatments is best for them (Kupst 2003). The initial assumption that research participants can understand this fact is belied by an impressive and increasing wealth of data revealing that many, perhaps most, individuals who participate in clinical research fail to understand randomization (Snowden 1997; Featherstone and Donovan 2002; Appelbaum 2004).

Even if one accepts, for the sake of argument if nothing else, that potential research participants are in a position to provide free and sufficiently informed consent, it does not follow that they may be enrolled in whatever studies they choose, free from the meddling of ethics review committees and the limitations of research regulations. This conclusion follows from a general principle which holds that the conditions on what one individual may do to another are not exhausted by what the second individual consents to. Perhaps some individuals may choose for themselves to be treated with a lack of respect, even tortured. But, it does not follow that it is acceptable for me or you to treat them accordingly. As independent moral agents we need reason to believe that the treatment in question is appropriate to carry out, and this evaluation concerns, in typical cases, more than the fact that the individual consented to it.

Understood in this way, many of the limitations on the kinds of research to which competent adults may consent are not justified, or at least not solely justified, on paternalistic grounds. Instead, these limitations point to a crucial and often overlooked concern in research ethics. The regulations for clinical research often are characterized as protecting the subjects of research from harm. Although this undoubtedly is an important and perhaps primary function of the regulations, they also have an important role in limiting the extent to which investigators harm research subjects, and limiting the extent to which society supports and benefits from a process which exploits others. It is not just that research subjects should not be exposed to risk of harm without compelling reason. Investigators should not expose them to such risks without compelling reason, and society should not support and benefit from such a project either.

This aspect of the ethics of clinical research has strong connections with the view that the obligations of clinicians restrict what sort of clinical research they may conduct. On that view, it is the fact that one is a physician, and is obligated to promote the best interests of those with whom one interacts professionally, which determines what one is allowed to do to subjects. This connection highlights the pressing questions that arise once we attempt to move beyond the view that clinical research is subject to the norms of clinical medicine. There is a certain plausibility to the claim that a researcher is not acting as a clinician and so may not be subject to the obligations that bear on clinicians. Or perhaps we might say that the researcher/subject dyad is distinct from the physician/patient dyad and is not necessarily subject to the same norms. But once one decides that we need an account of the ethics of clinical research, as distinct from the ethics of clinical care, one is left with the question of what limitations apply to what researchers may do to research subjects. It seems clear that researchers may not expose research subjects to risks for no good reason, and also clear that this claim applies even to those who provide free and informed consent. It remains unclear what constitutes a sufficient justification in this context.

The libertarian resolution to the challenge of research ethics faces the additional problem that it provides no justification for conducting research with those who are not able to provide informed consent. Or perhaps it implies, consistent with the first principle of the Nuremberg Code, that such research necessarily is unethical. This plausible and tempting position commits one to the view that research with children, research in many emergency situations, and research with the demented elderly all are ethically unacceptable. One could consistently maintain such a view but, as mentioned previously, the social costs of adopting it are great. It is estimated, for example, that approximately 70% of medications provided to children have not been tested in children, even for basic safety and efficacy (Roberts, Rodriguez, Murphy, & Crescenzi 2003; Field & Behrman 2004; Caldwell, Murphy, Butow, & Craig 2004). Absent clinical research with children, pediatricians will be forced to continue to provide sometimes inappropriate treatment, leading to significant harms that could have been avoided by pursuing clinical research to identify better approaches. In the next section, we shall return to the fact that the failure to conduct clinical research almost certainly would lead to significantly more harm than it avoids.

7. Goals and Interests

In one of the most influential papers in the history of research ethics, Hans Jonas (1969) argues that the progress clinical research offers is normatively optional, whereas the need to protect individuals from the harms to which clinical research exposes them is mandatory. He writes: “unless the present state is intolerable, the melioristic goal [of biomedical research] is in a sense gratuitous, and this is not only from the vantage point of the present. Our descendants have a right to be left an unplundered planet; they do not have a right to new miracle cures. We have sinned against them if by our doing, we have destroyed their inheritance not if by the time they come around arthritis has not yet been conquered (unless by sheer neglect).”

This view does not imply that clinical research is necessarily unethical, but the conditions on when it may be conducted are very strict. The argument may seem plausible to the extent that one regards, as Jonas does, the benefits of clinical research as ones that make an already acceptable state of life even better. The example of arthritis cited by Jonas illustrates this view: curing arthritis, like curing dyspepsia, baldness, and the minor aches and pains of living and aging, would be nice, but seems to address no profound problem in our lives. If this were all clinical research had to offer, we should indeed be reluctant to accept many risks in order to achieve its goals. We should not, in particular, take the chance of wronging individuals, or exploiting them, to realize these goals.

This argument makes sense to the extent that one regards the status quo as acceptable. Yet, without further argument, it is not clear why one should accept this view; it seems almost certain that those suffering from serious illness that might be addressed by future research will not. On a first pass, then, one might understand Jonas as arguing that the present state of affairs happens to involve sufficiently good medicine and adequately flourishing lives that the needs which could now be addressed by additional clinical research are not of sufficient importance to justify the risks of conducting it. It might have been the case, at some point in the past, that life was sufficiently brutish and short to justify running the risk of exploiting research subjects in the process of identifying, through clinical research, ways to improve the human lot. But we have advanced, in part thanks to the conduct of clinical research, well beyond that point.

This reading need not interpret Jonas as ignoring the fact that there remain ills to be cured. Instead, he might be arguing that these ills, while real and unfortunate, are not of sufficient gravity, or perhaps prevalence, to justify the risks of conducting clinical research. This reading of Jonas is coherent, but attributes to him an analysis of such complexity as to undermine its plausibility. On this reading, Jonas' view is understood as the conclusion of a risk-benefit analysis which compares the potential gains of clinical research to the current burden of disease, now, or at the time at which he was writing, and also at some point in the past. The fact that this calculation would be immensely difficult undermines the suggestion that it is the basis for Jonas' position, particularly given that he never considers the kinds of data that would be needed to carry such an analysis through.

A more plausible reading would be to interpret Jonas as arguing from a version of the active-passive distinction. It is often claimed that there is a profound moral difference between actively causing harm and merely allowing harm to occur, between killing someone and allowing them to die. Jonas often seems to appeal to this distinction when evaluating the ethics of clinical research. The idea is that conducting clinical research involves investigators actively exposing individuals to risks of harm and, when those harms are realized, actively harming them. The investigator who injects a subject with an experimental medication in the context of a non-beneficial study actively exposes the individual to risks for the benefit of others and actively harms, perhaps even kills, those who suffer harm as a result. And, to the extent that clinical research is conducted in the name of, and for the benefit of, society in general, one can say without too much difficulty that society is complicit in causing these harms. Not conducting clinical research, in contrast, involves our allowing individuals to be subject to diseases that we might otherwise have been able to avoid or cure. And this situation, albeit tragic and unfortunate, has the virtue of not involving clear moral wrongdoing.

The problem with the argument at this point is that the benefits of clinical research often involve finding safer ways to treat disease. The benefits of this type of clinical research, to the extent they are realized, involve clinicians being able to provide less harmful, less toxic medications to patients. Put differently, many types of clinical research offer the potential to identify medical treatments which harm patients less than current ones. This is not an idle goal. One study found that the incidence of serious adverse events from the proper use of medications (i.e., excluding such things as errors in drug administration, noncompliance, overdose, and drug abuse) in hospitalized patients was 6.7%. The same study, using data from 1994, concludes that the approved and properly prescribed use of medications is likely the 5th leading cause of death in the U.S. (Lazarou, Pomeranz, & Corey 1998).

The normative calculus, then, is significantly more complicated than this reading of Jonas suggests. The question is not whether it is permissible to risk harming some individuals in order to make other individuals slightly better off. Instead, we have to decide how to trade off the possibility of clinicians exposing patients to increased risks of harm in the process of treating them against clinical researchers exposing subjects to risks of harm in the process of trying to identify improved methods to treat others. This is not to say that there is no normative difference between these two types of harm, only that the difference is not accurately described as that between harming individuals and improving their lot beyond some already acceptable status quo. It is not even a difference between harming some individuals and allowing other individuals to suffer harm. The argument that needs to be made is that harming individuals in the process of conducting clinical research potentially involves a significant moral wrong not present when clinicians harm patients in the process of treating them.

Jonas' primary concern is that, by exposing subjects to risks of harm, the process of conducting clinical research involves the threat of exploitation of a particular kind. It runs the risk of investigators treating persons as things, devoid of any interests of their own. The worry here is not so much that investigators and subjects enter together into the shared activity of clinical research with different, perhaps even conflicting goals. The concern is rather that, in the process of conducting clinical research, investigators treat subjects as if they had no goals at all or, perhaps, that any goals they might have are normatively irrelevant.

In Jonas' view, this concern can be addressed, and the process of experimenting on some to benefit others made ethically acceptable, only when the research subjects share the goals of the research study. The goals must, to some extent, be their own goals, so that, in facing research risks, subjects are working to further their own interests. In this way, ethically appropriate research, on Jonas' view, is marked by “appropriation of the research purpose into the person's own scheme of ends” (Jonas 1969). And, assuming that it is in one's interests to achieve one's goals, at least one's proper goals, it follows that, by participating in research, subjects act in their own interests, despite being exposed to risky procedures performed to collect information to benefit others. One might add the further condition that there must be some appropriate proportionality between the risks to which the individuals are exposed and the extent to which furthering, pursuing, and perhaps attaining these goals advances their own interests.

Jonas claims in some passages that research subjects, at least those with an illness, can share the goals of a clinical research study only when the subjects have the condition or illness under study (Jonas 1969). These passages reveal something of the account of human interests on which Jonas' arguments rely. On standard preference satisfaction accounts of human interests, what is in a given individual's interests depends on what the individual happens to want or prefer, or the goals the individual happens to endorse, or the goals the individual would endorse in some idealized state scrubbed clean of the delusions, misconceptions and confusion which inform our quotidian preferences (Griffin 1986).

On this view of interests, the question of whether an individual shares the goals of a clinical research study is largely immune to a priori analysis; one needs to ask her, or perhaps study her behavior over an extended period of time. Jonas, in contrast, seems to regard the question, at least to a certain extent, as amenable to conceptual analysis. His general view seems to be that there are objective conditions under which individuals can share the goals of a given research study: they can endorse the cause of curing, or at least finding treatments for, Alzheimer's disease only if they suffer from the disease themselves. One possible objection would be to argue that there are many reasons why an individual might endorse the goals of a given study apart from having the disease oneself. One might have family members with the disease, or co-religionists, or have adopted improved treatment of the disease as an important personal goal.

The larger question here is whether subjects' endorsing the goals of a clinical research study is a necessary condition of its acceptability. Recent commentators and guidelines rarely, if ever, adopt this condition, although at least some of them might be assuming that the requirement to obtain free and informed consent will ensure its satisfaction. It might be assumed, that is, that individuals will enroll in research only when they share the goals of the study in question.

Jonas was cognizant of the extent to which the normative concerns raised by clinical research are not exhausted by the risks to which subjects are exposed, but also include the extent to which investigators, and by implication society, are the agents of the risk exposure. For this reason, he recognized that the libertarian response is inadequate, even with respect to competent adults who truly understand. Finally, to the extent that Jonas' claims rely on an objective account of human interests, one may wonder whether he adopts an overly restrictive one. Why should we think, on an objective account, that individuals have an interest in contributing to the goals of a given study only when they have the disease it addresses? Moreover, although I will not pursue the point here, appeal to an objective account of human interests raises the possibility that one might defend the process of exposing research subjects to risks for the benefit of others on the grounds that contributing to valuable projects, including presumably some clinical research studies, is objectively in (most) individuals' interests.

8. Contract theory

A few commentators (Caplan 1984; Harris 2005; Heyd 1996) have considered the possibility of justifying the exposure of research subjects to risks for the benefit of others on the grounds that there is an obligation to participate. One might try to ground this obligation in the fact that current individuals have benefited from clinical research conducted on individuals in the past. At a minimum, all individuals who have access to medical care have benefited from the efforts of previous research subjects, in the form of effective vaccines and better medical treatments.

Current participation in clinical research, however, typically benefits future patients. If we incur an obligation for the benefits we have received from previous research studies, we presumably are obligated to the patients who participated in those studies, an obligation we cannot discharge by participating in current studies. This approach also provides no way to justify the very first clinical trials, such as Lind's, which of necessity enrolled subjects who had never benefited from previous clinical research.

Alternatively, one might argue that the obligation does not trace to benefits individuals have in fact received from the efforts of previous research participants. Rather, the obligation is owed to the overall social system of which clinical research is a part (Brock 1994). For example, one might argue that individuals acquire this obligation as a result of being raised within a cooperative scheme or society: we are obligated to do our part within the scheme because of the many benefits we have enjoyed as a result of being born into it.

The first challenge for this view is to explain why the mere enjoyment of benefits, without some prospective agreement to respond in kind, obligates individuals to help others. Presumably, your doing a nice thing for me yesterday, without my knowledge or invitation, does not obligate me to do you a good turn today. This concern seems even greater with respect to pediatric research. Children certainly benefit from previous research studies, but typically do so unknowingly and often involuntarily. The example of pediatric research makes the further point that justification of non-beneficial research on straightforward contractualist grounds will be difficult at best. Contract theories have difficulties with those groups, such as children, who do not accept in any meaningful way the benefits of the social system under which they are living (Gauthier 1990).

In a Rawlsian vein, one might try to establish an obligation to participate in non-beneficial research based on the choices individuals would make regarding the structure of society from a position of ignorance regarding their own place within that society, from behind a veil of ignorance (Rawls 1999). To make this argument, one would have to modify the Rawlsian apparatus in several respects. In particular, the knowledge that one is currently living could well bias one's decision against the conduct of clinical research: those who know they are alive at the time the decision is made have already reaped many of the benefits they will ever receive from the conduct of clinical research, and so stand to gain relatively little from agreeing to bear its future burdens.

To avoid these biases, we might stretch the veil of ignorance to obscure the generation, past, present, or future, to which one belongs (Brock 1994). Under a veil so stretched, individuals might choose to participate in clinical research, including non-beneficial research, so long as the benefits of the practice exceed its overall burdens. One could then argue that justice as fairness gives all individuals an obligation to participate in clinical research when their turn comes. This approach seems to have the advantage of explaining why we can expose even children to some risks for the benefit of others, and why parents may give permission for their children to participate in such research. The argument also seems to imply not simply that clinical research is acceptable, but that individuals, adults included, are obligated to participate in it, although for practical reasons we might refrain from forcing them to do so.
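
One crude way to model the choice behind the stretched veil, offered purely as an illustrative sketch (the probabilities and symbols here are mine, not drawn from the literature), is in terms of expected benefits and burdens. Suppose each person, ignorant of her generation, faces probability p of being called on to serve as a research subject, at expected burden b, and probability q of benefiting from the resulting medical advances, at expected benefit B. Self-interested choosers behind the stretched veil would then endorse the practice just in case

\[ q \cdot B > p \cdot b, \]

a condition that a practice of widespread, comparatively low-risk research with broadly shared medical payoffs would plausibly satisfy.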

Several problems with this approach would need to be addressed. First, Rawlsian arguments typically are used to determine the basic structure of society, that is, to determine a fair arrangement of its basic institutions (Rawls 1999). If the structure of society meets these conditions, members cannot complain that the resulting distribution of benefits and burdens is unfair. Yet, even when the structure of society meets the conditions for fairness, it does not follow that individuals are obligated to participate in the society so structured. Competent adults can decide to leave a society that meets these conditions (whether they have any better places to go is another question). The right of exit suggests that the fairness of the system does not generate an obligation to participate in it, but rather defends the system against the charge that it treats some participants unfairly relative to others. At most, then, the present argument can show that it is not unfair to enroll a given individual in a research study, and that such an arrangement would be reasonable for all individuals, including those who are unable to consent; it does not establish an obligation to participate.

Second, it is important to ask on what grounds individuals behind the veil of ignorance make their decisions. In particular, are these decisions constrained or guided by moral considerations (Dworkin 1989; Stark 2000)? An obvious response is to think that the decisions would be so constrained; after all, we are asking what the ethical approach or policy with regard to clinical research is. The problem, then, is that the answer we get may depend significantly on which ethical constraints are built into the choice situation, rendering the approach question-begging. If we include the oft-endorsed constraint that it is unethical, even for a good cause, to expose to risks those who cannot consent, the policy chosen from behind the veil of ignorance will be one that prohibits at least non-beneficial pediatric research, as well as non-beneficial research with incompetent adults.

Proponents might avoid this dilemma by assuming that individuals behind the veil of ignorance make their decisions purely on the basis of self-interest, unconstrained by moral limits or considerations. Presumably, many different systems could satisfy this requirement; in particular, the system that produces the greatest amount of benefit overall may well be one that we regard as unethical. Many take the view that clinical research studies which offer no potential benefit to subjects and pose a very high risk of death are unethical, independent of the social value to be gained from the study. Even if a study which deliberately infects a few subjects with HIV had the potential to identify a cure for AIDS, and thus might well offer a favorable overall cost-benefit ratio, it would be unethical.

Rawls famously argued that rational, self-interested individuals behind the veil of ignorance would adopt a maximin strategy: ensuring that those who occupy the worst position in the future society are as well off as possible. Yet even accepting the maximin approach, and bracketing the challenges that have been advanced against it, may not solve the problem. The question here is not whether a reasonable person would choose to make the poor even worse off in order to elevate the status of those more privileged. Rather, both options involve some individuals ending up in the same unfortunate circumstances, namely, infected with HIV. The difference is that one option (not conducting the study) involves many more individuals becoming infected over time, whereas the other involves significantly fewer individuals being infected, but some as the result of being injected with HIV in the process of identifying an effective vaccine. Since the least desirable circumstances (being infected with HIV) are the same in both cases, the reasonable choice, even for one who endorses the maximin strategy, seems to be whichever option reduces the total number of individuals in those circumstances. This reveals that, in the present case at least, the Rawlsian approach seems not to take into account the way in which individuals end up in the positions they occupy.
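
A hypothetical illustration, with numbers invented purely for the purpose, may make the point concrete. Suppose that without the study 10,000 people would eventually become infected, whereas conducting the study deliberately infects 10 subjects and, by yielding an effective vaccine, holds total infections to 110:

\[
\begin{array}{lll}
\textit{Option} & \textit{Worst position} & \textit{Number in it} \\
\text{no study} & \text{HIV infection} & 10{,}000 \\
\text{study} & \text{HIV infection} & 110
\end{array}
\]

Since the worst position itself is identical under both options, maximin alone does not discriminate between them; the natural tiebreaker, minimizing the number of people who end up in that position, favors conducting the study, even though some of those infected would be infected by the investigators' own hands.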

9. Industry-sponsored research

The fundamental ethical challenge posed by clinical research is whether it is acceptable to expose some to research risks for the benefit of others. In the standard formulation, the one we have been considering to this point, the benefits that others enjoy as a result of subjects' participation in clinical research are medical and health benefits: better treatments for disease, better methods of preventing it.

Industry-funded research introduces the potential for a very different sort of benefit and thereby potentially alters, in a fundamental way, the moral concerns raised by clinical research. Pharmaceutical companies typically focus on generating profit and increasing stock price and market share. Indeed, it is sometimes argued that corporations have an obligation to their shareholders to pursue increased market share and share price (Friedman 1970). This focus may well lead companies to pursue new medical treatments which have little or no potential to improve overall health and well-being (Huskamp 2006; Croghan and Pittman 2004). “Me-too” drugs are the classic example in this regard: drugs identical, in all clinically relevant respects, to approved drugs already in use. The development of a me-too drug offers the potential to redistribute market share without increasing overall health and well-being.

There is considerable debate regarding how many me-too drugs there really are, and what is required for a drug to qualify as effectively identical to an existing one (Garattini 1997). If the existing treatment needs to be taken with meals, but a new treatment need not, is that a clinically relevant advance? Bracketing these questions, a drug company may well be interested in developing a drug which clearly qualifies as a me-too drug. The company may be able, by relying on a savvy marketing department, to convince physicians to prescribe, and consumers to request, the new drug rather than the existing one, thus increasing profit for the company without advancing health and well-being.

The majority of clinical research was once conducted by governmental agencies, such as the US NIH. It is now estimated that a majority, perhaps a significant majority, of clinical research studies are conducted by industry: “as recently as 1991 eighty per cent of industry-sponsored trials were conducted in academic health centers…Impatient with the slow pace of academic bureaucracies, pharmaceutical companies have moved trials to the private sector, where more than seventy per cent of them are now conducted” (Elliott 2008; see also Angell 2008; Miller and Brody 2005). Moreover, during the very early years of the 21st century, the research budget of the US NIH, likely the largest governmental sponsor of clinical research in the world, declined (Mervis 2004; Mervis 2008).

In addition to transforming the fundamental ethical challenge posed by clinical research, industry-sponsored research has the potential to transform the way many of the specific ethical concerns are addressed within that context. Commentators on the ethics of clinical research tend to be skeptical of the appropriateness of paying research subjects, despite the prevalence of the practice, on the grounds that payment might undermine the ethical protection provided by free and informed consent (Grady 2005). The concern is that the offer of payment may cloud individuals' judgment, to the extent that they end up temporarily overwhelmed by the promise of profits and make a decision contrary to their long-term interests (Macklin 1981).

Insulating the review, conduct, and reporting of clinical research trials from the influence of money is also regarded as important with respect to investigators and funders. The possibility that investigators and funders may earn significant amounts of money from their participation in clinical research might, it is thought, warp their judgment in ways that conflict with appropriate protection of research subjects (Fontanarosa, Flanagin, and DeAngelis 2005). Applied to investigators and funders, this concern calls into question the very significant percentage of research funded by, and often conducted by, for-profit organizations. Skeptics might wonder, however, whether the goal of making money has any greater potential to influence judgment inappropriately than many other motivations that are widely accepted, even esteemed, in the context of clinical research: gaining tenure and fame, impressing one's colleagues, winning the Nobel Prize.

Financial conflicts of interest in clinical research point to a tension between relying on the profit motive to motivate business and insulating drug development and testing from the profit motive in order to protect research subjects and future patients (Psaty and Kronmal 2008). To this extent, financial conflicts may not be amenable to the commonly pursued remedy of addressing ethical concerns in clinical research by promulgating a few new guidelines. While more fundamental changes may be necessary, it is not clear as I write what changes would be sufficient to address the concern, much less how likely they are to be adopted.

Finally, if industry investigators and companies make hundreds of millions of dollars from the development of a drug, one wonders what constitutes an appropriate response to the subjects who were vital to that development. On a standard definition, exploitation occurs when some individuals do not receive a fair level of benefits from a shared activity (see entry on exploitation). A series of clinical research studies can result in a company earning billions of dollars in profits. Recognizing that a fair level of benefit is a complex function of participants' inputs compared to the inputs of others, and of the extent to which third parties benefit from those inputs, it is difficult to see how one might fill in the details of this scenario so as to show that the typically minimal, or non-existent, compensation offered to research participants is fair. At the same time, addressing the potential for exploitation by offering payments to research participants would introduce its own set of ethical concerns: is payment an appropriate response to the kind of contribution research participants make; might payment constitute an undue inducement to participate; will payment undermine other participants' altruistic motivations? In the end, then, even as commentators struggle to address the existing ethical concerns raised by clinical research, its conduct in the real world raises new ethical concerns which await analysis and resolution.

Bibliography

Other Internet Resources

Related Entries

cloning | contracts, theories of | decision-making capacity | exploitation | health | informed consent | original position | paternalism | Rawls, John | risk