Q&A

“You need to engage the ethical question all along the way”: A Q&A with Paul Root Wolpe


Yesterday, TED posted a jaw-dropping talk from Paul Root Wolpe that rounds up new bioengineering experiments, most involving animals, that push our current ethical systems to their limits. From bioengineered glow-in-the-dark pets to supersized salmon, from ratbots to the mouse that grows a human ear — Wolpe asks us to think deeply about what bioengineered animals mean. The TED Blog talked with Wolpe yesterday and asked a couple of follow-up questions on how we might frame these brave new ethical dilemmas.

Can you build a little on your closing statement: How would you frame a response to the ethical questions raised by this new biotechnology? Where do we start?

It’s one of the things that ethicists do. People in this field have been talking about the question of how you create ethical standards around biotech for a very long time. It’s a well-developed field, and though there are disagreements, there are also really strong agreements about harms and how you determine harms, risk and how you determine risk. The first and most important part is actually framing the ethical question. What is really the question we’re asking here?

If you take, for example, animal manipulation — we’ve been using and breeding animals for millennia. And if you go to a chicken farm and see how the animals we eat are treated, you wonder, why are we worrying about rats in a lab? Also, we have ethical rules for the treatment of lab rats, but these are the same rats that you’re allowed to poison and snap-trap in your basement. We have conflicting ideas on how we treat animals depending on the context. That is difficult for people who want very clear standards around animal use — those standards just aren’t there. Different groups are starting from different ethical assumptions, each defending its own ethical perspective, and there isn’t yet consensus.

On the other hand, there are clear ethical violations that we tolerate. Think of the way we treat chickens on chicken farms that we’re going to eat. I don’t know anyone who takes an ethical stand on this who thinks the way they are treated is right, and yet we continue to do it.

What would your response be to Sanjiv Talwar’s grad student, who asked if it was ethical to take away an animal’s agency?

Well, the answer I would give is this: There is nothing intrinsically wrong in using animals for jobs. We ride horses, we plow with oxen, we have working dogs. The question is, how do we encourage, to the degree possible, the flourishing of that creature in the context of the job it has to do?

I don’t think the experiment itself is intrinsically wrong. One can think of ethical uses for it. If a building collapses and there may be people alive inside, and you could send in a ratbot with a camera on its head to explore the rubble looking for signs of life, that is a justifiable use of an animal like this. The question becomes, first, is the technology being used for its own sake — are we doing it just to see if we can do it? And second, how are the animals treated, and to what use are they put once the technology is in place? There is a series of questions one has to answer in the process.

What I do give Talwar credit for is raising the question and discussing it. You need to engage the ethical question all along the way. Do you ask the questions, do you engage them seriously, and if the answer is, ‘We have serious questions,’ do you override them — or do you change your model to accommodate the ethical problems? And if you can’t resolve the ethical dilemma, do you not do the experiment? That’s what real ethical engagement requires. And if you get to a point in the conversation where there is no ethical consensus, that’s when you call in an expert in ethics.

I wrote a piece once on the eight reasons why scientists don’t think about ethics — and the main one is this tendency to think other people will handle the ethics piece. Yet ethics is everyone’s responsibility; we all ultimately must assume responsibility for the ethics of the activities we engage in.

What kind of ethical review happens before you start working on projects like this?

Well, every research institution has IRBs (institutional review boards) that oversee all research on humans, and an Institutional Animal Care and Use Committee (IACUC) that oversees animal research as well as the animals’ housing, feeding and maintenance. All these experiments can be and must be reviewed to see if they conform to regulations and to basic principles of ethics. But there are ethical questions that are not really an IACUC’s job to answer, and those are the kinds of calls that I do get, from someone who’s got a deeper or subtler ethical question. Not about the care of the animal, but about the rightness of an experiment. Should we be doing this in the first place? Is this the best use of resources? What do we mean by “harm,” and how far should we interpret the concept? You use the ethicist for the tough ones.

How can a nonscientist make a difference in this debate?

I think most people engage in ethical activity all the time. People have very good instincts about things that are ethical or not ethical. People engage in ethics when they think about the kind of food they buy or other activities — do they buy free-range chicken eggs or regular eggs, are they vegetarians or meat-eaters, do they buy pets from puppy mills or the SPCA. People make ethical decisions in almost everything they do, but most people aren’t conscious about the nature of the ethical decisions they make. One of the most important things is just to be conscious that the decisions you make every day have an impact on perpetuating things you think are immoral or moral. The consumer has a big impact.

You can also be a vocal advocate for the ethical positions you take. There are lots of groups that are advocating for different positions on a variety of social and ethical issues, in biotechnology, ecology, geopolitics, poverty, you name it. Support or join a group and engage in the ethical debate.

I really believe that the debate itself is the more important thing, more important than even the outcome. By showing how important ethical decisions are, by engaging in the conversation, we elevate ethical sensitivity.

And in doing so, we also address the powerlessness we can feel in the face of these big changes.

I think, though, that the powerlessness we feel is illusory. We make decisions every day that express our power. Recycling is a wonderful example of this. When people first started community-based recycling in this country, they thought it would take 10 or 15 years to catch on, that most people wouldn’t do it, wouldn’t bother. Two years later, they found that people took to recycling so quickly and so powerfully that recycling centers were overwhelmed. People were ready to do something at that cultural moment.

The same energy is in the culture now around biotech. People come up to me and express this all the time. There’s a cultural momentum right now that people can tap into. There are many, many different ways for people to express their concern and exercise their power over these technologies, just as we have over issues of environmentalism and sustainability. There is not a major company or university now that doesn’t have a program in environmentalism or sustainability. To me, that’s incredible. I remember corporate America resisting it with all their power. You know, “Nuke the Whales.” And now look. Here in Atlanta, Emory is working with Coke. Coke wants to be a zero-net-water-usage company: they want to return to the environment all the water they use. That wouldn’t have been their goal 15 years ago. And it all comes from the public sensibility, which has made sustainability and the environment such a powerful ethic that corporations can no longer ignore it.

I’m part of the Savannah Ocean Exchange; we’re trying to find innovative ways to restore and protect the Southeastern coast of the US. And Coke is a partner, UPS is a partner, Gulfstream is a partner. It is a new model, because the environmental groups aren’t demonizing the corporations anymore. Calling each other names is not the way to get anything done. In talking with Coke, I’ve seen that they understand what every big company needs to understand — that the developing country of today is tomorrow’s Coke drinker, and that by sustaining and promoting those countries, they’re promoting tomorrow’s consumers. I think that’s also part of that general shift in the way we think about these things.

Many of the technologies you mention in your talk have a military application. Should we be questioning them differently?

The question is, are they different? They’re not different in kind, but they may be slightly different in degree. The question of military ethics is in some ways a specialized field, but military and defense needs do not trump ethical reflection. It isn’t the case that, because this is military, everything is OK. From the Bible on, there are discussions about standards of war ethics; there are people who talk about how we make ethical decisions about combat and war itself. It’s actually a very modern idea that all is fair in love and war. Throughout history, there were very powerful ideas about what is ethical and unethical in war, and about the ethics of soldier conduct.

I can illustrate this with a story. I grew up in Harrisburg, Pennsylvania, and we used to go to the Gettysburg battlefield all the time. One time, I was at Gettysburg listening to one of the guides who was standing next to a cannon, talking about the use of cannons in the Civil War. And he said, “Do you know, if I put you out in that field right now, I could shoot 100 cannonballs at you and I could never hit you. Because you can stand there and see that cannonball coming and just step to the side. It’s like a softball coming at you. You just step to the side.” So I asked the guide, “Then how did people get killed by cannonballs?” And the guide said: “It was considered cowardly to step to the side.” People let themselves be hit by cannonballs because it was part of the ethics of war.

What can you tell me about your work with NASA — is there ongoing research that you’re interested in?

My basic job there is to look at two things: research on astronauts in space, and NASA’s ground-based research. For instance, astronauts have serious bone-loss problems, so NASA does ground-based studies on osteoporosis. I also look at problems in space-based clinical care. For instance, living in zero gravity creates metabolism problems, so drugs work very differently in space. How do we respond?

And there are ethical problems with long-duration space flight (which until recently was very much on the mind of NASA). Even on the shorter flights, astronauts still get an enormous dose of radiation. They exceed the daily allowable dosage; NASA just doesn’t let them exceed the lifetime dosage.

What are you working on now at Emory — what’s an issue that should be on people’s radar?

The thing I’m working on most closely is the question of ethics in neuroscience. I’m the editor of The American Journal of Bioethics Neuroscience, and we’re doing a number of interesting things there; I was just working with a student on a paper about the question of prodromal mental illness, especially schizophrenia. The idea is, we’re getting better and better at diagnosing young people, sometimes as young as 6 but mainly in their early teens, and at separating out those who are at higher risk of eventually developing mental illness. Then the question becomes: What do you do about it? The average chance of someone becoming schizophrenic is about 1%. So if we find that this person is at a 10% risk, what do you do about that? Do you put them on prophylactic meds that we hope might keep them from developing schizophrenia? These meds are not pretty meds; they have all kinds of very problematic side effects. And the person is still 90% likely never to become schizophrenic. As we get better and better at prediction, without having the therapies to prevent the disease … there are all these questions of bioprediction.

I’m also doing some work on synthetic biology. I testified before the President’s Commission for the Study of Bioethical Issues this past summer on synthetic biology. I’m especially interested in the midrange ethical issues. There are the basic ethical issues of ‘Are we doing harm?’ but there are also midrange issues that aren’t talked about.

Such as the issue of speed. This comes up, for instance, in selective breeding. If you’re doing traditional selective breeding, it takes many generations of offspring to get to a target state, and during that time there’s an opportunity for reflection, self-correction, if you see it going off in the wrong direction. It has a built-in corrective component, and that is time. But if you can do it in one generation, through genetic modification, you’ve eliminated all those self-correcting steps. Skipping over time and multiple steps with that technology means we have to be much more cautious and thoughtful.

Another is incrementalism: When we’re developing a technology, we start at Point A, then go to Point B and to C, and we may end up at M, where no one wants to be. Let’s say we all agree we don’t want to get to M. But where do you put the brakes on? There is no qualitative jump where you can say, “We need to stop at this point,” because everywhere along the line someone can say, “But there’s really no difference between what we’re doing today and what we were doing yesterday.”

The solution has to be incentive-based. There’s no good place to stop certain kinds of scientific progress, but what you can do is incentivize people to get where you want them to go. That’s what we do with the NIH, the NSF — set goals that drive research in the right direction. So these are not the mega-issues like harming people, or the micro-issues of how to do an individual experiment. They’re the midrange issues of process and scientific progress: how the enterprise as a whole moves in an ethical way, and the pragmatic questions we can ask about that.

Photo: kimkochphotos.com