Earlier this year, an elite group of scientists and ethicists—including Nobel Laureate David Baltimore, president emeritus and Robert Andrews Millikan Professor of Biology at Caltech—convened in Napa, California, to discuss the scientific, medical, legal, and ethical implications of genome engineering technology.
Such technologies—chief among them a now-widespread genetic tool called CRISPR-Cas9, known colloquially as "DNA scissors"—allow scientists to make precise edits to the genome, or entire genetic script, of an organism. By essentially rewriting genomes, researchers can, in weeks rather than years, create animal strains that mimic human diseases to test new therapies; easily knock out genes in the cells of animals and humans to test their function; and even change DNA sequences to correct genetic defects. Such edits can be made both in body cells and in germ-line cells (sperm and eggs), altering heritable genes.
We recently spoke with Baltimore about these new technologies and the issues they raise.
What was your motivation for participating in this conversation in January about the uses of genome engineering technology?
I was most concerned about the ability to carry out germ-line modifications of humans using this technology. Other issues came up—modification of the general biosphere, somatic gene therapy as opposed to heritable gene therapy—but I think those things are less concerning at the moment.
What is the big issue with human germ-line modification?
The big issue is how simple it is, at least conceptually, to modify cells—embryonic stem cells as well as somatic cells. The major concern is the potential for off-target effects: If you carry out the germ-line modification of a gene that you have identified as of concern, how do you know that, somewhere else in the genome, there hasn't been an alteration you didn't plan but that has occurred anyway? Most of the genome is noncoding—it doesn't code for protein—so you wouldn't necessarily see a protein change. But that change would be heritable for generations into the future. You want to be pretty sure that that is not happening.
We know that people have put a lot of effort into minimizing such off-target effects. Whether they have been minimized enough is a very important safety consideration.
Are you and your colleagues concerned about the potential for using this technology to create "designer" babies?
I think the thing to do is to distinguish between the long-term concern about modifications that are heritable but made for reasons that are "cosmetic," and a situation in which a modification is made in order to ameliorate a serious human disease.
The example that I find most compelling is Huntington's disease. It involves a mutation in the genome that most people don't carry; the few people who do carry it suffer very serious deleterious consequences that only become apparent with age. Ridding the genome of that modified gene seems to me to be an unalloyed good. Therefore, the question becomes, do you need to use genome alteration technology to accomplish that end, or is there some other way? But the end seems to me to be something almost everybody would agree is a good.
But there are situations that are not that clear-cut . . .
Exactly. You go from, on one side, Huntington's disease, and on the other side, the desire for a more intelligent child. One is easy: it can be fixed by changing a single gene. The other is much more complicated. Intelligence certainly isn't determined by a single gene. It is multigenic—the result of many genes. One is a pretty straightforward medical decision; the other is an issue that is very culturally bound. So those are the two poles, and then there is everything in between.
For the in-between situations, that is just a judgment call?
Yes, it is a judgment call.
Who makes the decisions in those cases?
Society, in the end, will make those decisions. The problem that I think everybody has with it is that although society has the ability to make decisions like that, it is a big world. And you could imagine things being done in other jurisdictions, where we don't have control.
How do we manage that?
My personal thought is that the best we can do is to make absolutely unambiguous the consensus feeling of society. Because the scientific community is an international community, we do have the ability to at least provide moral guidelines.
Any kind of modification that involves something as elusive as intelligence is a long way off. We don't understand it well enough to make modifications today, and so to an extent we are trying to establish a framework that will serve the world well into the future. That is a big order, and whether an international meeting can grapple with anything as profound as that, we will see.
Where do you see this technology in 10 years? 100 years?
That is a good distinction—10 years versus 100 years. The latter is very hard to think about, because we have really no idea what scientific advances are going to be made in the next 100 years. About all we can be sure of is that they will be impressive and maybe revolutionary, and will present us with a very different technological landscape in which these questions will evolve.
In 10 years, we can certainly see the outline of what is likely to come, and it is not going to be a whole lot different from what we are seeing today. I would guess that in 10 years, we would understand multigenic traits better than we do now. I do suspect that people will be gratified that at this time we began the basic considerations, because the problems will get more difficult rather than easier.
Forty years ago, you were one of the organizers of the influential Asilomar Conference on Recombinant DNA, which laid out voluntary guidelines for the use of genetic engineering—the same type of guidelines you and your colleagues are advocating for now with genome engineering. What was the original inspiration for convening the Asilomar Conference?
It was the advent of recombinant DNA technology that drew our attention. We all worked in the biological sciences. We recognized that recombinant DNA technology was a game changer because it was going to allow scientific investigation of questions that had heretofore been inaccessible. In some ways, many of us had designed our careers around the inability to do this kind of work, and, suddenly, we were going to be able to do things that we had only previously dreamed about, if we had considered them at all.
But at the same time, there seemed to be potentially problematic aspects to it, in particular the ability to modify organisms—mainly microbial organisms—in ways that could make them a danger to human health.
Actually, we simply did not know whether that was a realistic concern or not. As we talked to other people, we discovered that no one knew. So it seemed like a good idea to take a breather and to consider these potential hazards at an international meeting that would be convened in the United States.
Was there some thought that if you tried to self-regulate you could avoid governmental regulation?
It wasn't a matter of avoiding governmental regulation. It was that we thought that we—the scientific community—were uniquely capable of putting these new capabilities in perspective. The answer might have been to have legislation. In fact, as our thinking progressed, we realized that the very best situation would be to avoid legislation, because legislation is very hard to undo. We wanted to be sure we would have the flexibility to respond to inevitably changing scientific perspectives.
In retrospect, do you think Asilomar was a success?
It worked out very close to how we hoped it would. That is, as we learned more, we became more comfortable with the technology; as we investigated potential hazards, we saw less and less reason to be concerned; and we had a built-in flexibility in the system to allow it to evolve in the context of newer understanding.
Are you aware of any situations where scientists did not follow the rules?
To my knowledge, that has never happened.