From Cryonics, December 1985 & January 1986
By Eric Drexler
The following paper was presented at the 1985 Lake Tahoe Life Extension Festival on May 25, 1985.
My talk this afternoon is on cell repair machines, life extension, cryonics, and the relationships among them. In the course of this I will be describing a technology that seems clearly capable of making it possible for people to live indefinitely in perfect health. In doing this, I didn’t begin by thinking: “Gee, wouldn’t it be nice if we had a technology that would allow people to live indefinitely in perfect health? Now, how would we do that?” Instead, I was thinking as an engineering-oriented person at MIT looking at the future of technology, trying to see what it would be possible to build with tools we do not have yet, but tools that we understand, that we will have economic incentives to develop, and that we will therefore eventually be using.
One area that I examined for a good number of years was space industrialization, the field I did my graduate work in. There, I looked at what we could build with spacecraft and systems of hardware easy to understand, but located in space. I also kept track of molecular biology, thinking “This looks interesting — it’s a cutting-edge field — I’d better keep track of what’s happening in it.” Being an engineer, I increasingly thought in terms of, “Well, gee, they’re describing molecular machines here. What can we do with these molecular machines?” And that is what eventually led to the conclusions that I stated earlier.
Part of what this means is that nanotechnology, or molecular technology (since we speak of micro-technology or micro-circuits when we have micron-wide lines on silicon chips, it’s reasonable to speak of nanotechnology when we’re talking about things on the nanometer scale), is a field in itself. And, I think this will turn out to be very important from the point of view of selling life extension and specifically, of selling the idea of cryonics. Now, the reason for this: at present we’re in a position (or have been in a position) of taking a direct approach. “You want to avoid death, don’t you?” is, in effect, what you say to someone. And, with respect to cryonics, you say, “Well we’ve got an approach that just might work.”
You’re asking people to risk getting their hopes up about something of fundamental emotional concern to them. This is something which much of human culture revolves around and is adapted to — this idea of the inevitability (and historically, it has been inevitable) of personal death. People have adapted for good evolutionary reasons. People in the past who said, “I’m going to try to find some way to avoid dying” were wasting their time. They didn’t do as well economically. They didn’t do as well in any competing activity. Evolution selected against people who had brains that tended to think that way. It selected against cultural patterns that would encourage people to do what, at that time, was useless. So the real reason for what earlier speakers have described as “deathism” is actually an “evolutionary adaptation” that was appropriate from an evolutionary standpoint. Of course, evolution is not necessarily good, so we needn’t like all of its products.
So, I think this goes some distance toward explaining the well-known phenomenon of massive resistance when we approach people with the idea of radical life extension. Well, what I’m going to be outlining will make possible another approach, an indirect approach, for selling the idea of cryonics, because the conclusions that make cryonics seem reasonable fall out of the broader field of nanotechnology. The field of nanotechnology turns out to raise more conventional sorts of life-and-death issues, such as avoiding getting killed as opposed to avoiding aging and death. Since it raises these issues, it’s full of hooks that grab people, interest people, and that don’t directly have anything to do with cryonics or life extension. But it turns out that the set of ideas they’ve become interested in involves radical life extension as a natural consequence. So, we have an indirect approach to the idea. Military strategists will tell you that indirect approaches are a marvelous thing and some military strategists will also tell you that they apply to the world of the mind.
I’ve decided to structure my talk in a way that illustrates this. The first segment will be on nanotechnology in general and I’ll say nothing about life extension. In the course of this, if you imagine that you didn’t come to a life extension conference and instead were interested in space, computers, the future, technology, science, and so forth, I think you’ll find a bunch of things that are just intrinsically interesting. In the second section, I’ll discuss some of the consequences for life extension. In the third part I’ll talk about how this applies to cryonics.
The first part is about nanotechnology. I’ve given several of these talks lately to space audiences, and in them I’ll say, “Well, scenarios for future space development spread across these decades, and NASA says they might give you a better space shuttle here, and a space station there, and a better deep-space transportation system here.” And this makes up the conventional scenario for space development that doodles off into the middle decades of the next century with us just beginning to get a real toehold for civilization in space.
And then I proceed to say to them, “Ah! But there are things that we can foresee right now that will change that scenario.” (I talk about this to soften them up a little bit, and because it turns out to have relevance to nanotechnology and how fast it will advance.) Then I say a few things about computer-aided design and robotics and computer-aided manufacturing and automated engineering. I discuss how efforts like the Strategic Computing Initiative (which is having almost a billion dollars poured into it) and the Japanese Fifth Generation Project will combine with industrial computer-aided design to give us machines which will help us design things more swiftly. And robotics will give us machines which will help us build the things we design. I then say, “Well, this takes this future scenario (trailing off across the decades) and shortens the design cycle times and smashes the whole thing down to a fraction of the time.” And then I argue that probably out around 20 years, plus or minus 10, is when this “crunching factor” will start.
And then I say to them, “But this all relies on very conventional technology. It’s just putting together widgets and materials we already know about in new ways, using no fundamentally new kinds of hardware. But there’s another revolution brewing, which is going to lead to new kinds of hardware. Computer-aided design is going to speed this revolution as well, and that revolution lies in the area of molecular technology.”
I then ask them to put on the first slide [DNA slime being pulled out of a beaker]. This rather disgusting looking substance is DNA. People now know how to make DNA molecules of any sort you want. You type out the sequence of nucleotides you want in your DNA molecule, you go to the gene synthesis machine, make little segments that correspond to the parts you want, patch them together, and (with sufficient time and money), make any kind of DNA you want. But why bother? DNA isn’t good for anything directly. But what you can do with it is to put it into this appetizing-looking substance here [slide of tan paste on a spatula] which is a solid mass of E. coli bacteria, the result of running a whole bunch of culture media through a centrifuge. You can program bacteria with your DNA. The DNA gets transcribed to RNA, the RNA gets fed through molecular machines called ribosomes, and then a molecular matching process leads to synthesis of ever-longer protein chains, a bunch of amino acids stuck together to make a unique protein sequence. This system acts like a numerically-controlled machine tool, where you feed in a “tape” (DNA) which directs the manufacture of a “thing” (protein), a chain of amino acids which in fact folds up to form an object of a particular size and shape, with particular mechanical and other properties.
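The “tape → part” analogy can be sketched in a few lines of Python. This is purely an illustration: the codon table below is a tiny, hypothetical subset of the real 64-codon genetic code, and real translation involves ribosome mechanics and regulatory signals this sketch ignores.

```python
# Toy sketch of DNA-as-tape: transcribe DNA to mRNA, then read the
# mRNA three bases at a time to assemble a protein chain.
# CODONS is a small illustrative subset of the genetic code.
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UGA": "STOP"}

def transcribe(dna: str) -> str:
    # Simplified: treat the coding strand and swap T for U.
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    chain = []
    for i in range(0, len(mrna) - 2, 3):   # step through codons
        amino_acid = CODONS.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        chain.append(amino_acid)
    return chain

protein = translate(transcribe("ATGTTTGGCAAATGA"))
# protein is now ["Met", "Phe", "Gly", "Lys"]
```

The point of the sketch is the same as the slide’s: a one-dimensional tape of instructions deterministically specifies a three-dimensional part.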
This picture represents a small protein adhering to a DNA molecule at a specific location, and in a very specific way. Their bumps and hollows match and the pattern of their electrostatic charges match — where there’s a positive charge on the DNA there tends to be a negative charge on the protein. “Lock and key” is one analogy often used to describe this kind of molecular fit. What this illustrates is that if you can make these molecular objects, you can also get them to stick together in specific, controlled ways. If you make them right, they will stick together right.
This is illustrated even more dramatically by this, which looks like something out of an industrial small parts catalog, but which is, in fact, a virus. This is a T4 bacteriophage. All of the structure that you see here is made up of protein molecules. The head contains DNA. It turns out that you can take these things apart into subunits and the subunits will reassemble. In fact, you can take them apart into their constituent protein molecules, and these proteins will self-assemble in solution. You put them together in a test tube under the right conditions, shake them up, and you get assembled subunits. You take these subunits and put them together in the right sequence, shake them up, and you find the pieces of molecular hardware self-assembling out of solution. The pieces go together to form a working infectious virus particle.
The virus is a piece of molecular machinery. The tail fibers can recognize the surface of a bacterium and grab onto its surface. This “end-plate” comes down to the surface, it cuts a hole through the cell wall of the bacterium, the sheath collapses, this part gets jammed into the bacterial wall, and the DNA molecule is injected into the cell where it proceeds to take over the molecular machinery of the cell, directing it to produce more of these damn viruses, and then the cell bursts and you have more of these viruses around, and pretty soon they’re all over the place. (Since they attack bacteria, this illustrates that even germs get sick. This is, perhaps, somewhat heartening, depending on your perspective.)
What this next slide illustrates, in a very simple and direct way — just by pointing to a few things in nature and then picking up a few other examples here and there — is that there is a wide range of molecular devices found in nature. Molecules have a size, a shape, a fairly well-defined surface, mechanical properties, and a distribution of mass. They can act as moving parts. The connection between two segments of a molecule, if made with the right kind of bond, lets them rotate quite freely; it turns out that a sigma bond can make a good rotary bearing. If you examine how bacteria manage to swim through water despite the fact that they’re basically just little rigid boxes, it turns out that the little rigid box has, coming out of it, a little rigid corkscrew. And at the base of the corkscrew, where it meets the box, there is a device that turns out to be a reversible, variable-speed motor that drives the corkscrew as a propeller. Enzyme systems, which pass molecules from one enzyme to another by diffusion, act as production lines; a sequence of operations by machines takes things apart and puts things together and ends up with a molecular product. What all this shows is that there is a path that leads to molecular machinery, a path that involves learning to design protein molecules. Other paths seem possible. But this one is easiest to explain. And, because of the wealth of natural examples, it leads to a solid case for the feasibility of molecular machines.
If you look at the genetic system — including the ribosomes at the far end of it which actually produce the proteins according to the instructions that ultimately come from the DNA — it can be described as a numerical control system much like the early numerically-controlled machine tools developed in the 1950s. So, on a molecular scale, we find all sorts of machines. What this shows is that there is a path that leads to molecular machinery in which we learn how to design protein molecules.
In fact, we can make any protein molecule that we want right now, it’s just that we don’t know which ones to want. If you ask for a specific amino acid chain, a genetic engineer will say, “Okay, we know how DNA directs the construction of proteins; we’ll just synthesize a DNA molecule that will direct the synthesis of this amino acid chain. We’ll make that DNA and stick it in a bacterium, and we’ll get what we want.” The only problem is that, unless we design it properly, it’ll fold up into some shape that isn’t what we want. Getting these things to fold up in a very specific way is something that has not been tried much until recently, partly because biochemists were confusing science with engineering — confusing the problem of predicting natural folding with that of designing something that will fold predictably.
But progress is being made and there have been a number of review articles lately that talk about enzyme engineering, the steps that have been taken in that direction, and what the prospects are. The people in the field are saying, “How long will it be before we’re able to design protein molecules from scratch — 10, 15 years? Perhaps not that long.” That’s a close paraphrase from a review article that appeared in Applied Biochemistry and Biotechnology a couple of years ago, authored by a researcher at Genentech.
So people are learning protein design. When we get good at it, this will enable us to build protein machinery. We’ll be able to make the sorts of things we see in the cell, including complex machines. But instead of relying on an evolutionary mechanism based on random mutations to produce things, we’ll be using an evolutionary mechanism in which engineers vary and select ideas in their heads, come up with plans to design pieces that fit into the overall concept, then get everything together, debug it, and make it work. So we can build kinds of molecular machines that won’t happen in nature — say, a miniature player piano, for example. (It won’t sound like a piano, but it could go through all the motions!) Charles Babbage, in the middle of the last century, came up with an apparently workable design for an entirely mechanical computer — a programmable computer, all out of brass and gears. Well, you can also make mechanical computers on a molecular scale, though you probably won’t want to make them out of proteins.
Once you have any kind of molecular machine that does a half-decent job at taking reactive molecules and bringing them up to a surface in a controlled position and orientation, then you’re in a position to make reactions happen just where you want them. Today, chemists must shake a bunch of stuff together in a liquid. The molecules diffuse around and bump every which way, making it difficult for chemists to get reactive molecules to stick together in complex patterns. But with molecular machines, we can avoid these problems by just putting reactive molecules in the right place and thereby getting control over the three-dimensional structure. All the unit operations required are demonstrated by enzymes, and by organic chemists; we’re just controlling where they happen by positioning the molecules better.
In this way, we can use these protein machines to make other machines, better than protein machines, that don’t burn easily, or that don’t have to operate in water, or that are as hard as diamond. These machines, in turn, will be able to assemble almost anything. That is, if you design a pattern of atoms such that all atoms look like they’re pretty happy locally — so that a chemist would say, “These atoms are bonded in a reasonable way” — then (with some exceptions that don’t seem to be important for engineering purposes) you should be able to make molecular machines manipulate molecules and assemble that pattern of atoms. And this will be a fundamental breakthrough.
In the past, we have either used materials built by the molecular machinery in nature (things like wood, leather, and so forth), or we have taken a bunch of rocks or other materials, and pounded them, mixed them, cooked them, or stretched them, and ended up with things like metals and plastics. But when you look at the typical plastic doohickey, it’s not a particularly clever object when you consider how many atoms it has in it and how little it does. When we eliminate the constraints of traditional manufacturing methods, we’ll be able to do much better.
Some steps have been taken on this path — here is a book [slide] that was published as the proceedings of the First International Workshop on Molecular Electronic Devices, sponsored by the U.S. Naval Research Lab. There was a second such conference where I presented a paper — the proceedings of that will be published this fall. There is good reason to believe that you can make pieces of matter patterned on a molecular scale, to make molecular electronic devices. That will bring circuits to their ultimate limits — and you can make them fast and with low power dissipation.
The British magazine The Economist a couple of weeks ago reported that the Japanese have put $30 million into a molecular electronics program. This is the same technology base that is needed for molecular machines. A company called VLSI Research, Inc. also reports that about half a dozen other Japanese companies have “a full-scale research program in the area.” So, interest is serious, progress is being made, people are designing proteins, they are working on molecular electronics, and it all leads to molecular machines.
So we face a really fundamental breakthrough in technology — to be able to build things on a molecular scale and structure things to atomic precision. What are some of the consequences of this? Well, off hand, you’d expect there’d be a whole lot of consequences because everything around us is made up of matter, and because the way atoms are arranged makes a big difference. The difference between a chunk of coal and a diamond is in how the carbon atoms are arranged. The difference between a healthy cell and a cancer cell lies in the way a very modest number of atoms are arranged.
These machines that are able to build almost anything need a name — call them assemblers. One of the things they can build, since they themselves are patterns of atoms, will be copies of themselves. So assemblers lead directly to replicators. In evolutionary terms, creating assemblers is like reinventing the ribosome. It will give us a new programmable molecular device that can make much more general sorts of structures than were possible before. We will have molecular machines that can copy themselves — much as bacteria can, but without the ecological constraints faced by bacteria. That is, potentially without those constraints. You can give them different constraints. You can have them do useful things, like replicate a ton of them — starting with one — which takes a matter of a day or so. Then, if each one of them incorporates a nanocomputer (I’ll get to nanocomputers shortly), you can program them to team up and build something else for you. Such as a rocket engine whose structure, instead of being made out of metal, is made of a diamond-fiber composite with tens of times the strength-to-weight ratio of metals, and therefore, much higher performance. Such as a lot of other things also — super-strong materials, lightweight refractories, miniature components, all sorts of materials and devices with space applications.
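The “ton in a day or so” claim is easy to sanity-check with doubling arithmetic. The replicator mass and per-generation replication time below are my own illustrative assumptions (roughly bacterium-like figures), not numbers from the talk:

```python
import math

# Assumed figures for a single replicator -- illustrative only.
replicator_mass_kg = 1e-15      # about the mass of a large bacterium
target_mass_kg = 1000.0         # one metric ton
doubling_time_min = 20.0        # assumed time for one replication cycle

# Each generation doubles the total mass, so count doublings needed.
doublings = math.log2(target_mass_kg / replicator_mass_kg)   # ~60

total_hours = doublings * doubling_time_min / 60.0           # ~20 hours
```

About sixty doublings at twenty minutes each comes to under a day, which is why exponential replication makes “a ton, starting with one” plausible on that timescale.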
Regarding nanocomputers (which will turn out to be very relevant to what I’m not discussing right now, but will shortly), it’s easy to find a lower bound to what you can do with molecular machines to improve our ability to do a lot of computation in a tiny volume. A chip today can be seen as a slab that has a certain thickness of active material and an area about a centimeter on a side. If you look at chips (of a few years ago, at least), they had typical line widths of about three microns (three millionths of a meter). If you look at molecular mechanical computers, instead of transmitting signals down thin wires, you’re transmitting them down even thinner rods. You push and pull them, or send vibrations down them. It’s a tin-can telephone approach to signal transmission. The best rod material consists of chains of carbon atoms, alternating triple and single bonds, called carbyne. The rods are about three angstroms in diameter, compared to three microns for wires on chips. The ratio of microns to angstroms is ten to the fourth in linear dimension; in volume, you have to cube that, giving us a factor of ten to the twelfth (a trillion).
This seems to be a reasonable approximation. Even more detailed examinations give ratios within 50% of this figure. So you’ll be able to shrink the active volume of a chip-equivalent device by a factor of a trillion. And, because you’re not limited to “spraying” features onto surfaces, you can make this device in the shape of a little block. It turns out, if you run through the numbers, that you can make something that’s about the equivalent of the processor in the Apple Macintosh and put it in a volume that is somewhat less than one-thousandth of a cubic micron. So you’re talking about being able to put on the order of 1000 Motorola 68000 CPUs in the volume of a bacterial cell. And a bacterial cell, in turn, is about a factor of one thousand smaller than a human cell. So we’re talking about being able to put roughly a million microprocessors in the volume of a cell (if you leave no room for anything else).
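The scaling argument above is simple enough to check directly. The cell and processor volumes are the round figures used in the talk:

```python
# Linear shrink: ~3-micron chip lines vs. ~3-angstrom molecular rods.
linear_ratio = 3e-6 / 3e-10        # 10^4
volume_ratio = linear_ratio ** 3   # 10^12 -- a trillionfold volume reduction

# Packing processors into cells, using the talk's round numbers.
processor_volume_um3 = 1e-3        # one CPU-equivalent in ~1/1000 cubic micron
bacterium_volume_um3 = 1.0         # a bacterium occupies roughly a cubic micron
human_cell_volume_um3 = 1000.0     # a human cell is ~1000x a bacterium

cpus_per_bacterium = bacterium_volume_um3 / processor_volume_um3     # ~1,000
cpus_per_human_cell = human_cell_volume_um3 / processor_volume_um3   # ~1,000,000
```

The trillionfold figure falls straight out of cubing the ten-thousandfold linear shrink; the rest is just dividing volumes.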
But I’m beginning to discuss this in terms that start to sound like the topic that I’m not talking about, that is, life extension. But, this is a natural size comparison to make. I’ve already talked about another life-and-death issue — replicating assemblers that might not be subject to the ecological constraints of bacteria. This same technology base will also provide an industrial base that can very rapidly let us make huge quantities of products with unheard-of performance. And because this industrial base will rely on self-replicating machines, all of a sudden the relationships between labor and capital and rate of economic growth change by orders of magnitude. So we’re already talking about things that are very important economically, and things that are very important strategically (you could program replicators to be a much more useful and nasty form of germ warfare). So there are issues of life-and-death importance that I’ve already discussed, without saying a word about life extension.
Now I proceed to say, as I do in talks to space audiences, “Well, gee, if you can put a computer into roughly a millionth of the volume of a human cell (and give it a lot of memory and still only use a fraction of the volume of the cell — using about five cubic nanometers per bit for random access memory, and 0.02 cubic nanometers for “tape”) then in the volume of one cubic micron (about one-thousandth of the volume of a typical human cell) you can put as much information as there is in the cell’s DNA.” So you can put a lot of information and a lot of computational capacity into a cell. Molecular machines will be able to sense molecular structures and decide what to do: “Gee, this crosslink shouldn’t be here — what should be done about that?” Well, it can then use molecular tools to cut the crosslink, repair the molecules, and set things back the way they’re supposed to be.
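Those memory-density figures can be sanity-checked the same way. The densities come from the talk; the genome estimate is my own assumed round number (about 3 billion base pairs at 2 bits each):

```python
cubic_micron_nm3 = 1000.0 ** 3     # 1 cubic micron = 10^9 cubic nanometers

ram_nm3_per_bit = 5.0              # random-access memory density from the talk
tape_nm3_per_bit = 0.02            # "tape" storage density from the talk

ram_bits = cubic_micron_nm3 / ram_nm3_per_bit     # 2 x 10^8 bits of RAM
tape_bits = cubic_micron_nm3 / tape_nm3_per_bit   # 5 x 10^10 bits of tape

# Human DNA: ~3e9 base pairs at 2 bits per pair (assumed round numbers).
genome_bits = 3e9 * 2
genome_copies_on_tape = tape_bits / genome_bits   # several genome-equivalents
```

So a cubic micron of “tape” comfortably exceeds the information content of the cell’s DNA, which is the comparison made above.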
So it’s pretty clear that some kind of cell repair machine is possible. And it’s pretty clear that bringing something like surgical control to the molecular scale will mean a dramatic breakthrough in medicine. The life extension implications are obvious. But in my general space talk, I don’t mention them. I have asked how many people are interested in signing up for the MIT Nanotechnology Study Group, and most people say yes. Interest has been strong; people would come in and we’d talk about these things further. So here we have a set of ideas that makes it clear that there will be tremendous breakthroughs in life extension, but this conclusion follows from a complex of arguments that are important for their own sake. In short, you don’t have to directly ask people to worry about questions of “Can they avoid dying?” And that turns out to be a good way to avoid resistance in selling life extension.
To make effective cell repair machines, it seems we really do have to have computers sitting on site to direct operations. The molecular machinery of a cell can build a cell from scratch without having to recognize the details of a complex situation — it’s in a comparatively simple situation. It gets chemicals from outside its membrane and it has an internal program that directs it to go through a series of operations, build a bunch of things, and expand itself to form a larger cell with two sets of chromosomes which eventually divides into two cells. That doesn’t take a whole lot of smarts. The cell doesn’t have to recognize a detailed, three-dimensional pattern of molecules and then do something about it. But to repair a damaged cell, you do have to recognize a complicated pattern of molecules and decide what to do about it. Therefore, if you want to have a general, powerful cell repair capability (there are certainly some useful capabilities short of this), then you’re going to need on-site computers. Fortunately, as we have seen, these turn out to be possible.
A small, historical note: back when I was in college, I was interested enough in cryonics (I had read science fiction and so on), that I got as far as saying, “Well, I wonder if they’ve looked at the phase diagram of water, and at what would happen to an organism if you go to a high pressure, cool without freezing, and then suddenly increase the pressure a lot?” I got as far as finding out about baroinjury (pressure damage as opposed to freezing damage) without getting far enough to hear about baroprotective agents. At that point I said, “Well, these cryonics people are probably wrong, because there probably aren’t enough variables to play with. It’s a nice idea, but it probably won’t work. They’re probably a bunch of crazies.”
Then, years later, I was exploring molecular technology. And, of course, if you’re studying molecular technology, you study the molecular systems of life, as well as novel molecular machines. It wasn’t too long before I said, “Hey, you could do cell repair with this. I’ll bet you could even repair frozen tissue with this!” And I proceeded to construct an argument that this was in fact possible; what you’re hearing today is part of a much-refined version of that argument, which now rests on a lot more numbers and detail. So then I went and dug out a copy of Ettinger’s The Prospect of Immortality from the MIT library, and there, lo and behold, I found out that these crazy cryonics people not only were right, but they even knew why they were right, that in the future we’re going to have molecular repair technology. Ettinger wrote of repairing cells molecule-by-molecule if need be. Of course, he didn’t have the numbers to demonstrate this, and there was still the question of how we would get there. But he had the basic physical perception that we’d develop molecular-level repair machines, and that doing this doesn’t conflict with any physical law.
So, from molecular technology, to cell repair, we arrive at questions of cryonics. Now, as I said, we’re really talking about two things simultaneously. One is how I present these ideas when I address audiences with general technical interests, and what their responses are. The other is the technical content itself. Regarding the first, let me give an example. I gave a four-day MIT seminar on nanotechnology, one day of which was on cell repair machines. Somehow, the conversation that day naturally turned to freezing damage. In answer to such questions, I would of course say, “Yes, it seems that such damage could be repaired.” People would ask about the nature of memory, and I would answer, “Well, it seems to be embodied in fairly robust structures.” We had a retreat up in New Hampshire later, in which it turned out that, yes indeed, people were highly motivated by the more conventional life-and-death issues in nanotechnology, but by this time they were also intellectually convinced immortalists, as a side effect. A number of people in this nanotechnology study group later reviewed the most recent draft of my book — “Engines of Creation,” which I’m working like mad to try to finish for the end of the month for Doubleday. It has three chapters on life extension and cell repair machines — the last of which discusses cryonics. And the reviewers made suggestions like, “Give this chapter a more explicit title — emphasize it! It’s important, and it will help sell the book.” Now, I’m not sure that it would help sell the book to people who hadn’t been exposed to the same ideas as they had. But the interesting thing is that after going through that process, they were indeed thinking that way. Remember, these are technically oriented people, who weren’t out looking for approaches to life extension.
But they became interested in this new technology, with its implications for computers, materials, spacecraft, and economic production — bringing new strategic dangers and new strategic opportunities and a host of familiar kinds of life-and-death issues. And they found that life extension was a natural part of it, and they soaked it up without ever being prodded with the question, “You do want to live forever, don’t you?”
My expectation is that as knowledge about this field spreads, and as concern about its consequences spreads, many people will find their interests hooked. I plan to spend the next few years making that process go — with as much of my time as I can free up from less useful ways of making a living. As this happens, we’re going to find there’s an expanding community of people who naturally think that life extension is inevitable, and who as a matter of course recognize that cryonics makes sense. Look forward to this. Think about what to do in this situation. I think the best strategy, from your point of view, is to let these conventional life-and-death concerns motivate interest in molecular technology, and then to reap the resulting harvest of interest in cryonics.
Let’s return now to the more technical aspects of really thorough tissue repair. In the paper I’ve been working on, I go into a lot of detail regarding a more-or-less worst-case example of total-body cell repair. The assumption is that you have to rework all the molecular structures in every cell bit-by-bit, and that you aim to do this with systems that are entirely inside the cells. (I also discuss how to relax this second constraint.)
In a cubic micron, you can construct the equivalent of a mainframe computer with a gigabyte of memory (I already mentioned that this is about as much information as the cell uses to construct itself in the first place). It turns out that you have enough computational cycles within the volume, time, and heat-dissipation constraints to identify all the macromolecules of the cell (even if they’re moderately damaged), by using certain algorithms that can already be specified in fairly great detail. Since you can identify all the molecules, you can map the cell structures: the patterns that you recognize are type-tagged by the molecules they contain (e.g., if it contains tubulin, it’s a microtubule). Since this tells us the type of structure, it makes it easier to know how to probe and further characterize the structure.
You can get the machines into cells: white blood cells demonstrate that systems of molecular machinery can move through tissues. Viruses demonstrate that systems of molecular machinery can move through cell membranes to enter cells. The mobility of organelles inside cells demonstrates that systems of molecular machinery can move around inside the cell. The fact that cell biologists can stick needles into cells and do surgery on chromosomes and sometimes have the cells survive shows that things can enter cells and do even very crude manipulations without doing permanent damage in many cases. So you can get repair machines to the site of the damage.
You can identify, take apart, and put back together molecular structures. Identification is demonstrated by molecular structures that can identify each other, as antibodies recognize proteins and so forth. For the “take apart” function, we have the direct analogy of digestive enzymes. As for assembling molecular structures — well, these things were made by molecular machines in the first place, so again we have a direct analogy. So, again, and again, and again, you can go to a biological analogy and say, “We already know a process like this.” If the overall process is orchestrated by a computer (which you can design to some degree of detail using direct calculations and scaling relationships) then it seems you have everything necessary to repair cells. I have, of course, only sketched the case here, but even these facts are enough to make the idea plausible.
Since I’ve been talking straight through here for quite a while, I’d like to stop and ask for questions.
AUDIENCE: Could you say a few words about the electron-tunneling or scanning-tunneling microscope (STM)?
DREXLER: Yes. In 1982, some researchers at IBM-Zurich came up with a device which has a very fine needle point (positioned by piezoelectric crystals) that is held very close to a conductive surface in a vacuum chamber. When you move the needle very close to the surface, electrons tunnel across the vacuum gap. The current becomes very substantial when the needle is very close to the surface and drops off when the needle is further away. And it turns out that you can move this needle to a precision that’s a fraction of an atomic diameter. Well, the ability to do that looks a little bit like what’s needed to build an assembler, since an assembler is something that manipulates reactive groups to atomic precision, and this is something that moves a needle around to atomic precision.
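The extreme sensitivity Drexler describes comes from standard tunneling physics (my numbers below, not his): the current falls off roughly as exp(-2&#954;d) with gap distance d, and for typical metal work functions &#954; is about 1 per &#229;ngstr&#246;m, so widening the gap by a single &#229;ngstr&#246;m cuts the current nearly tenfold.

```python
import math

# Rough model of STM gap sensitivity: tunneling current ~ exp(-2*kappa*d),
# with kappa ~ 1 per angstrom, a typical value for metal surfaces.

kappa = 1.0  # 1/angstrom, assumed decay constant

def current_ratio(delta_d):
    """Factor by which the current drops when the gap widens by delta_d angstroms."""
    return math.exp(-2 * kappa * delta_d)

print(f"gap +1 A: current x {current_ratio(1.0):.3f}")  # ~0.135, nearly a 10x drop
```

That exponential dependence is exactly why the needle's position can be read out, and servoed, to a fraction of an atomic diameter.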
I looked at this in ’82, and I said, “Oh, no! This just might be a shortcut to a technology that strikes me as being very dangerous,” so I said nothing about it. After all, we can already save lives with this technology simply by having people understand it: if they understand it, then those who really care are going to sign up with cryonics societies and save their lives. On the other hand, this technology can also be used for mind-control systems, and for things like programmable germs for germ warfare. These are possibilities that I’d rather see develop later rather than sooner, because I think that you’ll live longer that way, and have a greater chance of living indefinitely that way. With more time to think, we’ll be better prepared. There’s room for disagreement on that, but that’s my analysis.
STM technology may get us to nanotechnology faster — it’s hard to say — but I don’t use this in my explanation of nanotechnology because by using the protein design route I can say, “Look, there already are molecular machines. We can learn how to build similar machines and use them to build better ones.” It’s a more compact, concrete discussion and it also, I think, is more persuasive. I can really nail down every point in the scenario. The STM may be a shortcut to the same goal, but if so, it’s not obvious how it will work. What is important today is to make assemblers and their consequences credible. The protein-design route does that.
AUDIENCE: One thing that I think should be pointed out about the scanning tunneling microscope is that it does not work just in a vacuum. They are now getting pictures back from air, and water, and oil, I believe. And there was a picture just recently in Scientific American of the surface of a silicon crystal and you could see all the atoms. It really has atom-level resolution. This was all done at IBM-Zurich. There are a few others around, but they’re fairly cumbersome machines. Very quickly we’re going to have the ability to machine things on that level. It’s here.
DREXLER: This stuff is coming right along. Watch out.
AUDIENCE: Could you elaborate on your scenario — what did you mean by hoping this would come later rather than sooner?
DREXLER: First of all, I believe that the emergence of this technology is inevitable (barring some worldwide devastation or totalitarian state). There are competitive pressures, there are many different roads to it, and there are no sharp lines between this technology and what we’re doing right now — just a series of steps. Between now and then we can try to build institutions and a climate of opinion that make it more likely for these developments to lead to a world where people are free to choose how to live within very, very broad limits, as opposed to their being dead or enslaved. One of the requirements is that we stay ahead of the efforts of less pleasant governments elsewhere in the world. This is a big reason for not trying to hold it back. I argue that forcefully in my book. What can make it desirable that we don’t rush is the value of gaining a better understanding of what we’re doing. This is a technology where considerable foresight is both possible and profitable — much more so, I think, than in previous technologies. We already understand the physical laws behind these machines. By biological analogies and calculations, we can already see a lot of what they can do. So you can look at the kinds of safeguards needed to control replicators and use them to do what you want, instead of having a disaster happen. We can develop the institutions that are needed to handle this stuff without something nasty happening.
It’s clear, I think, that the defense department is going to play a big role in this. The strategic implications are greater than those of nuclear weapons. There is at some point going to be an effort as urgent as the Manhattan Project; there’s going to be a race. When that happens, it would be best if people understood that this race can lead to a situation where we can have vast material abundance, long life spans, and a lot of freedom — if we play our cards right. Rather than saying, “What this means is that we’ve got to stay ahead of the Japanese,” or something silly like that. We don’t want to race against people who ought to be our allies in carrying through and handling these breakthroughs. It would be nice if people understood the consequences of this so they won’t just let the Pentagon do it. The military dimension of this must be kept firmly in mind, but we really need a set of institutions that ties this into the civilian world. We’re really talking about a genuinely unprecedented concentration of power in the hands of the groups that carry through certain breakthrough steps, and we have to prepare to deal with that, if we want to keep our lives and freedom.
AUDIENCE: You mentioned the problem of predicting the natural folding of a protein and that it would take an incredible amount of computer time to work it out. You also said that it’s a simpler problem to design a protein to make a particular shape. Do you have any numbers which can give you confidence that this is possible? After all, if it takes a billion years of computer time to predict the folding of a natural protein and designing it is a hundred times easier, we’d still be dealing with something impractical.
DREXLER: It depends on “a hundred times easier” along what dimension. If it were a hundred times better just along this computer-time dimension, you’d lose badly. But if it were, say, a factor of two better in the degree to which the energetics favor a particular folding pattern, that factor of two could be enough to eliminate all possibilities except the desired one. And in fact, what you’re adjusting, what you have direct design control over, are matters of geometry and internal interaction energy that affect the exponent of the relationship that gives you the billion years of computer time. So a little bit of change here, and the time required drops by orders of magnitude.
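The exponent point can be seen with a toy calculation (my numbers, purely illustrative): if the search time grows as exp(k·n) for an n-residue chain, halving k takes the square root of the time, while a hundredfold constant-factor speedup barely dents it.

```python
import math

# Toy arithmetic, not Drexler's figures: suppose the folding search time
# scales as exp(k * n). Design control over energetics changes k itself,
# so a modest change in k moves the answer by many orders of magnitude.

n = 100            # residues in a hypothetical chain
k_natural = 0.48   # chosen so the natural case is astronomically slow
k_designed = 0.24  # energetics twice as favorable per residue

t_natural = math.exp(k_natural * n)
t_designed = math.exp(k_designed * n)

print(f"natural:  ~1e{int(math.log10(t_natural))} steps")
print(f"designed: ~1e{int(math.log10(t_designed))} steps")
# Halving the exponent's coefficient takes the square root of the time;
# a factor-of-100 constant speedup would only shave 2 off the exponent.
```

This is why "a hundred times easier" along the energetics dimension and "a hundred times easier" along the raw computer-time dimension are radically different claims.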
The argument that I made in the “Proceedings of the National Academy of Sciences” was subsequently picked up by an author in Nature, who said, in effect, “Drexler argues that protein design is possible, and here is how we might go about doing it.” My paper was also referenced by an author in Science who was writing about molecular engineering, who said, in effect, “We’re making progress in protein design, and it will lead to the ability to structure matter to atomic precision.” Progress is being made in protein engineering, and at an increasing rate.
With respect to the protein-folding problem, there’s an evolutionary argument for why engineers should be able to do a lot better. The argument is that nature hasn’t even been trying to do a good job at what we need to do. What nature is “trying to do” in the evolution of protein molecules is to make proteins that, in fact, under ordinary physiological conditions, fold to the right shape with almost 100% probability. As long as it’s very close to 100%, then from the point of view of natural selection, the folding process works essentially perfectly. Now, imagine that you had a protein molecule that folded by a really well-defined, obvious, predictable, energetically favored mechanism. On a computer, it would be very clear at each step what the next step would be. But if that protein were allowed to evolve, you’d see random substitution of amino acids — and most substitutions would make the folding less predictable. There would be no selection pressure to keep this from continuing until the protein gets right to the edge of stability (or whatever the analog to “stability” would be for following a desired folding pattern). And at that point the protein would fold correctly in nature, but in a way hard to predict from its structure.
AUDIENCE: How close is present day research and technological practice to what could be described as molecular technology?
DREXLER: My understanding is that people are now, in micro-electronics, doing experimental work on laying down fine patterns on crystal surfaces, some only a few dozen atoms wide, something in that range. They’re working on the right scale, but they don’t have control of where the atoms are, and therefore it’s not molecular technology. Now, in another area, the scanning tunneling microscope does give you atomic resolution of position, but so far only for measurements. A third category includes synthetic organic chemistry, where people make all sorts of interesting molecular structures, and biotechnology, where we get back to protein design.
AUDIENCE: I’d like your views on a possible alternative to cryonics for preserving people: morphostasis. What are the trade-offs between morphostasis and cryonics?
DREXLER: Morphostasis is a term I think Ettinger coined in response to some of my ideas. Cryonics is a form of morphostasis. Basically, morphostasis is nailing everything down at the molecular level and holding it in place. Freezing does this just by lowering temperature: everything solidifies and molecules are held in place, and this is nice because you’re in a position to apply future technology to present day medical problems. Cell repair machines are primarily (if you look at the tools they use) molecular repair machines. The only reason that it’s reasonable to call them cell repair machines is that, if you can repair and rearrange molecules, then since a cell is a pattern of molecules, you can repair it too. One of the implications of this is that if you’re using cell repair machines to reverse a suspension process, then you find that you don’t care about minor covalent modifications to the protein molecules, DNA molecules, and so forth, in the cell. Crosslinking is a terrible thing if you want to run around and be active and healthy. But, as soon as metabolism is shut down, having a whole bunch of crosslinks to hold molecules in place with nice, solid covalent bonds becomes a way of preserving structure and information. From all I’ve seen in the literature, you’d still prefer to cool tissue down to liquid nitrogen temperatures. But stabilizing by crosslinking does give you another degree of freedom that becomes reasonable, once you plan for cell repair machines.
Incidentally, regarding the reversing of suspension procedures, any way that you preserve people with near-term technology seems likely to require cell repair to reverse. Even if you could get to the point where you could cryopreserve a mammal through freezing or vitrification or whatever and then revive it, I think — barring some really amazing breakthrough — that you’re going to have a very sick mammal. I for one, if I were in a cryopreserved state, would want to put in my contract, “Please wait until you develop good cell repair machines. And don’t wake me up to be a very sick mammal, possibly with neurological damage such that I’m going to lose information.” It would be better to stay in liquid nitrogen until the cell repair technology matured. (An auxiliary argument is that, in fact, by the time we get to cell repair machines, technical progress will be amazingly swift because we’ll also have automated engineering systems that work about a million times faster than human engineers. That factor of a million is not chosen arbitrarily, but comes out of physical calculations comparing neural and electronic systems.)
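The factor of a million in that parenthetical is order-of-magnitude arithmetic; here is one rough rendering, using round textbook numbers of my own choosing rather than Drexler's actual calculation.

```python
# Order-of-magnitude comparison: neurons switch on millisecond timescales,
# electronic logic on nanosecond or faster timescales. The numbers below
# are round illustrative values, not Drexler's published figures.

neural_switch_time = 1e-3      # seconds, roughly a neuron's firing cycle
electronic_switch_time = 1e-9  # seconds, a ~GHz logic gate

speedup = neural_switch_time / electronic_switch_time
print(f"speedup ~ {speedup:.0e}")  # -> speedup ~ 1e+06
```

An engineering system built from components a million times faster could, all else equal, compress a subjective year of design work into about thirty seconds.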
AUDIENCE: This is probably not the actual type of damage that occurs during freezing (crosslinking is probably more likely and less of a problem), but I want to throw it at you anyway. Let’s assume that you have a 100-amino-acid chain. Each recognition site requires a specific recognizer. There may be five different kinds of injury that could occur at each site, each requiring a different effector to repair. All of this information has to be transmitted and manipulated. All of this requires a tremendous amount of paraphernalia over and above just the brute amount of information that you’re talking about. Does that trouble you?
DREXLER: You’re absolutely correct, and it doesn’t trouble me, because I’ve gone through the design exercises and calculations, at a relatively detailed level, of how to do that. Basically, the molecular machine goes along and “sees” the chain by touch and reads the sequences into central memory. It only needs recognizers for the twenty-odd amino acids (or it just probes shapes). After reading the sequence, we’re in data-processing land instead of molecular-machinery land. Mechanical signal transmission elements with a diameter of a few nanometers will transmit information at about a billion bits per second (this comes out of mechanical calculations), so the information can be shipped to a central computer in the cell. (The data rates and volumes all work out OK.) So now the information is in data-processing land, and in a central computer. It turns out that you can work out an algorithm that basically works down a tree structure that encodes possible normal protein structures. You can work out the details of what it takes to identify a protein molecule in machine cycles on an 8080 chip. With this you can figure out how long you have to calculate and what degrees of damage you can tolerate in the protein. Once you know the protein, you can just look up what it’s supposed to look like and go from one end to the other, pull out the parts that don’t match, and put in the right ones. The steps involve stereotyped sorts of bond-breaking and bond-formation, requiring only a small set of tools.
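The identify-then-repair loop can be caricatured in a few lines. This is a toy of mine, not Drexler's algorithm: the reference "library," the sequences, and the damage tolerance are all invented for illustration, and the real scheme uses a tree search over far larger libraries.

```python
# Toy sketch: read a (possibly damaged) sequence, identify it against a
# reference library while tolerating a few bad residues, then restore it.

REFERENCE_PROTEINS = {  # hypothetical library of known normal proteins
    "proA": "MKTAYIAKQR",
    "proB": "MSDNGPQNQR",
}

def identify(damaged, max_mismatches=2):
    """Find the reference sequence the damaged chain most plausibly matches."""
    for name, ref in REFERENCE_PROTEINS.items():
        if len(ref) == len(damaged):
            mismatches = sum(1 for a, b in zip(ref, damaged) if a != b)
            if mismatches <= max_mismatches:
                return name
    return None

def repair(damaged):
    """Identify the protein, then replace any residues that don't match."""
    name = identify(damaged)
    if name is None:
        raise ValueError("too damaged to identify against the library")
    return REFERENCE_PROTEINS[name]

print(repair("MKTAYIAKQX"))  # one damaged residue -> MKTAYIAKQR
```

The caricature preserves the key structural point: tolerance to damage lives entirely in the identification step, and once identification succeeds, repair reduces to a lookup plus residue-by-residue substitution.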
AUDIENCE: Why not just trash the whole thing and put in a new one?
DREXLER: The problem is that it sounds radical: “What happens to personal identity if you replace the molecules?” and that sort of thing. Actually, I’m inclined to agree with you: but to be conservative, I’ve studied a scenario where you don’t throw away, you repair. That’s not quite as easy, but it’s feasible, and it’s maximally acceptable in terms of raising the fewest possible issues of personal identity.
AUDIENCE: You should be able to take whole cells and just trade them in. Especially if you do it cell by cell.
DREXLER: I’m sure you could set up a machine that could just go through your brain like that, that could work its way from one end to the other, and you’d never notice the difference.
AUDIENCE: In a sense, that’s just what your body does. It identifies bad cells and then junks them. The problem is, it doesn’t put new ones back!
AUDIENCE: I can foresee situations where the molecular machines won’t be able to proceed autonomously, but will need to collaborate. How will they be able to be coordinated in large numbers?
DREXLER: Yes, in many cases you’re going to want to have a whole bunch of molecular machines in communication. The conceptually simplest way of doing this is to set up a serial data channel that works by pushing and pulling rods. You just have a little jointed cable about two nanometers in diameter that goes from one machine to the next and carries data at a gigabaud or so. In fact, if you get really hard up for computational capacity inside cells, it turns out that you can ship a complete molecular description of the entire body out of the body through data channels that occupy only a tiny fraction of the volume of the skin cells they pass through. Then you’d have all the information externally, where you could process it using computers that are far less volume-limited and heat-dissipation-limited. Finally, you’d ship the instructions back to direct the cell repair machines.
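A back-of-envelope check of that last claim, with numbers assumed by me rather than taken from the talk: even granting an enormous description of the body and only a month for the transfer, the gigabaud cables needed would cover a vanishing fraction of the skin.

```python
# Assumed figures: ~1e27 bits for a full molecular description of a body,
# 1e9 bits/s per rod-logic cable, ~2 nm cable cross-section, ~2 m^2 skin.

BITS_TOTAL = 1e27
RATE_PER_CHANNEL = 1e9
TRANSFER_TIME = 30 * 24 * 3600  # allow one month for the transfer

channels = BITS_TOTAL / (RATE_PER_CHANNEL * TRANSFER_TIME)
channel_area = (2e-9) ** 2      # m^2, ~2 nm square cross-section per cable
skin_area = 2.0                 # m^2, roughly an adult's skin

fraction = channels * channel_area / skin_area
print(f"{channels:.1e} channels, {fraction:.1e} of the skin's area")
# -> around 4e11 channels, on the order of 1e-6 of the skin's area
```

Hundreds of billions of channels sounds extravagant, but they occupy about a millionth of the skin's area, which is the "tiny fraction" the answer asserts.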
COMMENTATOR: I’m sorry we’re out of time. Thank you very much. [Much applause.]