Cryonics, September 1990
by Steven B. Harris, M.D.
“The human mind treats a new idea the way the body treats a strange protein;
it rejects it.”
— Biologist P.B. Medawar
The history of technological innovation is the history of the tortuous paths which advances often take to acceptance. It might seem at first, from the many well-known instances of simultaneous discoveries, that it is the nature of important ideas to spring up newly everywhere, independently, as soon as the world is ripe for them. But this is only the view at first glance. In actuality, the “synchronicity” of discovery usually turns out to be a late phenomenon, one that follows a prodrome in which the “new” idea in question has long been around in some form or another, but steadfastly has been ignored.
How long can an important idea be ignored? The model steam engine was demonstrated by Hero of Alexandria in the first century A.D., sixteen centuries before people started thinking along these lines again. Gregor Mendel published the basic principles of genetics in 1866, and was ignored until 1900. Oswald Avery published strong evidence that DNA was the principle of heredity in 1944, but no one really believed it until the time of Watson and Crick almost a decade later. The time varies, depending on circumstance.
Delayed acceptance of discovery happens in all areas of science, of course, but it always happens in the field of medicine with great poignancy, since there the human costs of dropping the technological ball are usually great. We may consider, for instance, the numbers of lives which might have been saved if not for the following delays:
- Leeuwenhoek perfected the microscope by 1668 and saw animal cells and protozoa with it, but unfortunately for humanity, doctors weren’t interested in that kind of thing in 1668, and wouldn’t be for another couple of centuries. In the meantime they missed out on the germ theory of infectious disease; thus, as late as 1850, when good Doctor Semmelweis tried to get his colleagues to curb the incidence of fatal “childbed fever” by washing their hands between dissecting diseased cadavers and examining patients, his colleagues responded by hounding him out of his job. Meanwhile diseases continued to spread on the hands of well-meaning doctors.
- Several explorers like Sir Richard Hawkins independently discovered the antiscorbutic properties of oranges and limes (Hawkins as early as the 1590s), and James Lind in 1747 even conducted a controlled experiment, published in 1753, in which he showed that citrus was superior to other folk methods for the curing of scurvy. The world, however, was not ready for the discovery, and sailors continued to suffer and die from this quite treatable nutritional disease for more than half a century after Lind’s demonstration. Scurvy was also rampant among the troops of both North and South during the American Civil War, though the means was available to prevent it, and as late as 1912 the famous explorer Robert Falcon Scott died on his way back from the South Pole, probably as the result of scurvy.
- An investigator before the First World War discovered the curative powers of penicillium mold extracts on infected animals, but could not interest his colleagues, although he published the work. It remained for Alexander Fleming, ignorant of the earlier work, to rediscover the antibacterial effect of penicillium in a laboratory accident in 1928.
- Alexis Carrel, the French-American scientist who won the 1912 Nobel Prize for his techniques of suturing blood vessels, demonstrated in 1910 that a saphenous vein graft between aorta and main coronary artery in animals could bypass a blockage there, and speculated that the technique might be useful in the treatment of angina. Although Carrel (with aviator Charles Lindbergh) later went on to develop the heart-lung machines that would make such surgery possible, the medical community contented itself for the next half-century with ineffective treatments for severe coronary heart disease, and it was not until 1967 that the saphenous-graft coronary bypass operation was employed on humans.
To the historian, some medical fields seem more plagued with delays in the acceptance of new ideas than others (the medical study of infectious disease has been prominent in this dubious regard, as noted), and the above examples are sad enough. Still, there is possibly one field of medicine which is at least the equal of infectious disease in its record of ignoring proven lifesaving strategies for the longest time, and that is the area to which we will turn for the remainder of this essay. The medical field in question is that of resuscitation, which is the art of restoring clinically dead people to life. It will come as no surprise that many of the issues debated in its history are also familiar to cryonicists. For example: when exactly is a person “dead,” and how do you tell?
Cryonicists looking into the history of resuscitation may find themselves reading with a sense of déjà vu. We’ve seen these controversies already, and we’ll see them again. Perhaps we can profit by exploring them further.
Historically, the art of resuscitation turns out to be old. The idea of resuscitating a seemingly dead person by more or less physical means occurs in the Hebrew scriptures. Both I Kings 17 and II Kings 4 contain descriptive elements of resuscitation by chest compression. In II Kings, Elisha also places his mouth on the child’s mouth. Clearly there is something more than mystical prayers and incantations going on. Perhaps the oral traditions which were later codified into these tales once contained descriptions of one or more real resuscitative events.
A few millennia later, things were better defined. Italian writings of the 15th century indicate that midwives had, even then, long been using mouth-to-mouth breathing techniques to resuscitate newborns who did not breathe spontaneously. These techniques were soon to be imitated in the mechanical experiments of the Enlightenment. Paracelsus (1493-1541), an alchemist and perhaps the greatest physician of his age, was said to have attempted the resuscitation of a corpse using bellows, a trick he perhaps picked up from Arabic medical writings. And Andreas Vesalius (1514-1564), the father of modern anatomy, reported successfully using bellows to resuscitate asphyxiated dogs.
Bellows may not always have been available, but physicians eventually learned (possibly again from laymen) that simple mouth-to-mouth resuscitation sometimes worked on recently asphyxiated adults just as it did on newborns. By the 1740s, several cases of successful mouth-to-mouth resuscitation had been reported, the most famous of which was Tossach’s 1744 report of the resuscitation of a clinically dead coal miner who had been suddenly overcome after descending into a burned-out mine. By the 1760s, in the wake of such reports, a number of groups advocating the resuscitation of drowned persons had sprung up in Europe. The thinking at this time in many places was strikingly modern. Here, by way of example, is a quote from a 1766 governmental edict from Zurich:
. . . Experience has shown that the drowned who are considered dead and that lay for some time under water have often been restored again and kept alive by proper maneuvers. From which one rightly concludes that life has not been completely suspended in the drowned, but that there is hope to save them from death if, as soon as they are withdrawn from the water, prompt and careful help is administered.
The Swiss may have been their usual regulation-happy selves about the subject, but in the rest of the Western world resuscitation was being pushed typically by entirely private societies (voluntary clubs). In 1774, a society was founded in London to promulgate the idea of attempting to resuscitate the dead in some circumstances. Called, after a bit of experimentation, the Society for the Recovery of Persons Apparently Drowned, it quickly evolved into the Humane Society (and still later, with official patronage and funding, the Royal Humane Society).
The Humane Society advocated techniques which were highly advanced. Three months after the society’s founding, as an example, a society member had the opportunity to minister to a 3-year-old child named Catherine Sophie Greenhill, who had fallen from an upper story window onto flagstones, and been pronounced dead. The society member, an apothecary named Squires, was on the scene within twenty minutes, and history records that he proceeded to give the clinically dead child several shocks through the chest with a portable electrostatic generator. This treatment caused her to regain pulse and respiration, and she eventually (after a time in coma) recovered fully.
[This story and other direct quotations, unless otherwise noted, are taken from The History of Anaesthesia, Richard S. Atkinson and Thomas B. Boulton, eds., International Congress and Symposium Series, #134, Parthenon Publishing Group, NJ.]
The resuscitation of little Catherine Greenhill was probably the first successful cardiac defibrillation of a human being, and it followed earlier suggestions by American scientist Benjamin Franklin and others that electricity might possibly be used to “revivify” the human body. So it proved, in certain circumstances. In 1788, a silver medal was awarded to Humane Society member Charles Kite, who was by this time not only advocating the resuscitation of victims in cardiac arrest with bellows and both oropharyngeal and nasolaryngeal intubation, but had also developed his own electrostatic revivifying machine which used Leyden jar capacitors in a way exactly analogous to the DC capacitative countershock of the modern cardiac defibrillator. (I must confess that to my mind all of these contraptions are as fantastic as the devices in a Flintstones cartoon, yet they actually existed. A time-traveling physician from the present could not have put together a better resuscitation kit, given the technology of the time.)
However amazing its progress was, the enlightened state of the late 18th century as regards resuscitation was not to last. From the very first, dark images from the human psyche began to gather in resistance to the new ideas. Technology never intervenes in a major way into the borderland between life and death without creating major anxieties and social backlash. Resuscitation had its problems.
To begin with, as the modern reader may guess, the 18th-century discovery that “death” was not a sure and objective state did not exactly sit well in the public mind. Our historical friend Charles Kite was of the opinion that not even putrefaction was a sure sign of permanent death, since it might also be due to advanced scurvy(!). However conservative this view might have been for Kite and his medical agenda, the public had its own concerns. If one could be mistaken for dead when one was in fact resuscitatable, what else did that imply?
The answer, of course, is that it implied that you could be buried alive. Not long after the first word-of-mouth reports of adult resuscitations began surfacing in the 1730s, the French author Jacques Benigne Winslow published a book descriptively titled The Uncertainty of the Signs of Death and the Danger of Precipitate Interments and Dissections. Now the real problem with the difficulty of defining death in a technical age was out of the bag: What if you got the diagnosis wrong?
The result of this realization was a psychological terror perhaps made familiar to the reader by some of the works of Edgar Allan Poe. But Poe, popularizing the problem for early 19th-century America, was late to the controversy. In 18th-century Europe the fear of premature burial and dissection was not just the preoccupation of macabre writers; whole classes of people were affected, albeit in different ways. Upper-class persons took to fitting coffins and crypts with special signaling devices which could be used to alert the outside world in case the occupant should inexplicably revive. The lower classes had their own special problems, too, since anatomical dissection (long a part of the punishment for heinous crimes because it denied the malefactor an intact bodily identity or a grave) had now taken on a special meaning. Here, for example, is what Ruth Richardson says of the dissection of criminals in her treatise Death, Dissection, and the Destitute, describing an incident in the 1820s in which one dissecting anatomist at Carlisle was killed, and another severely wounded, by the friends of an executed man:
. . . Although this was of course an extreme reaction, it was certainly the case that hanging the corpse in chains on a gibbet was popularly regarded as preferable to dissection. What later incredulous commentators seem to have missed or misunderstood was that in eighteenth and early nineteenth century popular belief, not only were the anatomists agents of the law, but they could be the agents of death. Genuine cases were known of incomplete hangings, in which the ‘dead’ were brought back to life, and plans for celebrated corpse-rescues centered on the possibility that the noose had not fully done its work. Folktales circulated about famous criminals revived by friends, and these ideas were fostered by the publicity which Humane Society resuscitations attracted after apparent drownings. Increased control over the body of the condemned rendered rescue and revival virtually impossible.
It was popularly understood that the surgeon’s official function and interest in a murderer’s corpse was not to revive, but rather to destroy it. Dissection was a very final process. It denied hope of survival – even the survival of identity after death[!]. Above all, it threw into relief the collaborative role of the medical profession in the actual execution of death. The Carlisle surgeons bore the brunt of the resentment and frustration felt by the dead man’s friends, for in their eyes the doctors had murdered him more surely than the hangman’s rope.
[Note: The denial of the body of the heinous criminal to the family has had a long history in law, and we see it historically employed in capital crimes which particularly outraged the public, even in relatively recent times. For instance, after execution in 1865 the bodies of the four Lincoln assassination conspirators were immediately buried in Army equipment boxes a few feet from the gallows in the prison yard in Washington’s Old Penitentiary, the same institution where the body of John Wilkes Booth had been secretly buried a month earlier. In 1901, after anarchist Leon Czolgosz was electrocuted for the assassination of President McKinley, his body was dissolved in acid in the prison basement. One cannot read such accounts without a deeper appreciation for the psychological power of the freshly dead body in an era when resuscitation was still somewhat magic. Even as late as 1946, after the ten members of the Nazi high command were hanged at Nuremberg, the Surgeon General of the United States himself was turned down when he asked that the brains be removed, preserved, and sent to Washington for study. Instead, the bodies were cremated immediately at Dachau and the ashes secretly scattered, with the specific intent that nothing remain. One may read into official penal policy in all these cases a more or less unconscious desire to destroy what was perceived as the continuing identity of persons already pronounced dead.]
By the end of the first quarter of the 19th century, when the riot over the dissection of the hanged man at Carlisle took place, things had reached a fever pitch. With scientific resuscitation, technology had intruded into the macabre. The horrific potential of the new electromechanical resuscitative technology had its influence on Mary Shelley, who in 1818 had first set out to write a ghost story, but instead ended up producing a cautionary tale of the technological resuscitation of a soulless corpse by a medical experimenter. Given the spirit of the times, the story touched a public nerve as though with one of the new electrical machines, and Frankenstein’s monster was an instant sensation.
And then something strange happened. Shortly after the publication of Shelley’s famous story, the new medicine began to go out of favor, and the science of resuscitation began to suffer on both the technical and mythological fronts. It happened for several reasons.
It is the propensity of all social movements to go too far. The Humane Society's problem was that, when it came to complicated biology, the late 18th century did not possess the experimental expertise necessary to separate the wheat from the chaff. Thus, within a few years after its founding, the Humane Society had gone from mouth-to-mouth resuscitation to the more impressive use of bellows. Following a number of instances of lung rupture with the bellows, however, these complicated and difficult-to-use devices were discarded early in the 19th century. Mouth-to-mouth resuscitation, unfortunately, was not reinstituted at that time, partly because of the new discovery of life-giving oxygen and the finding that expired air contained less of it (nobody bothered to find out if the difference was significant). For the next century and a quarter, therefore, resuscitative techniques centered around chest massage and armlift techniques, and mouth-to-mouth breathing did not return until the middle of the twentieth century.
Emergency electrical defibrillation fared no better. The new phenomenon of electricity had been transformed early on into a quack cure by the practice of “galvanism” (passing mild shocks through the body in an attempt to cure disease), and its reputation was accordingly tarnished. Later, and perhaps even more devastatingly, the charming new electricity was transmuted into a powerful and dangerous force by the giant transformers of Westinghouse (maligned from the first for their deadliness, in a PR campaign by rival industrialist-inventor Thomas Edison) and by the newfangled American electric chair. Technologies as well as people suffer from social stigmas. Mary Shelley had originally not specified the method of the revivification of her monster, but by 1931, in the new electrified America, Frankenstein’s monster came into the movies electrically charged. The upshot of all these social transformations was that therapeutic electric shock, so full of promise in the 1790s, did not again come into its own for lifesaving purposes (or even for psychiatric purposes, for that matter) until about the same time resuscitative breathing was being reassessed, in the late 1950s.
Other resuscitative techniques like chest/cardiac compression had been used sporadically since the late 19th century as well, but they too did not see acceptance until the late 1950s, when almost inexplicably all of the “modern” techniques came together approximately simultaneously in what we know as “cardiopulmonary resuscitation” (CPR). The world, apparently, was not ready until the Space Age for any of these techniques, and simply rejected them when brilliant and well-meaning scientists invented them too early.
Some General Observations On History
What are we to make of all this? Is there anything to be learned? In looking at the history of resuscitation and medicine we might ask if there are any observations to be made about it which might apply as well to the medicine of today and tomorrow.
The first thing we notice is that there seem to be some themes in medical history which occur again and again. Important medical discoveries, like important philosophical discoveries, seem quite likely to be made by outsiders. In some cases, the “outsiders” in medicine have been doctors working outside the traditional groves of academe, and in others, the important medical discoveries have not been made by doctors at all. Leeuwenhoek, for instance, was a draper, Pasteur a chemist, Fleming a bacteriologist. Recall that mouth-to-mouth resuscitation was the secret of midwives, and passed to medicine quite late. The original Humane Society, though founded by a doctor, was less a professional medical group than a group of ordinary and somewhat evangelistic citizens who (in exactly the manner of cryonicists) had banded together for humanitarian reasons and out of fear of being buried alive.

A second observation which can be made about the history of medicine and technology in general is that discoveries depend for acceptance upon a very complex social milieu which may have little to do with technology itself. A technological advance will not be accepted in a world which is not ready for it socially. The idea of using a steam engine to replace human muscle, for example, will not catch on in a world where human muscle power, because of slavery, is cheap. Conversely, a device like the cotton gin, which replaces delicate work with muscle work, will instantly be accepted in such a world.
For an analogous example of this phenomenon from medicine, we might consider the history of anesthesia. As we know from their writings, Muslim physicians practiced various forms of anesthesia during surgery back as far as the 8th century A.D. In Christendom, conversely, where the idea of redemptive suffering held sway, anesthesia took much longer to catch on. Thus, the anesthetic properties of nitrous oxide had been widely and publicly noted by Sir Humphry Davy as early as 1798, yet it was not until the 1840s that an obscure general practitioner from Georgia and a couple of part-time dentists (remember our observation about outsiders) began to try out inhaled anesthetics for surgical purposes. Even at that, there was an ecclesiastical outcry when Queen Victoria requested chloroform for childbirth, soon after the first anesthetic demonstration in America. One prominent cleric complained that “travail and pain” in childbirth had been ordained by God in the Bible, and that therefore anesthesia was against the will of God. (Others pointed out Genesis 2:21, where Adam is put to sleep as the rib is taken for Eve. Scriptural wars can be quite inventive.)
What then held up full cardiopulmonary resuscitation until the late 1950s, even though the world had discovered all of its essential features before 1900? We can only speculate, but the answer may lie in the fundamental change in the way people began to relate to and trust technology between 1900 and 1950 – a social change as profound as any a generation of humans has ever had to cope with. (See Frederick Lewis Allen’s book The Big Change: America Transforms Itself, 1900-1950.) Mythmaking, as ever, played a role. If technology first crept into our nightmares with Frankenstein, it later (redemptively) crept into our heroic myths and won some measure of acceptance. Thus, if the new 20th-century technology of aviation was capable of creating a new kind of hero like Charles Lindbergh, the public was also willing to let him have a technological shot at Death with his new artificial heart machine. In any case, the mantle of Dr. Frankenstein had by the middle of the 20th century passed to the physicists and their atom bombs, and medicine, for the time being, was at last back in the heroic mode. This situation continued until the development of the modern ICU and “life support,” at which time doctors and medical technology began taking criticism once again.
In the context of some of the foregoing observations, it is interesting to consider cryonics as an unaccepted technical idea. The study of history always offers perspective. Thus, if we cryonicists shudder with dread over the idea of a “premature” burial, or the idea of a viable person being destroyed by the autopsy knife, we may be a bit chastened to find that this conflict is already two centuries old, and not over concerns invented entirely by us.
As a practical matter, it might first be well to remind ourselves of the sources of danger in these situations. It takes only a change in point of view to regard a person in full cardiac arrest as being in a desperate and life-threatening situation for not just a few minutes, but (perhaps) days. This, in turn, may change the whole tenor of the game, for having a loved one in a desperate situation can engender the most desperate acts. Historically, as we have seen, violence has been committed over the question of dissecting a relative who might be viable, and as we have also seen, this very situation is a prime area of potential conflict between cryonicists and society. (We have seen cryonicists taken into custody over this question, though fortunately, not yet for long.) All of this should re-emphasize the need to do tremendous amounts of prior legal preparation, if we are not eventually to be faced with the otherwise inevitable situation in which a cryonicist is charged with the assault of a coroner or pathologist.
Of course, the question of viability holds a special danger for cryonicists, over and above our potential conflict with government. If a man in the throes of grief is capable of killing on behalf of a potentially viable “deanimated” loved one, then the refusal of “last-minute” cases (no matter the circumstances) places cryonics organizations in a potentially explosive confrontation with the public as well. Here, cryonicists are the potential targets. We have seen cases in the news where distraught relatives have killed ER physicians in the midst of grief and misplaced anger. Might not the same violent action be directed at representatives of a cryonics organization which was in the position of being (technically) able to rescue a viable person, but (for necessary financial reasons) refused to do so? If history is not to be repeated, it is clear that security concerns are going to have to be paramount for cryonicists in the future.
What about wider concerns? Here, too, the past has something to teach, this time about groups of concerned people who began as outsiders to established medicine, yet later prevailed. Although the cry of “They laughed at the Wright brothers, too!” has long been the defense of crackpots, even a cursory examination of the history of medicine shows that the initial non-acceptance of any important new idea by that profession is almost de rigueur. Thus, although the mere fact of medical non-acceptance does not prove the cryonicists’ cause, cryonicists certainly need not suffer embarrassment on that score. The long view of things is helpful. At present, it seems likely that cryonicists play the role of the midwives of old, practicing their own peculiar lifesaving ministrations in parallel with medicine. Medicine’s recognition of cryonics, like its belated recognition of resuscitation, will come.
When? Unfortunately, history is not prophecy. The answer from the foregoing discussion, if there is one, is that it will come when society is ready for it. We know that humans are not naturally very good scientists (our brains weren’t developed for that), and very primitive needs and fears drive both acceptance and rejection of new technologies. As we’ve noted, the fear of premature burial stimulated a series of electrical defibrillation experiments in the late 18th and early 19th centuries, all of which were then suppressed for more than a century, partly because the idea of shocking people back to life had in turn been killed by a single well-placed monster myth. Human beings and their societies run on good stories, not scientific reports. Similarly, American society of the 1960s, gearing up for a holy war on cancer and heart disease and intoxicated with the Salk myth of the all-powerful medical researcher, was not ready for cryonics. By the late 1980s, however, when it had begun to become apparent that heart disease and cancer (not to mention aging) were a lot more intractable than polio, there existed in this society at least a subculture that was now ready to listen to another idea for cheating death.
And so here we are. From a strictly technical view, cryonics as we know it might have been practiced 70 or 80 years ago. Technically we might have been ready for it, but culturally we were not. What is more (let’s face it), American society as a whole is still not ready to listen to the idea of radically extended lifespans. The good news, however, is that with the publication of a number of popular gerontology books in the last decade, things are changing slowly. The social milieu (not to mention the age of the population) is changing, and scientific immortalists are getting ready for another try at the hearts of the public. As has been argued in previous essays, this change will require yet another set of new myths (hero stories) to counter Frankenstein’s monster, just as our out-of-body experience stories now let us, as a society, deal with the ambiguity of complex resuscitations from clinical death (see the film Flatliners). In the case of cryonics, the new myths will come, too. We can only hope for all our sakes that this necessary process doesn’t take as long as it sometimes has in the past.