Cryonics, June 1989
by Steven B. Harris, M.D.
As Mayor of the Munchkin City
In the county of the Land of Oz,
I welcome you most regally. . . .
But we’ve got to verify it legally
To see (to see)
If she (if she)
Is morally, ethically,
DEAD. . . .
— The Munchkins
As coroner I must aver
I’ve thoroughly examined her —
And she’s not only merely dead,
She’s really most sincerely dead.
— The Coroner of Oz
Though no man can draw a stroke between the confines of day and night,
yet light and darkness are upon the whole tolerably distinguishable.
— Edmund Burke
The conundrum which struck the political philosopher Edmund Burke more than two centuries ago remains with us today. The condition we term “night” does indeed turn into the condition we call “day,” and it does so with no sharp dividing line between the two. And yet we all agree that “night” and “day” are clearly different states.
What is more, this sort of thing happens all the time. Our world contains numerous examples of processes in which “state A” is transformed smoothly and continuously into a somewhat different “state B.” Wet becomes dry, for example. One kind of weather verges imperceptibly into another. Organisms grow and change form, and so on.
None of this is necessarily a bad thing, and in fact it is continuous change which keeps planet Earth from becoming boring. Transformation is interesting and pleasant to watch, and it is even more pleasant to watch if it is observed passively with no attempt to classify what one is seeing. But when one begins to think. . . .
This essay will argue that language and its penchant for classification is the tree of “knowledge” which forever disturbs the Eden of human tranquility. Whenever we humans talk, we mark out lines and boundaries in continuous natural processes. Our words do that for us. The boundaries which words create may or may not be there in actuality; we draw them in anyway, because classification and analysis are essential to the human thinking process.
But line-drawing can also lead to trouble. This essay is about the kind of trouble to which it can lead.
Part I. Black and White — and Gray
Let us begin our discussion of the boundaries produced by language by considering a very ordinary transformation in the universe we live in: that of a black cup of coffee being sweetened. Since “black coffee” means, by definition, a cup of coffee with no sugar or cream in it, such a cup of coffee does not start out sweet. Nor does it become sweet if one adds a single sugar crystal and stirs. Nor if one adds two crystals. Or three. But if one has the patience to continue adding sugar crystals one by one, then by the time one has added the many thousands of crystals in several tablespoons of sugar, one will have arrived at a “coffee state” which will be judged sweet by any drinker whose palate is in working order.
So far, so good. The reader will notice that the cup of coffee has now become an example of the kind of state change of which we spoke in the introduction. State A (not sweet) has been transformed to state B (sweet). But now suppose we ask a naive question: At what point does the cup of coffee become sweet? There is no question that it does make the transition, but suppose what we want to know is which sugar crystal does it?
A little thought will show that the answer is not clear, for the issue is a very subjective one. The problem is that there are many intermediate quantities of sugar which, if added to a cup of black coffee and stirred, would produce considerable disagreement among drinkers as to whether that particular cup deserved the label of “sweet.” The judgement of sweetness is, in fact, quite literally a matter of taste, and varies between persons. It would thus be fair to say that in the matter of sweetening a cup of coffee, NO particular sugar crystal does the deed. As sugar is added slowly to it, a cup of coffee does not become sweet as an event, but rather as a process. “Sweet” and “nonsweet” are tolerably distinguishable at the extremes, as Burke would have said, but “no man can draw a stroke between them” when one is changed slowly to the other.
Judging a Cup of Coffee Objectively
Or can one? Let us suppose now for the sake of argument that a certain society is unhappy with the state of affairs in the transformation of a black cup of coffee into a sweet cup of coffee. Perhaps it is a society of chronically anxious people — the sort of people who are uncomfortable with ambiguity. If a society does not like the judgement of sweetness in a cup of coffee to be a subjective one, is there anything which can be done to make things more objective? More. . . scientific?
Without doubt, a society could certainly go through the motions of being scientific. It could, for instance, begin by defining “sweet” in terms of the sugar concentration in coffee. That would in turn allow an exact calculation of the point at which a given cup of coffee became “sweet” as sugar crystals were added, and it would even allow people to identify the exact crystal which pushed things “over the line.” The only problem with this formal approach, needless to say, is that the decision of where to define the “sweet” concentration in the first place would necessarily remain a matter of personal opinion. A line might be drawn and labeled “sweet,” but it would have to be done subjectively. Thus, an anxious society would only end up back where it started.
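The arbitrariness of this “scientific” approach can be seen in a small sketch. The cup volume, crystal mass, and thresholds below are all invented for illustration; the only point is that the “exact crystal” which pushes the coffee over the line moves wherever the subjective line is drawn.

```python
CUP_ML = 240        # volume of a hypothetical cup of coffee, in milliliters
CRYSTAL_UG = 600    # assumed mass of one sugar crystal, in micrograms

def crossing_crystal(threshold_mg_per_100ml):
    """Number of the single crystal that pushes the cup 'over the line'."""
    ug_needed = threshold_mg_per_100ml * CUP_ML * 1000 // 100  # micrograms required
    return -(-ug_needed // CRYSTAL_UG)                         # ceiling division

# The "objective" answer moves wherever the subjective line is drawn:
print(crossing_crystal(500))    # one defensible line -> crystal number 2000
print(crossing_crystal(1000))   # an equally defensible line -> crystal number 4000
```

Any instrument-laden laboratory could compute such a crystal number to arbitrary precision; what no laboratory can do is justify the choice of threshold itself.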
Or perhaps it would be more correct to say that scientifically it would end up where it started — but perhaps not politically. Sweetening a cup of coffee, like most occupations, is subject to the addition of the trappings of science, even if it is not subject to the full methods. For unfortunately the more instruments one has and the more numbers one generates, the more objective any process may seem, whether it actually is or not. Thus, although it might not be possible to be more objective about the sweetness of a cup of coffee, it might indeed be possible to fool oneself and others that one is doing so. In anxious societies, after all, formality can be important.
Are there societies which would try to do such a thing, then? We now consider an actual case. For those who found the above example amusing, consider, in place of a cup of coffee, the fluids of a human body. In place of crystals of sugar, consider instead molecules of ethyl alcohol. And in place of the term “sweet,” consider the term “intoxicated.”
In short, consider the matter of drunken driving enforcement. Here, of course, society is faced with a terrible problem. For any given concentration of alcohol in the blood, the amount of driving impairment for different persons will vary significantly. Even the average amount of impairment will vary in a smooth and continuous fashion with increasing concentration of alcohol, so that there still remains the subjective task of deciding how much performance-impairment is acceptable, and how much is not. Unless one simply outlaws having any alcohol in the blood at all while operating a vehicle (as is intelligently done in Sweden), the idea of “drunk driving” is one in which there is subjectivity at every turn.
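A toy model makes the point concrete. The drivers and impairment scores below are entirely invented, and 0.08% is used only because it was a common American legal limit; the sketch merely shows that a fixed line on a continuum can label a barely affected driver “intoxicated” while sparing a badly impaired one.

```python
# (name, blood alcohol %, hypothetical driving-impairment score 0-100)
drivers = [
    ("Abel",  0.06, 55),   # under the line, yet badly impaired
    ("Baker", 0.09, 20),   # over the line, yet barely affected
    ("Cain",  0.12, 70),
    ("Dora",  0.04, 10),
]

LEGAL_LINE = 0.08  # the semi-arbitrary legal threshold

labeled_drunk = sorted(name for name, bac, _ in drivers if bac >= LEGAL_LINE)
most_impaired_sober = max(
    (d for d in drivers if d[1] < LEGAL_LINE), key=lambda d: d[2]
)

print(labeled_drunk)             # ['Baker', 'Cain'] -- the label follows the line
print(most_impaired_sober[0])    # Abel: legally "sober," yet worse off than Baker
```

The label tracks the line, not the impairment; the line is what the courtroom sees.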
So what is a society to do? Well, needless to say, things become easier for all concerned if one can pick a semi-arbitrary blood alcohol concentration and label persons who fall to one side of the line as “intoxicated.” Labels do make a difference — in law they often determine at least what charges are filed. Thus, if a man is brought to court on a charge of “driving while intoxicated,” for instance, the burden of proof falls on the defendant once the magic blood alcohol numbers have been given. In other words, once the term “intoxicated” has been applied, the legal defense has the burden of going through the arguments about subjective standards and gray areas, while all the time the jury is thinking about what a clever job the defense lawyer is doing in trying to get a guy off who has been scientifically proved to have been drunk.
The Law in General
For the benefit of all those readers who feel emotionally so strongly about the issue of drunk driving that they had difficulty with the preceding discussion, it should be pointed out that it is the nature of human law in most cases to draw lines in spectrums of continuous processes, and the legal definition of “drunk driving” is only one of a million examples. The laws of men are binary, for they recognize just two states: legal and illegal. Unfortunately, the nature of the world, by and large, is smoothly continuous, and that contrast leads to interesting situations.
One can get a parking ticket for parking 24 feet from a hydrant, for instance, but not 26 feet. One can be put in jail for buying liquor the day before one’s 18th birthday, but not a day later, and so on. It isn’t that anyone seriously believes that a parked car is significantly more a threat to fire safety at 24 feet from a hydrant than 26, or for that matter that anyone is significantly more mature at exactly 18 years old than a day shy of that age. It is just that one has to draw the line somewhere.
And, of course, one does. The mild paradox of Edmund Burke which opened this essay has much application to the law. The most flagrant violations of the law are often obvious, yet at the same time objective places for marking lines of illegality often do not exist. Indeed, almost everyone passes through a phase sometime during the process of growing up where they first come to realize the basic unfairness of drawing binary legal lines in a continuous world. But just as surely, the resultant cynicism soon passes for people of normal intelligence once they come to realize a bit later that there really isn’t a better way to do things as long as any laws at all are to be made.
In the real world, the legal system attempts to mitigate the basic unfairness of “line-drawing” in a number of ways. These include 1) having multiple categories of gravity for offenses, 2) only prosecuting the more flagrant violations, and 3) having a system of lawyers skilled at making juries see the possibility of grey areas in the law. The result is a system that works on the whole, but which may be a nightmare in any individual case. For of course multiple categories of crime still do not perfectly mirror a continuous world; and sometimes overzealous police or politically motivated prosecutors decide to prosecute violations that are not so flagrant; and finally the presence of grey areas often means that the skill of the lawyer, rather than the guilt of the accused, determines the ultimate verdict.
Part II: Law and Language
The above discussion is meant to prime the reader for the major problem to be discussed in this essay. It is this: there are times when line-drawing is necessary and fair, others when it is necessary and unfair, and still others when it is both unnecessary and unfair but where the fact is not recognized because the lines have been mistaken for reality. The law is a profession dependent upon language, and as intimated earlier, one of the reasons why the utter subjectivity of most law is not more apparent is that the subjectivity of law is well hidden in the nature of language itself.
As noted in the introduction, when speaking about the universe we live in we run immediately into difficulty when describing continuous transformation and change. The very act of labeling a state or an object with a particular word, is equivalent to drawing a mental line around it which some other words dare not cross. When one says “sweet,” or “drunk” or “daytime,” for instance, one is marking out a linguistic territory that has borders, even if those borders are ill-defined ones that may shade into a twilight zone of doubt when examined closely.
The act of “naming” things tends to encourage the practice of putting mental borders on processes and states where there may in fact be none in reality. When this happens, and the map (language) is confused for the territory (reality), the arbitrary lines we draw may be erroneously taken for real. The resulting unedifying semantic debates about such things as whether or not a cup of coffee “really” is sweet, or the man “really” is drunk, or the person “really” is an adult, in the absence of any natural definitions of “sweet,” “drunk,” or “adult,” are among mankind’s more enduring pastimes and follies. It was a folly recognized in Buddhist philosophy 2,500 years ago, but one which seems destined to be with us forever.
S. I. Hayakawa, famous popular explainer of the study of semantics, has this to say about a related situation:
The habit of trusting one’s definitions. . . is one of the most stubborn remnants of primitivism to affect us. It does not matter if the verbal associations are beautifully systematic, as among the neo-Aristotelian reformers of modern education, or random, as among the uneducated. Words, and whatever words may suggest, are not the things they stand for, and education that fails to emphasize this fact is more than likely to leave students imprisoned and victimized by their linguistic conditioning, rather than enlightened and liberated by it.
To people so imprisoned, it inevitably appears that if certain individuals have a name in common — say “criminals” — they must have the “essential attribute” of “criminality” in common, while “noncriminals,” of course, do not possess that “attribute.” The profound sense that there is something different between people who have been in jail and those who have not is one of the most cherished beliefs both of the respectable rich and the respectable poor. Similarly, as mentioned earlier, Jews are supposed by many to have in common the attribute of “Jewishness,” which distinguishes them from non-Jews. Now what is this “Jewishness”? Define it any way you like — take Hitler’s definition, or anyone else’s — and from that point on it is not necessary to examine Jews. You know what they are like without even looking, because you have what Aristotle called “knowledge of universals,” which “is more precious than sense perceptions and than intuition.”
Hayakawa’s invocation of Aristotle here is in recognition of the ancient idea in philosophy that mental attributes of things (such as man-made classifications) were to be given some of the same sort of respect as the more measurable and continuous attributes such as (for instance) dimension and texture. Aristotle’s “universals” are created by the classificational lines and boundaries which language draws, and the essential question these linguistic boundaries create is always one of how objectively real they are.
Aristotle’s ideas in this regard are actually rather mild in contrast to those of his teacher Plato, who had taken the idea even a step further and decided that the common classificational attributes of objects were to be given all the respect. Plato, in fact, had decided that the attributes of objects were the only reality there was, and that the individual objects themselves were merely shadows or illusions. Thus, for Plato (as an example) no individual table was real, but “tableness” as an ideal essence or attribute, had a real existence. Similarly, for Plato, there were no real horses, but only various imperfect manifestations of an ideal “horsehood,” and so on.
The Roman Catholic Church was eventually to find many of Plato’s philosophical ideas useful. Thus, for example, in the Roman church, individual priests came to be seen as only imperfect manifestations of an ideal “priesthood,” and so on. The early Christian church was also influenced (through early Christian writers such as John) by the philosophy of the Greek Stoics. The Stoic school held that the material universe was pervaded by a kind of ordering “force” (Logos), which was identified with mind, deity, soul, and (most importantly for our discussion) language. Following Platonism, then, many of the idealistic (linguistic) attributes of objects were given a separate metaphysical reality in Christian thought. Following the Stoics, language itself became somewhat deified (“the word” = “God”), and complicated liturgical formulas involving language were held to influence objective reality, such as the “transubstantiation” of sacraments, etc. Formal linguistic “line drawing” ceremonies (“spells”) are important in both religion and magic. In fact, the magician’s “hocus pocus” is really the hoc est corpus of the Roman Catholic eucharist in disguise.
The Law Again
Classical Roman Law (from which our law is derived) was constructed under the influence of certain aspects of Greek philosophy, and therefore contains many Stoic and Platonic ideas. Especially Platonic is Western law’s infatuation with the separate “reality” created by words and labels, such as “intoxicated,” “criminal,” “adult,” and so on.
Both religion and law in Western society have thus acted historically to perpetuate the myth that language and terminology may create some special objective change in the universe. In fact, if one is under the influence of Plato in this fashion (either directly or indirectly) one may be tempted to believe that one’s mental classifications of things are enforced by separate metaphysical characteristics of objects which correspond with the language that one uses.
Some examples are needed to illustrate. Let us now examine in detail the kinds of world views to which this philosophy can lead.
Part III. Putting Lines in Biological Transformations
As a first example, let us begin with a transformational process with which religion and the law must contend. Consider a fertilized human ovum, which has few of the characteristics ordinarily associated with a baby. To call a fertilized ovum a “baby” would be akin to calling a cornerstone and a set of blueprints a “building,” or to calling two teaspoons of soda and a recipe, a “cake.” These are things we do not do. However, it is also true that an average of eight and a half months after conception, a living organism is normally born which is universally regarded by society as a baby and a human being. A smooth and continuous process has happened between these two events of conception and birth. “State A” has been transformed into “state B,” with never a clear dividing line between the two. The cup of coffee has become sweet.
The law, which is zealous about the protection of “persons” and “babies,” of course has a problem here. If it is persons (babies) that one wishes to protect under the law, then one is forced to ask an embarrassing question: when exactly does the fertilized ovum become a baby or a person? At this point, it should be apparent to the reader that the question is essentially a matter of taste, as in the matter of the coffee. However, as also in the example of the coffee, many other approaches to the question have historically been taken by anxious persons and societies with an intolerance of ambiguity.
The Fundamentalist Christian churches, notably, have provided several “answers.” In typical Platonic fashion the early Christians had come to see the “essence” of human beings in metaphysical terms. Aristotle had thought that humans possessed separate “souls” for each of the three linguistic quantities of “life,” “locomotion/animalness,” and “humanity” which he recognized in humans. The later Christians however, under the influence of Greek philosophy and myth, had long since pared the essence of humanity down to just one economical “spirit,” which was thought to not only confer “humanity,” but also personal identity. Accordingly, it seemed natural to assume that a fetus objectively became a “person” (a linguistic term) when it received a human spirit.
Interestingly, the word “spirit” is associated with “breath” or “breath of life” in most ancient languages. In Hebrew and Greek, spirit and breath are the same word. [A little-known fact is that the historical reason people say the blessing “Gesundheit” (good health) to sneezers is that it was once thought that a sneezer momentarily blew out his own soul, putting himself temporarily at risk for demonic possession of his untenanted body.] Thus, it seemed natural to equate the drawing of the first breath of life with “personhood.” This attitude prevailed in the early church, and throughout most of the Middle Ages miscarried fetuses which did not draw breath were not even buried in hallowed ground by Roman Catholics, but were simply discarded without a second thought.
Sometime later, of course, after conception was more thoroughly understood and therapeutic abortion became a controversial issue, the Roman Catholic Church announced that “ensoulment” occurred at the precise moment of fertilization. By this time however (and unfortunately for the Catholic Church) the world had turned Protestant and humanistic. Abortion was outlawed for a time, but eventually in 1973 it became legal everywhere in the United States.
The result of this ruling was literally screams of bloody murder from certain factions, and a history of protest with which the reader will be familiar. Especially loud were protests from American Christian Fundamentalists, a religious category defined by its inability to live with ambiguity, as previously illustrated by its lobbying activities against the theory of evolution. What most disconcerted the antievolutionists in the case of Darwinism, interestingly, was precisely the idea that apelike primates could change gradually into humans, without any clear dividing line between the two. This “Burkean” paradox was too much for the fundamentalists, who craved to know the precise moment when the First Man appeared, and wanted to know his name and address. [Footnote: I’ve been hard on the fundamentalists here, but scientists can be just as silly about drawing linguistic boundaries where there are none in reality. When someone finds a fossil skull with a brain volume of 700 cc’s, for instance, there is enormous pressure to name it Homo something rather than Australopithecus something. After all, who wouldn’t rather be the paleoarcheologist who found the oldest man, instead of just the one who found one more late ape? Archaeologists are forever talking about putting a baseball cap on such and such a creature, and taking it on the subway without having anyone scream; a measure of the crudity of thought-experiment to which one will descend if the result will allow one to justify using the taxonomic term one wants to use.]
In the United States the law, which was not controlled by the Catholic church, and which lacked a “soul detector,” was at an impasse. To the law, pregnancy was a huge biological grey area, and so the law did what it usually does when faced with a large grey area: it proceeded to draw arbitrary lines. Specifically, in “Roe v. Wade” the U.S. Supreme Court drew legal lines at conception, three months after conception, six months after conception, and birth. During each of the three resultant intervals the State’s interest was defined. The rationale for the six-month demarcation was presumably that this was close to the time of theoretical viability outside the womb (and still is not too far away from it even in 1989). The three-month line presumably had something to do with the time when any responsible person who was going to have an abortion should have had it already. But in any case, the Court did not explain its reasoning for any of the times given.
In doing this the Court interestingly came in for the same sorts of arguments that most people learn in adolescence to avoid when speaking of law, and such protests serve as a nice illustration of the basic frustration with binary law which lies just under the surface in all of us. Specifically, the three- and six-month legal lines were denounced by fundamentalists as “arbitrary”(!) — as though arbitrariness were somehow not the nature of all laws when dealing with grey areas in transitional processes. There was also much protest about the fact that the law regarded fetal status as changing completely with the comparatively short process of birth — a protest which under the circumstances was equivalent to protesting the fact that the law regards a person as in violation when driving just a few miles per hour over the speed limit, but not a few under. Even more incredibly, this sort of protest was made by people who wished the law to similarly change the legal status of genetic material after the comparatively short process of fertilization — an illustration of the illogical lengths to which people will go when attempting to enforce a metaphysical agenda.
To this point we have discussed issues which have been in the news, and which impinge on cryonicists as citizens, but which do not affect cryonics per se. We now move on to a subject which affects cryonics directly and inescapably.
It is seldom realized that the issue of death is potentially as politically explosive as the issue of abortion. The reason is that many scientifically sophisticated persons now realize that in the case of death we again deal not with an event, but with a smoothly continuous transformation process from state A to state B. Human beings come into existence a little bit at a time, as the abortion issue has taught us. Unfortunately for the long term future peace of mind of cryonicists, humans go out of existence in the same way.
A living organism is a package, or pattern of information. Certain atoms in the package may be changed (replaced) as metabolism goes on, but the organism retains its identity throughout this process, just as (for example) a volume of a novel would retain its identity even if its pages are replaced with photocopies. Today we know that certain living organisms can be dehydrated, or even frozen at nearly absolute zero (processes which stop all metabolism), and yet can still be revived as long as their building pattern is not damaged. “Life” is not metabolism, it is information.
The great difficulty in speaking of the destruction of organisms, is the word “death.” If “life” is information, then “death” may be usefully defined as the complete loss of information. Thus, a man who has been cremated is pretty clearly dead, because the information is gone. But what shall we say about a child who has fallen through the ice on a river and “drowned” an hour ago? Or two hours ago, or ten hours ago? In all of these cases, most of the information is certainly still present, even though heartbeat and respiration have long since ceased. Although only the child in the first instance can be revived with the technology of 1989, to use this fact as part of a supposedly “objective” definition of “death” would be chauvinistic to our present age. A generation ago, after all, none of these children would have been revivable, and there is no reason to think that things will not change again in the future. In fact, we expect that they will change as resuscitation technology improves.
A human who has decayed to a skeleton is dead. There is an absolute objective difference between life and death, then, but the transition between them is ordinarily a slow one, with no clear dividing line. Again we are confronted with the paradox of Burke. And once again in the issue of death we must deal with people who have little tolerance for ambiguity, and who wish to use the institutions of language and religion and law to draw an arbitrary line in a continuous process.
The law’s interest in the matter, of course, is obvious. If a society is unable to define the difference between life and death, it cannot even define the difference between murder and simple mutilation of a corpse. Historically, then, the law has been forced to draw a line in the process of cessation of vital functions, albeit a somewhat arbitrary one. A convenient place to do so up until the mid-twentieth century was at the point when “internal motion” (heartbeat and breathing) ceased, in what physicians of today call “clinical death.” This point was convenient because it not only marked the limit of “viability” (hope of return to normal function), but also because it was associated with the religious connotation of breathing as being associated with the presence of the spirit.
In the middle of the twentieth century, however, things began to become complicated. Doctors learned how to restart hearts with electrical cardioversion, and CPR and heart-lung machines began to make it possible to maintain persons for variable lengths of time without any intrinsic heart or ventilatory function at all. Worse still, the concept of “brain death” proved inapplicable to acute situations, because the diagnosis could only be made in retrospect, at a time when the brain had already been almost completely destroyed. Brain death thus did not help in line drawing unless people were satisfied with drawing the line well after the fact.
The law did what it could. In California, death was redefined as the “irreversible cessation of circulatory and respiratory function.” Unfortunately the word “irreversible” promised difficulty, since it made the time of death highly variable among individuals whose hearts had stopped, and also because the diagnosis of death in theory could not be made for some time after clinical death without an attempt at resuscitation which was inappropriate for many people (folks with terminal illnesses, etc.). Thus, the California law was widely ignored by physicians, who continued to pronounce people (whom they did not wish to resuscitate) dead when their hearts stopped, just as they had always done.
The danger inherent in the above state of things ought to be apparent to any cryonicist. Although legal lines may be initially drawn somewhat arbitrarily, they do have the virtue of being easily identified and complied with; indeed, that is their purpose. But where a line is not clearly drawn in a process which itself is murky, the law becomes worse than useless. The word “irreversible” implies a functional definition of “death” and a functional test. You can’t tell if function is “irreversible” in many instances unless you try to reverse it! If the test (attempted resuscitation) is not done, it is impossible to imagine how one is to tell the legal status of anyone for a very long time after their hearts have stopped. This being the case, the matter of when to prosecute for suspected violations of law in this area would seem to be completely arbitrary.
A well known method of social control is to pass laws with such a structure as to guarantee universal violation, then enforce them selectively against undesirables. Although physicians involved in standard medical practice may never be prosecuted for violation of the law’s new definition of death (though in violation of it every day), it is entirely possible that physicians (and nonphysicians) involved in cryonics may be.
To make things worse, all of the above legal problems are complicated by a religious overlay. Many religions conceive of death as a sharp event which takes place when the soul leaves the body. The religious confusion generated in the last 25 years by the changing technological definition of death is to be gauged by the proliferation of stories about people whose souls left their bodies during clinical death and then were jerked back by resuscitation (one envisions a sort of Platonic/metaphysical elastic paddle-ball for the souls of folks who are resuscitated several times). Thus, many people are convinced that death does occur as an event sometime after the heart stops, and therefore that murder of a person already in cardiac arrest may indeed be theoretically possible. And once the possibility is admitted of a crime for which there are no hard and fast defining criteria, then the way becomes open for prosecution (or persecution) of people who just seem vaguely “up to no good.”
Thus, it might be entirely possible for a jury of believers to convict on the “impression” that someone was “alive” or “dead” when they underwent a given cryonics procedure, in somewhat the same fashion that an Inquisitorial tribunal might have judged persons guilty of heresy in the Middle Ages.
Part IV. Transitional Ceremonies
All societies have ways of dealing with gradual social changes in which status would otherwise not be clear. Some of these social functions are grouped under the heading of “rites of passage,” and they are often elaborate. An example is the puberty ceremony in many cultures (for example, the bar mitzvah in Jewish culture) in which adolescents are formally accepted into society as adults.
In areas where the status of a social transition or the new status of an individual would not otherwise be immediately clear to the average member of society, ceremonies may be especially ornate. Examples here are marriage ceremonies and award ceremonies of various kinds. The ceremony itself often becomes part of the new status of the individual, and one consequently sometimes sees ceremonies performed even when they make little physical sense. For example, empty coffins are sometimes buried when missing persons are declared formally dead.
Transition ceremonies and rites of passage are only extensions of our definitional language. They are used to draw lines in continuous processes so as to minimize confusion and anxiety in a society. As in law, they seem to be necessary. Also as in law, however, they become dangerous when the people who perform them come to believe that their words make an objective change in reality. An official pronouncement of marriage by a priest is such a transition ceremony. An official pronouncement of death by a doctor is such a transition ceremony. The danger comes when a society forgets that the one is no more an indication of an objective physical change than the other.
We began this essay with a scene from the 1939 MGM production of The Wizard of Oz. In the scene, Dorothy’s house has come down in The Land of Oz on top of the Wicked Witch of the East, crushing her. The Munchkins are still anxious, however, and they need absolute assurance that the wicked witch is dead by all possible definitions. In the movie this assurance is at last provided by the Munchkin coroner, who draws himself up importantly (he is four feet tall), produces a huge death certificate, and makes the formal pronouncement. The all-important social line must be drawn even in Munchkin Land.
Sad to say, as we look about us here in Southern California in the year 1989, we find that things are not much different than in the wildest fantasies of L. Frank Baum. The Dark Ages, we must remember, were only 25 generations ago. We live in a pretty crazy society, still bound up with intellectual baggage from a magical and mystical past — and its way of looking at things is sometimes completely irrational. Our job now is to find ways of dealing with it. A sense of humor is helpful, particularly if one is confronted with a coroner who seems straight from the Land of Oz, or assorted Munchkins who have begun to worry about whether a person whose heart has stopped is only merely dead, or is really most sincerely dead.
Cryonics has passed a threshold of some sort in this past year, and we really aren’t in Kansas anymore. Courage is now required, and brains, and heart. Let us hope that we find that these things were always within us, whether we knew it or not.