Posts Tagged ‘John W. Campbell’
In my recent piece on Longreads about L. Ron Hubbard and the origins of Scientology, I note that Hubbard initially didn’t want the first important article on dianetics to appear in Astounding Science Fiction at all. In April of 1949, he made efforts to reach out to such organizations as the American Psychiatric Association, the American Psychological Association, and the Gerontological Society in Baltimore, and he only turned to the science fiction editor John W. Campbell after all of these earlier attempts had failed. Most of the standard biographies of Hubbard mention this fact, but what isn’t always emphasized is that even Campbell, who became one of Hubbard’s most passionate supporters, didn’t seem all that eager to publish the piece in Astounding. Campbell knew perfectly well that printing this material in a pulp magazine would make it hard for it to be taken seriously, and he was also concerned that it would be mistaken for a hoax article, like Isaac Asimov’s story about the fictional compound thiotimoline. As a result, even as Campbell served as a key member of the team that was developing dianetics in Bay Head, New Jersey, he continued to push for it to make its first appearance in a professional journal. Later that year, Dr. Joseph Winter, their third crucial collaborator, reached out “informally” about a paper to the Journal of the American Medical Association, only to be told that it lacked sufficient evidence, and he got much the same response from the American Journal of Psychiatry. It was only after they had exhausted these avenues that they decided to publish “Dianetics: The Evolution of a Science” in the magazine that Campbell himself edited—which tells us a lot about how they had originally wanted their work to be received.
At that point, Campbell was hardly in a position to be objective, but he wanted to present the article to his readers in a way that at least gave the appearance of balance. Accordingly, he proposed that they find a psychiatrist to write a critical treatment of dianetics, presumably to run alongside Hubbard’s piece—but he was doomed to be disappointed in this, too. On December 9, 1949, Hubbard wrote: “In view of the fact that no psychiatrist to date has been able to look at Dianetics and listen long enough to find out the fundamentals, Dianetic explanations being dinned out by his educational efforts about Freud, we took it upon ourselves to compose the rebuttal.” Incredibly, Hubbard and Winter wrote up an entire article, “A Criticism of Dianetics,” that spent over five thousand words laying out the case against the new therapy, credited to the nonexistent “Irving R. Kutzman, M.D.” (In his letter, Hubbard argued that the “M.D.” was justified, since it reflected the contributions of Winter, a general practitioner and endocrinologist from Michigan.) Hubbard claimed that the essay consisted of the verbatim comments of four psychiatrists he had consulted on the subject, including one he had met while living in Savannah, Georgia, and that he had “played them back very carefully,” using the perfect memory that a dianetic “clear” possessed. He also described setting up “a psychiatric demon” to write the piece, which refers to the notion that a clear can deliberately create and break down temporary delusions for his private amusement. To the best of my knowledge, this paper, which I discovered among Campbell’s correspondence, hasn’t been published or discussed anywhere else, and it provides some fascinating insights into Hubbard’s thinking at the time.
The most interesting thing about “A Criticism of Dianetics” is how straightforward it is. Hubbard told Campbell that “it is in no sense an effort to be funny and it is not funny,” and for most of the piece, there’s little trace of burlesque. Notably, it anticipates many of the objections that would be raised against dianetics, including the idea that it merely repackaged existing psychological concepts. As “Kutzman” writes: “Further examination…disclosed that scraps of Dianetics have been known for thousands of years. Except for one or two relatively minor matters, all of them are known to the modern psychologist.” He also observes that Hubbard has only thirteen months of data—which is actually generous, given how little he disclosed about any of his alleged cases—and that there’s no evidence that any perceived improvements will last. It’s only toward the end that the mask begins to slip. “Kutzman” speaks glowingly of “the new technique of trans-orbital leukotomy and the older and more reliable technique of pre-frontal lobotomy,” with which “patients can be treated more swiftly and will be less of a menace to society than heretofore.” He concludes: “By such operations…[the neurosurgeon] can get rid of that part of your personality which is causing all your trouble.” (Even the name “Kutzman,” I suspect, is a bad pun.) The piece dismisses General Semantics and cybernetics, the latter of which it attributes to a “Dr. Werner [sic],” and closes with an odd account of the fictional Kutzman being audited by Hubbard, in which he explains away the prenatal and childhood memories that he recovered as delusions: “I had eaten excessively at supper and…my ulcer had been troubling me for some time.” It ends: “Discoveries not solidly founded in classical psychoanalysis are not likely to be easily accepted by a social world which already comprehends all the basic problems of the human mind.”
In any event, it was never published, and it isn’t clear whether Hubbard or Winter ever thought that it would be. Hubbard wrote to Campbell: “Any article you receive will, I know, run something on this order if written by a psychiatrist…May I invite you to peruse same, not in any misguided spirit of levity, but as a review of the composite and variously confirmed attitudes Dianetics meets in the field of those great men who guide our minds.” No actual rebuttal ever materialized, and dianetics was presented in the pages of Astounding without any critical analysis whatsoever. (Interestingly, Hubbard did contribute to a point/counterpoint discussion on at least two other occasions. One was in the November 1950 issue of Why Magazine, which ran Hubbard’s “The Case For It” with “The Case Against It” by Dr. Oscar Sachs of Mount Sinai, and the other was in the May 1951 installment of Marvel Science Stories, which contained positive articles on dianetics from Hubbard and Theodore Sturgeon and a critical one from Lester del Rey. Campbell could have arranged for something similar in Astounding, if he had really wanted it.) But the fake rebuttal provides a valuable glimpse into a transitional moment in Hubbard’s career. Compared to the author’s later attacks on psychiatry, its tone is restrained, even subtle—which isn’t a description that usually comes to mind for Hubbard’s work. Yet it’s equally clear that he had already given up on reaching mainstream psychologists and psychiatrists, even on the more modest goal of convincing one to compose an objective response. Campbell, for his part, still clung to the hope of obtaining academic or scientific recognition. Much of the tragicomedy of what happened over the next eighteen months emerged from that basic misunderstanding. And the seeds of it are visible here.
Over the last few days, I’ve been doing my best Robert Anton Wilson impression, and, like him, I’ve been seeing hawks everywhere. Science fiction is full of them. The Skylark of Space, which is arguably the story that kicked off the whole business in the first place, was written by E.E. Smith and his friend Lee Hawkins Garby, who is one of those women who seem to have largely fallen out of the history of the genre. Then there’s Hawk Carse, the main character of a series of stories, written for Astounding by editors Harry Bates and Desmond W. Hall, that have become synonymous with bad space opera. And you’ve got John W. Campbell himself, who was described as having “hawklike” features by the fan historian Sam Moskowitz, and who once said of his own appearance: “I haven’t got eyes like a hawk, but the nose might serve.” (Campbell also compared his looks to those of The Shadow and, notably, Hermann Göring, an enthusiastic falconer who loved hawks.) It’s all a diverting game, but it gets at a meaningful point. When Wilson’s wife objected to his obsession with the 23 enigma, pointing out that he was just noticing that one number and ignoring everything else, Wilson could only reply: “Of course.” But he continued to believe in it as an “intuitive signal” that would guide him in useful directions, as well as an illustration of the credo that guided his entire career:
Our models of “reality” are very small and tidy, the universe of experience is huge and untidy, and no model can ever include all the huge untidiness perceived by uncensored consciousness.
We’re living at a time in which the events of the morning can be spun into two contradictory narratives by early afternoon, so it doesn’t seem all that original to observe that you can draw whatever conclusion you like from a sufficiently rich and random corpus of facts. On some level, all too many mental models come down to looking for hawks, noting their appearances, and publishing a paper about the result. And when you’re talking about something like the history of science fiction, which is an exceptionally messy body of data, it’s easy to find the patterns that you want. You could write an overview of the genre that draws a line from A.E. van Vogt to Alfred Bester to Philip K. Dick that would be just as persuasive and consistent as one that ignores them entirely. The same is true of individuals like Campbell and Heinlein, who, like all of us, contained multitudes. It can be hard to reconcile the Campbell who took part in parapsychological experiments at Duke and was editorializing in the thirties about the existence of telepathy in Unknown with the founder of whatever we want to call Campbellian science fiction, just as it can be difficult to make sense of the contradictory aspects of Heinlein’s personality, which is something I haven’t quite managed to do yet. As Borges writes:
Let us greatly simplify, and imagine that a life consists of 13,000 facts. One of the hypothetical biographies would record the series 11, 22, 33…; another, the series 9, 13, 17, 21…; another, the series 3, 12, 21, 30, 39…A history of a man’s dreams is not inconceivable; another, of the organs of his body; another, of the mistakes he made; another, of all the moments when he thought about the Pyramids; another, of his dealings with the night and the dawn.
It’s impossible to keep all those facts in mind at once, so we make up stories about people that allow us to extrapolate the rest, in a kind of lossy compression. The story of Arthur C. Clarke’s encounter with Uri Geller is striking mostly because it doesn’t fit our image of Clarke as the paradigmatic hard science fiction writer, but of course, he was much more than that.
I’ve been focusing on places where science fiction intersects with the mystical because there’s a perfectly valid history to be written about it, and it’s a thread that tends to be overlooked. But perhaps the most instructive paranormal encounter of all happened to none other than Isaac Asimov. In July 1966, Asimov and his family were spending two weeks at a summer house in Concord, Massachusetts. One evening, his daughter ran into the house shouting: “Daddy, Daddy, a flying saucer! Come look!” Here’s how he describes what happened next:
I rushed out of the house to see…It was a cloudless twilight. The sun had set and the sky was a uniform slate gray, still too light for any stars to be visible; and there, hanging in the sky, like an oversize moon, was a perfect featureless metallic circle of something like aluminum.
I was thunderstruck, and dashed back into the house for my glasses, moaning, “Oh no, this can’t happen to me. This can’t happen to me.” I couldn’t bear the thought that I would have to report something that really looked as though it might conceivably be an extraterrestrial starship.
When Asimov went back outside, the object was still there. It slowly began to turn, becoming gradually more elliptical, until the black markings on its side came into view—and it turned out to be the Goodyear blimp. Asimov writes: “I was incredibly relieved!” Years later, his daughter told the New York Times: “He nearly had a heart attack. He thought he saw his career going down the drain.”
It’s a funny story in itself, but let’s compare it to what Geller writes about Clarke: “Clarke was not there just to scoff. He had wanted things to happen. He just wanted to be completely convinced that everything was legitimate.” The italics are mine. Asimov, alone of all the writers I’ve mentioned, never had any interest in the paranormal, and he remained a consistent skeptic throughout his life. As a result, unlike the others, he was very rarely wrong. But I have a hunch that it’s also part of the reason why he sometimes seems like the most limited of all major science fiction writers—undeniably great within a narrow range—while simultaneously the most important to the culture as a whole. Asimov became the most famous writer the genre has ever seen because you could basically trust him: it was his nonfiction, not his fiction, that endeared him to the public, and his status as an explainer depended on maintaining an appearance of unruffled rationality. It allowed him to assume a very different role than Campbell, who manifestly couldn’t be trusted on numerous issues, or even Heinlein, who convinced a lot of people to believe him while alienating countless others. But just as W.B. Yeats drew on his occult beliefs as a sort of battery to drive his poetry, Campbell and Heinlein were able to go places where Asimov politely declined to follow, simply because he had so much invested in not being wrong. Asimov was always able to tell the difference between a hawk and a handsaw, no matter which way the wind was blowing, and in some ways, he’s the best model for most of us to emulate. But it’s hard to write science fiction, or to live in it, without seeing patterns that may or may not be there.
I am but mad north-north-west. When the wind is southerly, I know a hawk from a handsaw.
In the summer of 1974, the Israeli magician and purported psychic Uri Geller arrived at Birkbeck College in Bloomsbury, London, where the physicist David Bohm planned to subject him to a series of tests. Two of the scheduled observers were the writers Arthur Koestler and Arthur C. Clarke, of whom Geller writes in his autobiography:
Arthur Clarke…would be particularly important because he was highly skeptical of anything paranormal. His position was that his books, like 2001 and Childhood’s End, were pure science fiction, and it would be highly unlikely that any of their fantasies would come true, at least in his own lifetime.
Geller met the group in a conference room, where Koestler was cordial, although, Geller says, “I sensed that I really wasn’t getting through to Arthur C. Clarke.” A demonstration seemed to be in order, so Geller asked Clarke to hold one of his own housekeys in one hand, watching it closely to make sure that it wasn’t being swapped out, handled, or subjected to any trickery. Sure enough, the key began to bend. Clarke cried out, in what I like to think was an inadvertent echo of one of his most famous stories: “My God, my eyes are seeing it! It’s bending!”
Geller went on to display his talents in a number of other ways, including forcing a Geiger counter to click at an accelerated rate merely by concentrating on it. (It has been suggested by the skeptic James Randi that Geller had a magnet taped to his leg.) “By that time,” Geller writes, “Arthur Clarke seemed to have lost all his skepticism. He said something like, ‘My God! It’s all coming true! This is what I wrote about in Childhood’s End. I can’t believe it.’” Geller continues:
Clarke was not there just to scoff. He had wanted things to happen. He just wanted to be completely convinced that everything was legitimate. When he saw that it was, he told the others: “Look, the magicians and the journalists who are knocking this better put up or shut up now. Unless they can repeat the same things Geller is doing under the same rigidly controlled conditions, they have nothing further to say.”
Clarke also told him about the plot of Childhood’s End, which Geller evidently hadn’t read: “It involves a UFO that is hovering over the earth and controlling it. He had written the book about twenty years ago. He said that, after being a total skeptic about these things, his mind had really been changed by observing these experiments.”
It’s tempting to think that Geller is exaggerating the extent of the author’s astonishment, but here’s what Clarke himself wrote about it:
Although it’s hard to focus on that hectic and confusing day at Birkbeck College in 1974…I suspect that Uri Geller’s account in My Story is all too accurate…In view of the chaos at the hastily arranged Birkbeck encounter, the phrase “rigidly controlled conditions” is hilarious. But that last sentence is right on target, for [the reproduction of Geller’s effects by stage magicians] is precisely what happened…Nevertheless, I must confess a sneaking fondness for Uri; though he left a trail of bent cutlery and fractured reputations round the world, he provided much-needed entertainment at a troubled and unhappy time.
Geller has largely faded from the public consciousness, but Clarke—who continued to believe long afterward that paranormal phenomena “can’t all be nonsense”—wasn’t the only science fiction writer to be intrigued by him. Robert Anton Wilson, one of my intellectual heroes, discusses him at length in the book Cosmic Trigger, in which he recounts the strange experience of his friend Saul-Paul Sirag. The year before the Birkbeck tests, Sirag was speaking to Geller when he saw the other man’s head turn into a “bird of prey,” like a hawk: “His nose became a beak, and his entire head sprouted feathers, down to his neck and shoulders.” (Sirag was also taking LSD at the time, which Wilson neglects to mention.) The hawk, Sirag thought, was the form assumed by an extraterrestrial intelligence that was allegedly in contact with Geller, and he didn’t know then that it had appeared in the same shape to two other men: a psychic named Ray Stanford and another who had nicknamed it “Horus,” after the Egyptian god with a hawk’s head.
It gets weirder. A few months later, Sirag saw the January 1974 issue of Analog, which featured the story “The Horus Errand” by William E. Cochrane. The cover illustration depicted a man wearing a hawklike helmet, with the name “Stanford” written over his breast pocket. According to one of Sirag’s friends, the occultist Alan Vaughan, the character even looked a little like Ray Stanford—and you can judge the resemblance for yourself. Vaughan was interested enough to write to the artist, the legendary Kelly Freas, for more information. (Freas, incidentally, was close friends with John W. Campbell, to the point where Campbell even asked him to serve as the guardian for his two daughters if anything ever happened to him or his wife.) Freas replied that he had never met Stanford in person and didn’t know what he looked like, but that he had once received a psychic consultation from him by mail, in which Stanford said that “Freas had been some sort of illustrator in a past life in ancient Egypt.” As a result, Freas began to employ Egyptian imagery more consciously in his work, and the design of the helmet on the cover was entirely his own, without any reference to the story. At that point, the whole thing kind of peters out, aside from serving as an example of the kind of absurd coincidence that was so close to Wilson’s heart. But the intersection of Arthur C. Clarke, Uri Geller, and Robert Anton Wilson at that particular moment in time is a striking one, and it points toward an important thread in the history of science fiction that tends to be overlooked or ignored. Tomorrow, I’ll be writing more about what it all means, along with a few other ominous hawks.
Patti Smith once lost her favorite coat. As the singer-songwriter relates in her memoir M Train, it was an old black coat that had been given to her by a friend, off his own back, as a present on her fifty-seventh birthday. It was worn and riddled with holes, but whenever she put it on, she felt like herself. Then she began wearing another coat during a particularly cold winter, and the other one went missing forever:
I called out but heard nothing; crisscrossing wavelengths obscured any hope of feeling out its whereabouts. That’s the way it is sometimes with the hearing and the calling. Abraham heard the demanding call of the Lord. Jane Eyre heard the beseeching cries of Mr. Rochester. But I was deaf to my coat. Most likely it had been carelessly flung on a mound with wheels rolling far away toward the Valley of the Lost.
The Valley of the Lost, as Smith explains, is the “half-dimensional place where things just disappear,” where she imagines her coat “on a random mound being picked over by desperate urchins.” Smith concludes: “The valley is softer, more silent than purgatory, a kind of benevolent holding center.” It’s an image that first appears in Dot and Tot of Merryland by L. Frank Baum, who describes the Valley of Lost Things as “covered with thousands and thousands of pins…A great pyramid of thimbles, of all sizes and made of many different materials. Further on were piles of buttons, of all shapes and colors imaginable, and there were also vast collections of hairpins, rings, and many sorts of jewelry…A mammoth heap of lead pencils, some short and stubby and worn, and others long and almost new.”
I encountered the story of the black coat in the recent wonderful essay “When Things Go Missing” by Kathryn Schulz in The New Yorker, in which she, like Smith, uses the disappearance of physical objects as an entry point for exploring other kinds of loss. After a very funny opening in which she discusses a short period in which she lost her car keys, her wallet, and her friend’s pickup truck, she provides a roundup of the extant advice on finding lost items, including the “suspect” rule that states that most objects are less than two feet from where you think you left them. As it happens, I’m familiar with that rule, which appears in How to Find Lost Objects by Professor Solomon, which I’ve quoted here before. Personally, I like his idea of the Eureka Zone, the eighteen-inch radius that he recommends we measure with a ruler and then explore meticulously. It’s a codification of the practical insight that our mistakes rarely travel far from their point of origin. Joe Armstrong, the creator of the programming language Erlang, makes a similar point in the book Coders at Work:
Then there’s—I don’t know if I read it somewhere or if I invented it myself—Joe’s Law of Debugging, which is that all errors will be plus/minus three statements of the place where you last changed the program…It’s the same everywhere. You fix your car and it goes wrong—it’s the last thing you did. You changed something—you just have to remember what it was. It’s true with everything.
By this logic, the Valley of Lost Things is all around us, and we’re wandering through it with various degrees of incomprehension. As Daniel Boone is supposed to have said: “I have never been lost, but I will admit to being confused for several weeks.”
I’ve been thinking of the loss and retrieval of objects a lot recently, in my unexpected role as biographer and amateur archivist. When I began my research for Astounding, I had to start by recovering countless scraps of information that must once have seemed obvious. Even something as basic as the number and names of John W. Campbell’s children turned out to be hard to verify, and there are equally immense facts, like how he met his first wife, that seem to have vanished into the Valley of Lost Things forever. (Not even his own daughter knows the answer to that last one.) I also have thousands of seemingly minor details that I hope to assemble into some kind of portrait, and they’re vulnerable to loss as well. I’ve spoken before about the challenge of keeping my notes straight, and how I’ve basically resorted to throwing everything into four huge text files and trusting in its searchability. Mostly, it works, but sometimes it doesn’t. During the editing process for my Longreads article on L. Ron Hubbard, a very diligent fact checker sent me questions about more than fifty individual statements, for which I had to dig up citations or revise the language for accuracy. I was able to find just about everything he mentioned, but one detail—about Hubbard’s hair, of all things—was frustratingly elusive, and it had to come out. Similarly, as I work on the book, I’ll occasionally come across a statement in my notes that I can’t find in my sources, and I have no idea where it came from. This has only happened once or twice, but whenever it does, it feels as if I’ve carelessly let something slip back into the Valley of the Lost, and I’ve let my subject down.
But as Proust knew, it’s in the search for lost things, however trivial, that we also find deeper meaning. As a biographer, I’m haunted by Borges’s devastating putdown: “One life of Poe consists of seven hundred octavo pages; the author, fascinated by changes of residence, barely manages one parenthesis for the Maelstrom or the cosmogony of ‘Eureka.’” I’ve often found myself obsessed by exactly those “changes of residence,” but it’s only in the accumulation of such material that the big picture starts to emerge, and the search often means more than the goal. If there’s one thing I’ve learned along the way, it’s that a dead end almost always turns into a doorway. Whenever I’ve had to deal with a frustrating absence of information, it has invariably become a blessing, because it forces me to talk to real people and leave my comfort zone to find what I need, which never would have happened if it had been there for the taking. The most beautiful description I’ve found of the Valley of Lost Objects is in The Book of the Damned by Charles Fort, who calls it the Super-Sargasso Sea:
Derelicts, rubbish, old cargoes from interplanetary wrecks; things cast out into what is called space by convulsions of other planets, things from the times of the Alexanders, Caesars and Napoleons of Mars and Jupiter and Neptune; things raised by this earth’s cyclones: horses and barns and elephants and flies and dodoes, moas, and pterodactyls; leaves from modern trees and leaves of the Carboniferous era—all, however, tending to disintegrate into homogeneous-looking muds or dusts, red or black or yellow—treasure-troves for the paleontologists and for the archaeologists—accumulations of centuries—cyclones of Egypt, Greece, and Assyria—fishes dried and hard, there a short time: others there long enough to putrefy.
As Baum notes, however, it’s mostly pins. The paleontologists, archaeologists, and biographers comb through it, like “desperate urchins,” and pins are usually all we find. But occasionally there’s a jewel. Or even a beloved coat.
A few weeks ago, The New Yorker published a fascinating article by Evan Osnos on the growing survivalist movement among the very rich. Osnos quotes an unnamed source who estimates that fifty percent of Silicon Valley billionaires have some kind of survival plan in place—an estimate that strikes me, if anything, as a little too low. (As one hedge fund manager is supposed to have said: “What’s the percentage chance that Trump is actually a fascist dictator? Maybe it’s low, but the expected value of having an escape hatch is pretty high.”) Osnos also pays a visit to the Survival Condo Project, a former missile silo near Wichita, Kansas, that has been converted into a luxury underground bunker. It includes twelve private apartments, all of which have already been sold, and which prospective residents can decorate to their personal tastes:
We stopped in a condo. Nine-foot ceilings, Wolf range, gas fireplace. “This guy wanted to have a fireplace from his home state”—Connecticut—“so he shipped me the granite,” [developer Larry] Hall said. Another owner, with a home in Bermuda, ordered the walls of his bunker-condo painted in island pastels—orange, green, yellow—but, in close quarters, he found it oppressive. His decorator had to come fix it.
Osnos adds: “The condo walls are fitted with L.E.D. ‘windows’ that show a live video of the prairie above the silo. Owners can opt instead for pine forests or other vistas. One prospective resident from New York City wanted video of Central Park.”
As I read the article’s description of tastefully appointed bunkers with fake windows, it occurred to me that there’s a word that perfectly sums up most forms of survivalism, from the backwoods prepper to the wealthy venture capitalist with a retreat in New Zealand. It’s kitsch. We tend to associate the concept of kitsch with cheapness or tackiness, but on a deeper level, it’s really about providing a superficial emotional release while closing off the possibility of meaningful thought. It offers us sentimental illusions, built on clichés, in the place of real feeling. As the philosopher Roger Scruton has said: “Kitsch is fake art, expressing fake emotions, whose purpose is to deceive the consumer into thinking he feels something deep and serious.” Even more relevant is Milan Kundera’s unforgettable exploration of the subject in The Unbearable Lightness of Being, in which he observes that kitsch is the defining art form of the totalitarian state and concludes: “Kitsch is the absolute denial of shit, in both the literal and the figurative senses of the word; kitsch excludes everything from its purview which is essentially unacceptable in human existence.” This might seem like an odd way to characterize survivalism, which is supposedly a confrontation with the unthinkable, but it’s actually a perfect description. The underlying premise of survivalism is that by stocking up on beans and bullets, you can make your existence after the collapse of civilization more tolerable, even pleasant, in the face of all evidence to the contrary. It’s a denial of shit on the most fundamental level, in which a nuclear war causing the incendiary deaths of millions is sentimentalized into a playground for the competent man. And, like all kitsch, it provides a comforting daydream that allows its adherents to avoid more important questions of collective survival.
Survivalism has often been dismissed as a form of consumerism, an excuse to play Rambo with expensive guns and toys, but it also embodies a perverse form of nostalgia. The survivalist mindset is usually traced back to the Cold War, in which schoolchildren were trained to duck and cover in their classrooms while the government encouraged their parents to build fallout shelters, and it came into its own as a movement during the high inflation and oil shortages of the seventies. In fact, the impulse goes back at least to the days after Pearl Harbor, when an attack on the East or West Coasts seemed like a genuine possibility, leading to blackout drills, volunteer air wardens, and advice on how to prepare for the worst at home. (I have a letter from John W. Campbell to Robert A. Heinlein dated December 12, 1941, in which he talks about turning his basement into a bomb shelter, complete with porch furniture and a lamp powered by a car battery, and coldly evaluates the odds of an air raid being directed at his neighborhood in New Jersey.) It’s significant that World War II was the last conflict in which the prospect of a conventional invasion of the United States—and the practical measures that one would take to prepare for it—was even halfway plausible. Faced with the possibility of the war coming to American shores, households took precautions that were basically reasonable, even if they amounted to a form of wishful thinking. And what’s horrifying is how quickly the same assumptions were channeled toward a nuclear war, an utterly different kind of event that makes nonsense of individual preparations. Survivalism is a type of kitsch that looks back fondly to the times in which a war in the developed world could be fought on a human scale, rather than as an impersonal cataclysm in which the actions of ordinary men and women were rendered wholly meaningless.
Like most kinds of kitsch, survivalism reaches its nadir of tastelessness among the nouveau riche, who have the resources to indulge themselves in ways that most of us can’t afford. (Paul Fussell, in his wonderful book Class, speculated that the American bathroom is the place where the working classes express the fantasy of “What I’d Do If I Were Really Rich,” and you could say much the same thing about a fallout shelter, which is basically a bathroom with cots and canned goods.) And it makes it possible to postpone an uncomfortable confrontation with the real issues. In his article, Osnos interviews one of my heroes, the Whole Earth Catalog founder Stewart Brand, who gets at the heart of the problem:
[Brand] sees risks in escapism. As Americans withdraw into smaller circles of experience, we jeopardize the “larger circle of empathy,” he said, the search for solutions to shared problems. “The easy question is, How do I protect me and mine? The more interesting question is, What if civilization actually manages continuity as well as it has managed it for the past few centuries? What do we do if it just keeps on chugging?”
Survivalism ignores these questions, and it also makes it possible for someone like Peter Thiel, who has the ultimate insurance policy in the form of a New Zealand citizenship, to endorse an experiment in which millions of the less fortunate face the literal loss of their insurance. But we shouldn’t be surprised. When you look at the measures that many survivalists take, you find that they aren’t afraid of the bomb, but of other Americans—the looters, the rioters, and the leeches whom they expect to descend after the grid goes down. There’s nothing wrong with making rational preparations for disaster. But it’s only a short step from survival kits to survival kitsch.
“Jean Renoir once suggested that most true creators have only one idea and spend their lives reworking it,” the director Peter Greenaway said in an interview a quarter of a century ago. “But then very rapidly he added that most people don’t have any ideas at all, so one idea is pretty amazing.” I haven’t been able to find the original version of this quote, but it remains true enough even if we attribute it to Greenaway himself, who might otherwise not seem to have much in common with Renoir. Over time, I’ve come to sympathize with the notion that the important thing for an artist is to have an idea, as long as it’s a good one. This wasn’t always how I felt. In college, I was deeply impressed by Isaiah Berlin’s The Hedgehog and the Fox, in which he drew a famous contrast between writers who are hedgehogs, with one overarching obsession that they pursue for all their lives, and the foxes who move restlessly from one idea to another. (Berlin took his inspiration from a fragment of Archilochus—“The fox knows many things, but the hedgehog knows one big thing”—which may mean nothing more than the fact that the fox, for all its cleverness, is ultimately defeated by the hedgehog’s one good defense of rolling itself into a ball.) My natural loyalty at the time was to such foxes as Shakespeare, Joyce, and Pushkin, as much as I came to love such hedgehogs as Dante and Proust. That’s probably how it should be at twenty, when most of us, as Berlin writes, “lead lives, perform acts, and entertain ideas that are centrifugal rather than centripetal.”
Even at the time, however, I sensed that there was a difference between a truly omnivorous intelligence and the simple inability to make up one’s mind. And as I’ve grown older, I’ve begun to feel more respect for the hedgehogs. (It’s worth noting, by the way, that this classification really only makes sense when applied to exceptional creative geniuses. For the rest of us, identifying as a fox is more likely to become an excuse for a lack of fixed ideas, while a hedgehog’s perspective can become indistinguishable from tunnel vision. I’m neither a hedgehog nor a fox, and neither are you—we’re just trying to muddle along and make sense of the world as best we can.) It takes courage to devote your entire career to a single idea, more so, in many ways, than building it around the act of creation itself. Neither approach is inherently better than the other, but both have their associated pitfalls. When you stick to one idea, you run the obvious risk of being unable to change your mind even if you’re wrong, and of distorting the evidence around you to fit your preconceived notions. But the danger of throwing in your lot with process is no less real. It can result in the sort of empty technical facility that levels all values until they become indistinguishable, and it can lead you astray just as surely as a fixation on a single argument can. These wrong turns may last just a year or two, rather than a lifetime, but a life made up of twenty dead ends in succession isn’t all that different from one spent tunneling for decades in the wrong direction. You wind up repeating the same behaviors in an endless cycle of tiny variations, and if it were a movie, you could call it Hedgehog Day.
I don’t mean to denigrate the acquisition of technical experience, which is a difficult and honorable calling in itself. But it’s necessary to remember that once we become competent in any art, the skills that we’ve acquired are largely fungible, and we become part of a stratum of practitioners who are mostly interchangeable with others at the same level. You can see this most clearly in the movies, which is the medium in which financial and market pressures tend to equalize talent the most ruthlessly. It’s rare to see a film these days that isn’t shot, lit, mixed, and scored with a high degree of proficiency, simply because the competition within those fields is so intense, and based solely on ability, that any movie with a reasonable budget can get excellent craftspeople to fill those roles. It’s in the underlying idea and its execution that films tend to fall short. (There are countless examples, but the one that has been on my mind the most is Batman v. Superman. There’s a perfectly legitimate story that could be told by a film of that title—in which Superman stands for unyielding law and order and Batman represents a more ambiguous form of vigilante justice—but the movie, for whatever reason, declines to use it. Instead, it tries to graft its showdown onto the alien messiah narrative of Man of Steel, which isn’t a bad concept in itself: it just happens to be fundamentally incompatible with the ethical conflict between these two superheroes. Zack Snyder has a great eye, the cast is excellent, and the technical elements are all exquisite. But it’s a movie so misconceived that it could only have been saved by throwing out the entire script and starting again.)
Good ideas, as I’ve often said before, are cheap, but the ones worthy of fueling a great novel or movie or even a lifetime are indescribably precious, and the whole point of developing technical proficiency is to defend those ideas from those who would destroy them, even inadvertently. There’s a reason why screenwriting is the one aspect of filmmaking that doesn’t seem to have advanced at all over the last century. It’s because most studio executives wouldn’t dream of trying to interfere with sound mixing, lighting, or cinematography, but they also believe that their story ideas are as good as anyone else’s. This attitude is particularly stark in the movies, but it’s present in almost any field where ideas are evaluated less on their own merits than on their convenience to the structures that are already in place. We claim to value ideas, but we’re all too willing to drop or ignore uncomfortable truths, or, even more damagingly, to quietly replace them with their counterfeit equivalents. Even a hedgehog needs to be something of a fox to keep an idea alive in the face of all the forces that would oppose it or kill it with indifference. Not every belief is worth fighting or dying for, and history is full of otherwise capable men and women—John W. Campbell among them—who sacrificed their reputations on the altar of an unexamined idea. We need to be willing to change course in light of new evidence and to be as crafty as Odysseus to find our way home. But all that cleverness and tenacity and tactical brilliance become worthless if they aren’t given shape by a clear vision, even if it’s a modest one. Not all of us can be hedgehogs or foxes. But we can’t afford to be ostriches, either.
Maybe if I’m part of that mob, I can help steer it in wise directions.
—Homer Simpson, “Whacking Day”
Yesterday, Tesla founder Elon Musk defended his decision to remain on President Trump’s economic advisory council, stating on Twitter: “My goals are to accelerate the world’s transition to sustainable energy and to help make humanity a multi-planet civilization.” A few weeks earlier, Peter Thiel, another member of the PayPal mafia and one of Trump’s most prominent defenders, said obscurely to the New York Times: “Even if there are aspects of Trump that are retro and that seem to be going back to the past, I think a lot of people want to go back to a past that was futuristic—The Jetsons, Star Trek. They’re dated but futuristic.” Musk and Thiel both tend to speak using the language of science fiction, in part because it’s the idiom that they know best. Musk includes Asimov’s Foundation series among his favorite books, and he’s a recipient of the Heinlein Prize for accomplishments in commercial space activities. Thiel is a major voice in the transhumanist movement, and he’s underwritten so much research into seasteading that I’m indebted to him for practically all the technical background of my novella “The Proving Ground.” As Thiel said to The New Yorker several years ago, in words that have a somewhat different ring today:
One way you can describe the collapse of the idea of the future is the collapse of science fiction. Now it’s either about technology that doesn’t work or about technology that’s used in bad ways. The anthology of the top twenty-five sci-fi stories in 1970 was, like, “Me and my friend the robot went for a walk on the moon,” and in 2008 it was, like, “The galaxy is run by a fundamentalist Islamic confederacy, and there are people who are hunting planets and killing them for fun.”
Despite their shared origins at PayPal, Musk and Thiel aren’t exactly equivalent here: Musk has been open about his misgivings toward Trump’s policy on refugees, while Thiel, who seems to have little choice but to double down, had a spokesperson issue the bland statement: “Peter doesn’t support a religious test, and the administration has not imposed one.” Yet it’s still striking to see two of our most visible futurists staking their legacies on a relationship with Trump, even if they’re coming at it from different angles. As far as Musk is concerned, I don’t agree with his reasoning, but I understand it. His decision to serve in an advisory capacity to Trump seems to come down to his relative weighting of two factors, which aren’t mutually exclusive, but are at least inversely proportional. The first is the possibility that his presence will allow him to give advice that will affect policy decisions to some incremental but nontrivial extent. It’s better, this argument runs, to provide a reasonable voice than to allow Trump to be surrounded by nothing but manipulative Wormtongues. The second possibility is that his involvement with the administration will somehow legitimize or enable its policies, and that this risk far exceeds his slight chance of influencing the outcome. It’s a judgment call, and you can assign whatever values you like to those two scenarios. Musk has clearly thought long and hard about it. But I’ll just say that if it turns out that there’s even the tiniest chance that an occasional meeting with Musk—who will be sharing the table with eighteen others—could possibly outweigh the constant presence of Steve Bannon, a Republican congressional majority, and millions of angry constituents in any meaningful way, I’ll eat my copy of the Foundation trilogy.
Musk’s belief that his presence on the advisory council might have an impact on a president who has zero incentive to appeal to anyone but his own supporters is a form of magical thinking. In a way, though, I’m not surprised, and it’s possible that everything I admire in Musk is inseparable from the delusion that underlies this decision. Whatever you might think of them personally, Musk and Thiel are undoubtedly imaginative. In his New Yorker profile, Thiel blamed many of this country’s problems on “a failure of imagination,” and his nostalgia for vintage science fiction is rooted in a longing for the grand gestures that it embodied: the flying car, the seastead, the space colony. Achieving such goals requires not only vision, but a kind of childlike stubbornness that chases a vanishingly small chance of success in the face of all evidence to the contrary. What makes Musk and Thiel so fascinating is their shared determination to take a fortune built on something as prosaic as an online payments system and to turn it into a spaceship. So far, Musk has been much more successful at translating his dreams into reality, and Thiel’s greatest triumph to date has been the destruction of Gawker Media. But they’ve both seen their gambles pay off to an extent that might mislead them about their ability to make it happen again. It’s this sort of indispensable naïveté that underlies Musk’s faith in his ability to nudge Trump in the right direction, and, on a more sinister level, Thiel’s eagerness to convince us to sign up for a grand experiment with high volatility in both directions—even if most of us don’t have the option of fleeing to New Zealand if it all goes up in flames.
This willingness to submit involuntary test subjects to a hazardous cultural project isn’t unique to science fiction fans. It’s the same attitude that led Norman Mailer, when asked about his support of the killer Jack Henry Abbott, to state: “I’m willing to gamble with a portion of society to save this man’s talent. I am saying that culture is worth a little risk.” (And it’s worth remembering that the man whom Abbott stabbed to death, Richard Adan, was the son of Cuban immigrants.) But when Thiel advised us before the election not to take Trump “literally,” it felt like a symptom of the suspension of disbelief that both science fiction writers and startup founders have to cultivate:
I think a lot of the voters who vote for Trump take Trump seriously but not literally. And so when they hear things like the Muslim comment or the wall comment or things like that, the question is not “Are you going to build a wall like the Great Wall of China?” or, you know, “How exactly are you going to enforce these tests?” What they hear is “We’re going to have a saner, more sensible immigration policy.”
We’ll see how that works out. But in the meantime, the analogy to L. Ron Hubbard is a useful one. Plenty of science fiction writers, including John W. Campbell, A.E. van Vogt, and Theodore Sturgeon, were persuaded by dianetics, in part because it struck them as a risky idea with an unlimited upside. Yet whatever psychological benefits dianetics provided—and it probably wasn’t any less effective than many forms of talk therapy—were far outweighed by the damage that Hubbard and his followers inflicted. It might help to mentally replace the name “Trump” with “Hubbard” whenever an ethical choice needs to be made. What would it mean to take Hubbard “seriously, but not literally?” And if Hubbard asked you to join his board of advisors, would it seem likely that you could have a positive influence, even if it meant adding your name to the advisory council of the Church of Scientology? Or would it make more sense to invest the same energy into helping those whose lives the church was destroying?