Over the last few days, I’ve been doing my best Robert Anton Wilson impression, and, like him, I’ve been seeing hawks everywhere. Science fiction is full of them. The Skylark of Space, which is arguably the story that kicked off the whole business in the first place, was written by E.E. Smith and his friend Lee Hawkins Garby, who is one of those women who seem to have largely fallen out of the history of the genre. Then there’s Hawk Carse, the main character of a series of stories, written for Astounding by editors Harry Bates and Desmond W. Hall, that have become synonymous with bad space opera. And you’ve got John W. Campbell himself, who was described as having “hawklike” features by the fan historian Sam Moskowitz, and who once said of his own appearance: “I haven’t got eyes like a hawk, but the nose might serve.” (Campbell also compared his looks to those of The Shadow and, notably, Hermann Göring, an enthusiastic falconer who loved hawks.) It’s all a diverting game, but it gets at a meaningful point. When Wilson’s wife objected to his obsession with the 23 enigma, pointing out that he was just noticing that one number and ignoring everything else, Wilson could only reply: “Of course.” But he continued to believe in it as an “intuitive signal” that would guide him in useful directions, as well as an illustration of the credo that guided his entire career:
Our models of “reality” are very small and tidy, the universe of experience is huge and untidy, and no model can ever include all the huge untidiness perceived by uncensored consciousness.
We’re living at a time in which the events of the morning can be spun into two contradictory narratives by early afternoon, so it doesn’t seem all that original to observe that you can draw whatever conclusion you like from a sufficiently rich and random corpus of facts. On some level, all too many mental models come down to looking for hawks, noting their appearances, and publishing a paper about the result. And when you’re talking about something like the history of science fiction, which is an exceptionally messy body of data, it’s easy to find the patterns that you want. You could write an overview of the genre that draws a line from A.E. van Vogt to Alfred Bester to Philip K. Dick that would be just as persuasive and consistent as one that ignores them entirely. The same is true of individuals like Campbell and Heinlein, who, like all of us, contained multitudes. It can be hard to reconcile the Campbell who took part in parapsychological experiments at Duke and was editorializing in the thirties about the existence of telepathy in Unknown with the founder of whatever we want to call Campbellian science fiction, just as it can be difficult to make sense of the contradictory aspects of Heinlein’s personality, which is something I haven’t quite managed to do yet. As Borges writes:
Let us greatly simplify, and imagine that a life consists of 13,000 facts. One of the hypothetical biographies would record the series 11, 22, 33…; another, the series 9, 13, 17, 21…; another, the series 3, 12, 21, 30, 39… A history of a man’s dreams is not inconceivable; another, of the organs of his body; another, of the mistakes he made; another, of all the moments when he thought about the Pyramids; another, of his dealings with the night and the dawn.
It’s impossible to keep all those facts in mind at once, so we make up stories about people that allow us to extrapolate the rest, in a kind of lossy compression. The story of Arthur C. Clarke’s encounter with Uri Geller is striking mostly because it doesn’t fit our image of Clarke as the paradigmatic hard science fiction writer, but of course, he was much more than that.
I’ve been focusing on places where science fiction intersects with the mystical because there’s a perfectly valid history to be written about it, and it’s a thread that tends to be overlooked. But perhaps the most instructive paranormal encounter of all happened to none other than Isaac Asimov. In July 1966, Asimov and his family were spending two weeks at a summer house in Concord, Massachusetts. One evening, his daughter ran into the house shouting: “Daddy, Daddy, a flying saucer! Come look!” Here’s how he describes what happened next:
I rushed out of the house to see…It was a cloudless twilight. The sun had set and the sky was a uniform slate gray, still too light for any stars to be visible; and there, hanging in the sky, like an oversize moon, was a perfect featureless metallic circle of something like aluminum.
I was thunderstruck, and dashed back into the house for my glasses, moaning, “Oh no, this can’t happen to me. This can’t happen to me.” I couldn’t bear the thought that I would have to report something that really looked as though it might conceivably be an extraterrestrial starship.
When Asimov went back outside, the object was still there. It slowly began to turn, becoming gradually more elliptical, until the black markings on its side came into view—and it turned out to be the Goodyear blimp. Asimov writes: “I was incredibly relieved!” Years later, his daughter told the New York Times: “He nearly had a heart attack. He thought he saw his career going down the drain.”
It’s a funny story in itself, but let’s compare it to what Geller writes about Clarke: “Clarke was not there just to scoff. He had wanted things to happen. He just wanted to be completely convinced that everything was legitimate.” The italics are mine. Asimov, alone of all the writers I’ve mentioned, never had any interest in the paranormal, and he remained a consistent skeptic throughout his life. As a result, unlike the others, he was very rarely wrong. But I have a hunch that it’s also part of the reason why he sometimes seems like the most limited of all major science fiction writers—undeniably great within a narrow range—while simultaneously the most important to the culture as a whole. Asimov became the most famous writer the genre has ever seen because you could basically trust him: it was his nonfiction, not his fiction, that endeared him to the public, and his status as an explainer depended on maintaining an appearance of unruffled rationality. It allowed him to assume a very different role than Campbell, who manifestly couldn’t be trusted on numerous issues, or even Heinlein, who convinced a lot of people to believe him while alienating countless others. But just as W.B. Yeats drew on his occult beliefs as a sort of battery to drive his poetry, Campbell and Heinlein were able to go places where Asimov politely declined to follow, simply because he had so much invested in not being wrong. Asimov was always able to tell the difference between a hawk and a handsaw, no matter which way the wind was blowing, and in some ways, he’s the best model for most of us to emulate. But it’s hard to write science fiction, or to live in it, without seeing patterns that may or may not be there.
Patti Smith once lost her favorite coat. As the singer-songwriter relates in her memoir M Train, it was an old black coat that had been given to her by a friend, off his own back, as a present on her fifty-seventh birthday. It was worn and riddled with holes, but whenever she put it on, she felt like herself. Then she began wearing another coat during a particularly cold winter, and the other one went missing forever:
I called out but heard nothing; crisscrossing wavelengths obscured any hope of feeling out its whereabouts. That’s the way it is sometimes with the hearing and the calling. Abraham heard the demanding call of the Lord. Jane Eyre heard the beseeching cries of Mr. Rochester. But I was deaf to my coat. Most likely it had been carelessly flung on a mound with wheels rolling far away toward the Valley of the Lost.
The Valley of the Lost, as Smith explains, is the “half-dimensional place where things just disappear,” where she imagines her coat “on a random mound being picked over by desperate urchins.” Smith concludes: “The valley is softer, more silent than purgatory, a kind of benevolent holding center.” It’s an image that first appears in Dot and Tot of Merryland by L. Frank Baum, who describes the Valley of Lost Things as “covered with thousands and thousands of pins…A great pyramid of thimbles, of all sizes and made of many different materials. Further on were piles of buttons, of all shapes and colors imaginable, and there were also vast collections of hairpins, rings, and many sorts of jewelry…A mammoth heap of lead pencils, some short and stubby and worn, and others long and almost new.”
I encountered the story of the black coat in the recent wonderful essay “When Things Go Missing” by Kathryn Schulz in The New Yorker, in which she, like Smith, uses the disappearance of physical objects as an entry point for exploring other kinds of loss. After a very funny opening in which she discusses a short period in which she lost her car keys, her wallet, and her friend’s pickup truck, she provides a roundup of the extant advice on finding lost items, including the “suspect” rule that states that most objects are less than two feet from where you think you left them. As it happens, I’m familiar with that rule, which appears in How to Find Lost Objects by Professor Solomon, which I’ve quoted here before. Personally, I like his idea of the Eureka Zone, the eighteen-inch radius that he recommends we measure with a ruler and then explore meticulously. It’s a codification of the practical insight that our mistakes rarely travel far from their point of origin. Joe Armstrong, the creator of the programming language Erlang, makes a similar point in the book Coders at Work:
Then there’s—I don’t know if I read it somewhere or if I invented it myself—Joe’s Law of Debugging, which is that all errors will be plus/minus three statements of the place where you last changed the program…It’s the same everywhere. You fix your car and it goes wrong—it’s the last thing you did. You changed something—you just have to remember what it was. It’s true with everything.
By this logic, the Valley of Lost Things is all around us, and we’re wandering through it with various degrees of incomprehension. As Daniel Boone is supposed to have said: “I have never been lost, but I will admit to being confused for several weeks.”
I’ve been thinking of the loss and retrieval of objects a lot recently, in my unexpected role as biographer and amateur archivist. When I began my research for Astounding, I had to start by recovering countless scraps of information that must once have seemed obvious. Even something as basic as the number and names of John W. Campbell’s children turned out to be hard to verify, and there are equally immense facts, like how he met his first wife, that seem to have vanished into the Valley of Lost Things forever. (Not even his own daughter knows the answer to that last one.) I also have thousands of seemingly minor details that I hope to assemble into some kind of portrait, and they’re vulnerable to loss as well. I’ve spoken before about the challenge of keeping my notes straight, and how I’ve basically resorted to throwing everything into four huge text files and trusting in its searchability. Mostly, it works, but sometimes it doesn’t. During the editing process for my Longreads article on L. Ron Hubbard, a very diligent fact checker sent me questions about more than fifty individual statements, for which I had to dig up citations or revise the language for accuracy. I was able to find just about everything he mentioned, but one detail—about Hubbard’s hair, of all things—was frustratingly elusive, and it had to come out. Similarly, as I work on the book, I’ll occasionally come across a statement in my notes that I can’t find in my sources, and I have no idea where it came from. This has only happened once or twice, but whenever it does, it feels as if I’ve carelessly let something slip back into the Valley of the Lost, and I’ve let my subject down.
But as Proust knew, it’s in the search for lost things, however trivial, that we also find deeper meaning. As a biographer, I’m haunted by Borges’s devastating putdown: “One life of Poe consists of seven hundred octavo pages; the author, fascinated by changes of residence, barely manages one parenthesis for the Maelstrom or the cosmogony of ‘Eureka.’” I’ve often found myself obsessed by exactly those “changes of residence,” but it’s only in the accumulation of such material that the big picture starts to emerge, and the search often means more than the goal. If there’s one thing I’ve learned along the way, it’s that a dead end almost always turns into a doorway. Whenever I’ve had to deal with a frustrating absence of information, it invariably becomes a blessing, because it forces me to talk to real people and leave my comfort zone to find what I need, which never would have happened if it had been there for the taking. The most beautiful description I’ve found of the Valley of Lost Objects is in The Book of the Damned by Charles Fort, who calls it the Super-Sargasso Sea:
Derelicts, rubbish, old cargoes from interplanetary wrecks; things cast out into what is called space by convulsions of other planets, things from the times of the Alexanders, Caesars and Napoleons of Mars and Jupiter and Neptune; things raised by this earth’s cyclones: horses and barns and elephants and flies and dodoes, moas, and pterodactyls; leaves from modern trees and leaves of the Carboniferous era—all, however, tending to disintegrate into homogeneous-looking muds or dusts, red or black or yellow—treasure-troves for the paleontologists and for the archaeologists—accumulations of centuries—cyclones of Egypt, Greece, and Assyria—fishes dried and hard, there a short time: others there long enough to putrefy.
As Baum notes, however, it’s mostly pins. The paleontologists, archaeologists, and biographers comb through it, like “desperate urchins,” and pins are usually all we find. But occasionally there’s a jewel. Or even a beloved coat.
Lord Rowton…says that he once asked Disraeli what was the most remarkable, the most self-sustained and powerful sentence he knew. Dizzy paused for a moment, and then said, “Sufficient unto the day is the evil thereof.”
—Augustus J.C. Hare, The Story of My Life
Disraeli was a politician and a novelist, which is an unusual combination, and he knew his business. Politics and writing have less to do with each other than a lot of authors might like to believe, and the fact that you can create a compelling world on paper doesn’t mean that you can do the same thing in real life. (One of the hidden themes of Astounding is that the skills that many science fiction writers acquired in organizing ideas on the page turned out to be notably inadequate when it came to getting anything done during World War II.) Yet both disciplines can be equally daunting and infuriating to novices, in large part because they both involve enormously complicated projects—often requiring years of effort—that need to be approached one day at a time. A single day’s work is rarely very satisfying in itself, and you have to cling to the belief that countless invisible actions and compromises will somehow result in something real. It doesn’t always happen, and even if it does, you may never get credit or praise. The ability to deal with the everyday tedium of politics or writing is what separates professionals from amateurs. And in both cases, the greatest accomplishments are usually achieved by freaks who can combine an overarching vision with a finicky obsession with minute particulars. As Eugène-Melchior de Vogüé, who was both a diplomat and literary critic, said of Tolstoy, it requires “a queer combination of the brain of an English chemist with the soul of an Indian Buddhist.”
And if you go into either field without the necessary degree of patience, the results can be unfortunate. If you’re a writer who can’t subordinate yourself to the routine of writing on a daily basis, the most probable outcome is that you’ll never finish your novel. In politics, you end up with something very much like what we’ve all observed over the last few weeks. Regardless of what you might think about the presidential refugee order, its rollout was clearly botched, thanks mostly to a president and staff that want to skip over all the boring parts of governing and get right to the good stuff. And it’s tempting to draw a contrast between the incumbent, who achieved his greatest success on reality television, and his predecessor, a detail-oriented introvert who once thought about becoming a novelist. (I’m also struck, yet again, by the analogy to L. Ron Hubbard. He spent most of his career fantasizing about a life of adventure, but when he finally got into the Navy, he made a series of stupid mistakes—including attacking two nonexistent submarines off the coast of Oregon—that ultimately caused him to be stripped of his command. The pattern repeated itself so many times that it hints at a fundamental aspect of his personality. He was too impatient to deal with the tedious reality of life during wartime, which failed to live up to the version he had dreamed of himself. And while I don’t want to push this too far, it’s hard not to notice the difference between Hubbard, who cranked out his fiction without much regard for quality, and Heinlein, a far more disciplined writer who was able to consciously tame his own natural impatience into a productive role at the Philadelphia Navy Yard.)
Which brings us back to the sentence that impressed Disraeli. It’s easy to interpret it as an admonition not to think about the future, which isn’t quite right. We can start by observing that it comes at the end of what The Five Gospels notes is possibly “the longest connected discourse that can be directly attributed to Jesus.” It’s the one that asks us to consider the birds of the air and the lilies of the field, which, for a lot of us, prompts an immediate flashback to The Life of Brian. (“Consider the lilies?” “Uh, well, the birds, then.” “What birds?” “Any birds.” “Why?” “Well, have they got jobs?”) But whether or not you agree with the argument, it’s worth noticing that the advice to focus on the evils of each day comes only after an extended attempt at defining a larger set of values—what matters, what doesn’t, and what, if anything, you can change by worrying. You’re only in a position to figure out how best to spend your time after you’ve considered the big questions. As the physician William Osler put it:
[My ideal is] to do the day’s work well and not to bother about tomorrow. You may say that is not a satisfactory ideal. It is; and there is not one which the student can carry with him into practice with greater effect. To it more than anything else I owe whatever success I have had—to this power of settling down to the day’s work and trying to do it well to the best of my ability, and letting the future take care of itself.
This has important implications for both writers and politicians, as well as for progressives who wonder how they’ll be able to get through the next twenty-four hours, much less the next four years. When you’re working on any important project, even the most ambitious agenda comes down to what you’re going to do right now. In On Directing Film, David Mamet expresses it rather differently:
Now, you don’t eat a whole turkey, right? You take off the drumstick and you take a bite of the drumstick. Okay. Eventually you get the whole turkey done. It’ll probably get dry before you do, unless you have an incredibly good refrigerator and a very small turkey, but that is outside the scope of this lecture.
A lot of frustration in art, politics, and life in general comes from attempting to swallow the turkey in one bite. Jesus, I think, was aware of the susceptibility of his followers to grandiose but meaningless gestures, which is why he offered up the advice, so easy to remember and so hard to follow, to simultaneously focus on the given day while keeping the kingdom of heaven in mind. Nearly every piece of practical wisdom in any field is about maintaining that double awareness. Fortunately, it goes in both directions: small acts of discipline aid us in grasping the whole, and awareness of the whole tells us what to do in the moment. As R.H. Blyth says of Zen: “That is all religion is: eat when you are hungry, sleep when you are tired.” And don’t try to eat the entire turkey at once.
Maybe if I’m part of that mob, I can help steer it in wise directions.
—Homer Simpson, “Whacking Day”
Yesterday, Tesla founder Elon Musk defended his decision to remain on President Trump’s economic advisory council, stating on Twitter: “My goals are to accelerate the world’s transition to sustainable energy and to help make humanity a multi-planet civilization.” A few weeks earlier, Peter Thiel, another member of the PayPal mafia and one of Trump’s most prominent defenders, said obscurely to the New York Times: “Even if there are aspects of Trump that are retro and that seem to be going back to the past, I think a lot of people want to go back to a past that was futuristic—The Jetsons, Star Trek. They’re dated but futuristic.” Musk and Thiel both tend to speak using the language of science fiction, in part because it’s the idiom that they know best. Musk includes Asimov’s Foundation series among his favorite books, and he’s a recipient of the Heinlein Prize for accomplishments in commercial space activities. Thiel is a major voice in the transhumanist movement, and he’s underwritten so much research into seasteading that I’m indebted to him for practically all the technical background of my novella “The Proving Ground.” As Thiel said to The New Yorker several years ago, in words that have a somewhat different ring today:
One way you can describe the collapse of the idea of the future is the collapse of science fiction. Now it’s either about technology that doesn’t work or about technology that’s used in bad ways. The anthology of the top twenty-five sci-fi stories in 1970 was, like, “Me and my friend the robot went for a walk on the moon,” and in 2008 it was, like, “The galaxy is run by a fundamentalist Islamic confederacy, and there are people who are hunting planets and killing them for fun.”
Despite their shared origins at PayPal, Musk and Thiel aren’t exactly equivalent here: Musk has been open about his misgivings toward Trump’s policy on refugees, while Thiel, who seems to have little choice but to double down, had a spokesperson issue the bland statement: “Peter doesn’t support a religious test, and the administration has not imposed one.” Yet it’s still striking to see two of our most visible futurists staking their legacies on a relationship with Trump, even if they’re coming at it from different angles. As far as Musk is concerned, I don’t agree with his reasoning, but I understand it. His decision to serve in an advisory capacity to Trump seems to come down to his relative weighting of two factors, which aren’t mutually exclusive, but are at least inversely proportional. The first is the possibility that his presence will allow him to give advice that will affect policy decisions to some incremental but nontrivial extent. It’s better, this argument runs, to provide a reasonable voice than to allow Trump to be surrounded by nothing but manipulative Wormtongues. The second possibility is that his involvement with the administration will somehow legitimize or enable its policies, and that this risk far exceeds his slight chance of influencing the outcome. It’s a judgment call, and you can assign whatever values you like to those two scenarios. Musk has clearly thought long and hard about it. But I’ll just say that if it turns out that there’s even the tiniest chance that an occasional meeting with Musk—who will be sharing the table with eighteen others—could possibly outweigh the constant presence of Steve Bannon, a Republican congressional majority, and millions of angry constituents in any meaningful way, I’ll eat my copy of the Foundation trilogy.
Musk’s belief that his presence on the advisory council might have an impact on a president who has zero incentive to appeal to anyone but his own supporters is a form of magical thinking. In a way, though, I’m not surprised, and it’s possible that everything I admire in Musk is inseparable from the delusion that underlies this decision. Whatever you might think of them personally, Musk and Thiel are undoubtedly imaginative. In his New Yorker profile, Thiel blamed many of this country’s problems on “a failure of imagination,” and his nostalgia for vintage science fiction is rooted in a longing for the grand gestures that it embodied: the flying car, the seastead, the space colony. Achieving such goals requires not only vision, but a kind of childlike stubbornness that chases a vanishingly small chance of success in the face of all evidence to the contrary. What makes Musk and Thiel so fascinating is their shared determination to take a fortune built on something as prosaic as an online payments system and to turn it into a spaceship. So far, Musk has been much more successful at translating his dreams into reality, and Thiel’s greatest triumph to date has been the destruction of Gawker Media. But they’ve both seen their gambles pay off to an extent that might mislead them about their ability to make it happen again. It’s this sort of indispensable naïveté that underlies Musk’s faith in his ability to nudge Trump in the right direction, and, on a more sinister level, Thiel’s eagerness to convince us to sign up for a grand experiment with high volatility in both directions—even if most of us don’t have the option of fleeing to New Zealand if it all goes up in flames.
This willingness to submit involuntary test subjects to a hazardous cultural project isn’t unique to science fiction fans. It’s the same attitude that led Norman Mailer, when asked about his support of the killer Jack Henry Abbott, to state: “I’m willing to gamble with a portion of society to save this man’s talent. I am saying that culture is worth a little risk.” (And it’s worth remembering that the man whom Abbott stabbed to death, Richard Adan, was the son of Cuban immigrants.) But when Thiel advised us before the election not to take Trump “literally,” it felt like a symptom of the suspension of disbelief that both science fiction writers and startup founders have to cultivate:
I think a lot of the voters who vote for Trump take Trump seriously but not literally. And so when they hear things like the Muslim comment or the wall comment or things like that, the question is not “Are you going to build a wall like the Great Wall of China?” or, you know, “How exactly are you going to enforce these tests?” What they hear is “We’re going to have a saner, more sensible immigration policy.”
We’ll see how that works out. But in the meantime, the analogy to L. Ron Hubbard is a useful one. Plenty of science fiction writers, including John W. Campbell, A.E. van Vogt, and Theodore Sturgeon, were persuaded by dianetics, in part because it struck them as a risky idea with an unlimited upside. Yet whatever psychological benefits dianetics provided—and it probably wasn’t any less effective than many forms of talk therapy—were far outweighed by the damage that Hubbard and his followers inflicted. It might help to mentally replace the name “Trump” with “Hubbard” whenever an ethical choice needs to be made. What would it mean to take Hubbard “seriously, but not literally?” And if Hubbard asked you to join his board of advisors, would it seem likely that you could have a positive influence, even if it meant adding your name to the advisory council of the Church of Scientology? Or would it make more sense to invest the same energy into helping those whose lives the church was destroying?
I do know that I could form a political platform, for instance, which would encompass the support of the unemployed, the industrialist and the clerk and day laborer all at one and the same time. And enthusiastic support it would be.
Yesterday, my article “Xenu’s Paradox: The Fiction of L. Ron Hubbard and the Making of Scientology” was published on Longreads. I’d been working on this piece, off and on, for the better part of a year, almost from the moment I knew that I was going to be writing the book Astounding. As part of my research, I had to read just about everything Hubbard ever wrote in the genres of science fiction and fantasy, and I ended up working my way through well over a million words of his prose. The essay that emerged from this process was inspired by a simple question. Hubbard clearly didn’t much care for science fiction, and he wrote it primarily for the money. Yet when the time came to invent a founding myth for Scientology, he turned to the conventions of space opera, which had previously played a minimal role in his work. Both his critics and his followers have looked hard at his published stories to find hints of the ideas to come, and there are a few that seem to point toward later developments. (One that frequently gets mentioned is “One Was Stubborn,” in which a fake religious messiah convinces people to believe in the nonexistence of matter so that he can rule the universe. There’s circumstantial evidence, however, that the premise came mostly from John W. Campbell, and that Hubbard wrote it up on the train ride home from New York to Puget Sound.) Still, it’s a tiny fraction of the whole. And such stories by other writers as “The Double Minds” by Campbell, “Lost Legacy” by Robert A. Heinlein, and The World of Null-A by A.E. van Vogt make for more compelling precursors to dianetics than anything Hubbard ever wrote.
The solution to the mystery, as I discuss at length in the article, is that Hubbard tailored his teachings to the small circle of followers he had available after his blowup with Campbell, many of whom were science fiction fans who owed their first exposure to his ideas to magazines like Astounding. And this was only the most dramatic and decisive instance of a pattern that is visible throughout his life. Hubbard is often called a fabulist who compulsively embellished his own accomplishments and turned himself into something more than he really was. But it would be even more accurate to say that Hubbard transformed himself into whatever he thought the people around him wanted him to be. When he was hanging out with members of the Explorers Club, he became a barnstormer, world traveler, and intrepid explorer of the Caribbean and Alaska. Around his fellow authors, he presented himself as the most productive pulp writer of all time, inflating his already impressive word count to a ridiculous extent. During the war, he spun stories about his exploits in battle, claiming to have been repeatedly sunk and wounded, and even a former naval officer as intelligent and experienced as Heinlein evidently took him at his word. Hubbard simply became whatever seemed necessary at the time—as long as he was the most impressive man in the room. It wasn’t until he found himself surrounded by science fiction fans, whom he had mostly avoided until then, that he assumed the form that he would take for the rest of his career. He had never been interested in past lives, but many of his followers were, and the memories that they were “recovering” in their auditing sessions were often colored by the imagery of the stories they had read. And Hubbard responded by coming up with the grandest, most unbelievable space opera saga of them all.
This leaves us with a few important takeaways. The first is that Hubbard, in the early days, was basically harmless. He had invented a colorful background for himself, but he wasn’t alone: Lester del Rey, among others, seems to have engaged in the same kind of self-mythologizing. His first marriage wasn’t a happy one, and he was always something of a blowhard, determined to outshine everyone he met. Yet he also genuinely impressed John and Doña Campbell, Heinlein, Asimov, and many other perceptive men and women. It wasn’t until after the unexpected success of dianetics that he grew convinced of his own infallibility, casting off such inconvenient collaborators as Campbell and Joseph Winter as obstacles to his power. Even after he went off to Wichita with his remaining disciples, he might have become little more than a harmless crank. As he began to feel persecuted by the government and professional organizations, however, his mood curdled into something poisonous, and it happened at a time in which he had undisputed authority over the people around him. It wasn’t a huge kingdom, but because of its isolation—particularly when he was at sea—he was able to exercise a terrifying amount of control over his closest followers. Hubbard didn’t even enjoy it. He had wealth, fame, and the adulation of a handful of true believers, but he grew increasingly paranoid and miserable. At the time of his death, his wrath was restricted to his critics and to anyone within arm’s reach, but he created a culture of oppression that his successor cheerfully extended against current and former members in faraway places, until no one inside or outside the Church of Scientology was safe.
I wrote the first draft of this essay in May of last year, but it’s hard to read it now without thinking of Donald Trump. Like Hubbard, Trump spent much of his life as an annoying but harmless windbag: a relentless self-promoter who constantly inflated his own achievements. As with Hubbard, everything that he did had to be the biggest and best, and until recently, he was too conscious of the value of his own brand to risk alienating too many people at once. After a lifetime of random grabs for attention, however, he latched onto a cause—the birther movement—that was more powerful than anything he had encountered before, and, like Hubbard, he began to focus on the small number of passionate followers he had attracted. His presidential campaign seems to have been conceived as yet another form of brand extension, culminating in the establishment of a Trump Television network. He shaped his message in response to the crowds who came to his rallies, and before long, he was caught in the same kind of cycle: a man who had once believed in nothing but himself gradually came to believe his own words. (Hubbard and Trump have both been described as con men, but the former spent countless hours auditing himself, and Trump no longer seems conscious of his own lies.) Both fell upward into positions of power that exceeded their wildest expectations, and given how Hubbard was transformed, it’s frightening to consider what might come next. During his lifetime, Hubbard had a small handful of active followers; the Church of Scientology has perhaps 30,000, although, like Trump, they’re prone to exaggerate such numbers; Trump has millions. It’s especially telling that both Hubbard and Trump loved Citizen Kane. I love it, too. But both men ended up in their own personal Xanadu. And as I’ve noted before, the only problem with that movie is that our affection for Orson Welles distracts us from the fact that Kane ultimately went crazy.
Over the last few months, there’s been a surprising flurry of film and television activity involving the writers featured in my upcoming book Astounding. SyFy has announced plans to adapt Robert A. Heinlein’s Stranger in a Strange Land as a miniseries, with an imposing creative team that includes Hollywood power broker Scott Rudin and Zodiac screenwriter James Vanderbilt. Columbia is aiming to reboot Starship Troopers with producer Neal H. Moritz of The Fast and the Furious, prompting Paul Verhoeven, the director of the original, to comment: “Going back to the novel would fit very much in a Trump presidency.” The production company Legendary has bought the film and television rights to Dune, which first appeared as a serial edited by John W. Campbell in Analog. Meanwhile, Jonathan Nolan is apparently still attached to an adaptation of Isaac Asimov’s Foundation, although he seems rather busy at the moment. (L. Ron Hubbard remains relatively neglected, unless you want to count Leah Remini’s new show, which the Church of Scientology would probably prefer that you didn’t.) The fact that rights have been purchased and press releases issued doesn’t necessarily mean that anything will happen, of course, although the prospects for Stranger in a Strange Land seem strong. And while it’s possible that I’m simply paying more attention to these announcements now that I’m thinking about these writers all the time, I suspect that there’s something real going on.
So why the sudden surge of interest? The most likely, and also the most heartening, explanation is that we’re experiencing a revival of hard science fiction. Movies like Gravity, Interstellar, The Martian, and Arrival—which I haven’t seen yet—have demonstrated that there’s an audience for films that draw more inspiration from Clarke and Kubrick than from Star Wars. Westworld, whatever else you might think of it, has done much the same on television. And there’s no question that the environment for this kind of story is far more attractive now than it was even ten years ago. For my money, the most encouraging development is the movie Life, a horror thriller set on the International Space Station, which is scheduled to come out next summer. I’m tickled by it because, frankly, it doesn’t look like anything special: the trailer starts promisingly enough, but it ends by feeling very familiar. It might turn out to be better than it looks, but I almost hope that it doesn’t. The best sign that a genre is reaching maturity isn’t a series of singular achievements, but the appearance of works that are content to color inside the lines, consciously evoking the trappings of more visionary movies while remaining squarely focused on the mainstream. A film like Interstellar is always going to be an outlier. What we need are movies like what Life promises to be: a science fiction film of minimal ambition, but a certain amount of skill, and a willingness to copy the most obvious features of its predecessors. That’s when you’ve got a trend.
The other key development is the growing market for prestige dramas on television, which is the logical home for Stranger in a Strange Land and, I think, Dune. It may be the case, as we’ve been told in connection with Star Trek: Discovery, that there isn’t a place for science fiction on a broadcast network, but there’s certainly room for it on cable. Combine this with the increased appetite for hard science fiction on film, and you’ve got precisely the conditions in which smart production companies should be snatching up the rights to Asimov, Heinlein, and the rest. Given the historically rapid rise and fall of such trends, they shouldn’t expect this window to remain open for long. (In a letter to Asimov on February 3, 1939, Frederik Pohl noted the flood of new science fiction magazines on newsstands, and he concluded: “Time is indeed of the essence…Such a condition can’t possibly last forever, and the time to capitalize on it is now; next month may be too late.”) What they’re likely to find, in the end, is that many of these stories are resistant to adaptation, and that they’re better off seeking out original material. There’s a reason that there have been so few movies derived from Heinlein and Asimov, despite the temptation that they’ve always presented. Heinlein, in particular, seems superficially amenable to the movies: he certainly knew how to write action in a way that Asimov couldn’t. But he also liked to spend the second half of a story picking apart the assumptions of the first, after sucking in the reader with an exciting beginning, and if you aren’t going to include the deconstruction, you might as well write something from scratch.
As it happens, the recent spike of action on the adaptation front has coincided with another announcement. Analog, the laboratory in which all these authors were born, is cutting back its production schedule to six double issues every year. This is obviously intended to manage costs, and it’s a reminder of how close to the edge the science fiction digests have always been. (To be fair, the change also coincides with a long overdue update of the magazine’s website, which is very encouraging. If this reflects a true shift from print to online, it’s less a retreat than a necessary recalibration.) It’s easy to contrast the game of pennies being played at the bottom with the expenditure of millions of dollars at the top, but that’s arguably how it has to be. Analog, like Astounding before it, was a machine for generating variations, which needs to be done on the cheap. Most stories are forgotten almost at once, and the few that survive the test of time are the ones that get the lion’s share of resources. All the while, the magazine persists as an indispensable form of research and development—a sort of skunk works that keeps the entire enterprise going. That’s been true since the beginning, and you can see this clearly in the lives of the writers involved. Asimov, Heinlein, Herbert, and their estates became wealthy from their work. Campbell, who more than any other individual was responsible for the rise of modern science fiction, did not. Instead, he remained in his little office, lugging manuscripts in a heavy briefcase twice a week on the train. He was reasonably well off, but not in a way that creates an empire of valuable intellectual property. Instead, he ran the lab. And we can see the results all around us.
If you’re familiar with the science fiction of the golden age, you’ve probably come across the name of Alfred Korzybski, the Polish philosopher whose ideas, known as general semantics, enjoyed a brief but intense vogue with writers and fans in the late thirties and early forties. Korzybski’s work provided the backdrop for A.E. van Vogt’s The World of Null-A and its sequels; Robert A. Heinlein mentions him by name in “Coventry” and “Gulf”; and he pops up in such stories as “The Helping Hand” by Poul Anderson and “Day of the Moron” by H. Beam Piper. He was also an important influence on L. Ron Hubbard and John W. Campbell, although both of them would have denied this. (Campbell liked to say that he was never able to get through Korzybski’s most famous book, Science and Sanity, and it’s fair to say that Hubbard never did, either.) And it isn’t hard to see why the science fiction community found him so intriguing. General semantics was pitched as a kind of mental training program that would enhance the brain’s performance, allowing practitioners to think more clearly and move past the mental blocks that prevent us from accurately perceiving the world around us. Yet Korzybski remains relatively unknown today. Part of this is because Science and Sanity itself is such a daunting work: it’s long, repetitive, sometimes obscure, and often deeply weird. But there’s also a lot there that remains valuable to creative thinkers, if you’re willing to unearth it, and with certain qualifications, it’s still worth seeking out.
We can start with Korzybski’s most famous pronouncement, which a lot of people, including me, have quoted without fully understanding it: “The map is not the territory.” What he’s really talking about is language, which is the mental map that we use to orient ourselves as we make our way through the world. The trouble, he believes, is that the map we’ve inherited offers a flawed picture of reality. Language was developed when mankind was still in its infancy, and the inaccurate ideas that early humans had about the world are preserved in the way that we talk about it. We confuse words with their underlying objects; we take objects in isolation, when in fact they have meaning only in their relationships with others and in their place within an overall structure; we think in categories, when we’re invariably dealing with unique individuals; and we depend on preconceived ideas, rather than experience, to make our decisions. The primary culprit, Korzybski argued, was the word “is,” which always involves either a tautology or a falsehood. When we say that A is B, we’re either saying that it’s equivalent to itself, which doesn’t yield any useful information, or we’re falling prey to one of several fallacies. Either we’re saying that one unique object is identical to another; that an object is the same as the label we’ve given it, or as the overall class to which it belongs; or that it can be described in terms that can be agreed upon by all observers. And a moment’s reflection reveals that none of this is true.
Most of us, I think, will grant these points. What set Korzybski apart is that he attempted to train himself and others to systematically overcome these misconceptions, using a few misleadingly simple tricks. He advised his readers to be skeptical of any form of the verb “to be,” and that whenever they were told that something was the same as something else, they should reflexively respond: “This is not that.” The goal, he said, was “consciousness of abstracting,” or a constant, everyday awareness of how we think using different orders of abstractions. Words are not objects; objects are distinct from the inferences that we make about them; and the gap between the general and the particular means that no statement can be entirely true or false, but only probable in various degrees. To underline these points, Korzybski liked to use a model called the Structural Differential, a teaching aid made out of wooden pegboards and lengths of string that were supposed to symbolize the abstracting process of the human nervous system. Students were told to study and handle it in silence, which would nonverbally remind them of the difference between an event, an object, a label, and the levels of abstraction above it. If this all sounds like an unwieldy way of seeing the world, well, it is. But it’s all in service of what seems to me like a worthwhile goal: to insert a mental pause, or what Korzybski calls “the neurological delay,” before we unthinkingly respond to a statement or situation.
If we think of general semantics as an elaborate system for training us to pause to question our assumptions, it becomes a lot more comprehensible. It’s also worth noting that Korzybski wasn’t opposed to abstraction, which he saw as a necessary tool and shortcut, but to its misuse. The ability of one generation to build on the abstractions developed by its predecessors, which he calls “time-binding,” is what separates human beings from the animals—but only if we’re good at it. Conventional language, which Korzybski associated with the followers of Aristotle, just makes it harder to pass along useful information; his non-Aristotelean approach was pitched as a more accurate reflection of reality, as well as a practical tool for generating and conveying ideas. And it’s probably worth a try. (If you don’t feel like plowing through all eight hundred pages of Science and Sanity, Korzybski advises readers to start with the shorter, self-contained section “The Mechanism of Time-Binding,” which includes most of the book’s practical advice.) Pausing before you think, interrogating your assumptions, and being conscious of your abstractions are all worthwhile goals, but they’re easier said than done: one of Korzybski’s followers later estimated that “about thirty” people had mastered it. You could argue that Korzybski overstated his case, that he exaggerated the benefits of his approach, and that he cloaked it in a lot of unnecessary pseudoscience. But he was right about the basic problem. And it’s easy to wish that we lived in a society in which we responded to all disagreements by pausing, smiling, and asking sincerely: “What do you mean?”