Posts Tagged ‘Robert A. Heinlein’
The Bad Pennies, Part 3
On September 4, 1964, the annual World Science Fiction Convention opened its doors at the Hotel Leamington in Oakland, California. The guests of honor included Leigh Brackett, Edmond Hamilton, and Forrest Ackerman, with Anthony Boucher serving as toastmaster, but the conversation that weekend was dominated by a fan who wasn’t there. After a heated debate, Walter H. Breen had been banned from attendance by the convention committee, for reasons that were outlined by Bill Donaho in a special fanzine issue titled “The Great Breen Boondoggle, or All Berkeley Is Plunged into War.” (The newsletter was privately circulated, and Donaho asked that it not be quoted, but the complete text can be found online.) As Donaho makes abundantly clear, it was common knowledge among local fans that Breen—who had moved to Berkeley in the fifties—was a serial abuser of children. Four cases are described in detail, with allusions to numerous others. I won’t quote them here, but they’re horrifying, both in themselves and in the length of time over which they took place. Donaho writes:
Walter’s recent behavior has been getting many Berkeley parents not just alarmed, but semi-hysterical. If Walter is in the same room with a young boy, he never takes his eyes off the kid. He’ll be semi-abstractedly talking to someone else, but his eyes will be on the boy. And if the kid goes to the bathroom, Walter gets up and follows him in…Knowing Walter I can readily believe that he was completely oblivious to the obvious signs of strong objection. Those who say Walter is a child are right and as a child he is completely oblivious to other people’s desires and wishes unless hit on the head with them.
In the meantime, the prominent fan Alva Rogers said that he felt “great reluctance” to exclude anyone from the community, and he had a novel solution to ensure the safety of his own children whenever Breen came to visit: “He wanted to protect his kids of course, but that the situation was adequately handled at his house by having them barricade themselves in their room.”
But the most unbelievable aspect of the entire story is that no one involved seems to have disputed the facts themselves. What remained a source of controversy—both before the convention and long afterward—was the appropriate action to take, if any, against Breen. As Donaho writes of the reactions of two influential fans, with the name of a child redacted:
They swung between two points of view. “We must protect T—” and “We’re all kooks. Walter is just a little kookier than the rest of us. Where will it all end if we start rejecting people because they’re kooky?” So they swung from on the one hand proposing that if Walter wasn’t to be expelled, then the banning from individual homes should be extended so that club meetings were only held in such homes, and on the other hand calling the whole series of discussions “McCarthyite” and “Star Chamber.” “I don’t want Walter around T—, but if we do such a horrible thing as expelling him, I’ll quit fandom.”
On a more practical level, some of the organizers were concerned that if they banned Breen, they would also lose Marion Zimmer Bradley, who married him shortly before the convention began. When informed of the controversy, Breen explicitly threatened to keep Bradley away, which led to much consternation. Donaho explains: “Many of us like Marion and all this is not a very pleasant welcome to Berkeley for her. Not to mention the fact that it’s going to severely strain her relations with almost all Berkeley fans, since naturally she will defend Walter…We feel that she most probably at least knows about some of Walter’s affairs with adolescent males but believes in tolerance.”
Even after the decision was made, the wounds remained raw, and many writers and fans seemed to frame the entire incident primarily in terms of its impact on the community. In the second volume of his biography In Dialogue With His Century, William H. Patterson quotes a letter that Heinlein sent to Marion Zimmer Bradley on July 15, 1965:
The fan nuisance we were subjected to was nothing like as nasty as the horrible things that were done to you two but it was bad enough that we could get nothing else done during the weeks it went on and utterly spoiled what should have been a pleasant, happy winter. But it resulted in a decision which has made our life much pleasanter already…We have cut off all contact with organized fandom. I regret that we will miss meeting some worthwhile people in the future as the result of this decision. But the percentage of poisonous jerks in the ranks of fans makes the price too high; we’ll find our friends elsewhere.
Patterson, typically, doesn’t scrutinize this statement, moving on immediately to an unrelated story about Jerry Pournelle with the transition: “Fortunately, not all their fan interactions were so unpleasant.” His only discussion of the incident takes the form of a footnote in which he quotes “a good short discussion” of the Breendoggle from a glossary of fan terms: “The sole point fans on both sides can agree upon is that the resulting feud had long-lasting effects [and] tore the fabric of the microcosm beyond repair…The opposing forces retired to lick their wounds and assure themselves that they had been undeniably right while the other side had been unmistakably wrong.”
By now, I hope that we can arrive at another “sole point” of agreement, which is that fandom, in its effort to see itself as a place of inclusiveness for the “kooks,” disastrously failed to protect Breen’s victims. In 1991, Breen was charged with eight felony counts of child molestation and sentenced to ten years in prison—which led in turn to a comparable moment of reckoning in another subculture in which he had played an even more prominent role. Breen was renowned among coin collectors as the author of such reference works as the Complete Encyclopedia of U.S. and Colonial Coins, and the reaction within the world of numismatics was strikingly similar to what had taken place a quarter of a century earlier in Berkeley. As Charles Morgan and Hubert Walker write in an excellent article in CoinWeek:
Even in 1991, with the seeming finality of a confession and a ten-year prison sentence, it was like the sci-fi dustups of the 1960s all over again. This time, however, it was coin collectors and fans of Breen’s numismatic work that came to his defense. One such defender was fellow author John D. Wright, who wrote a letter to Coin World that stated: “My friend Walter Breen has confessed to a sin, and for this, other friends of mine have picked up stones to throw at him.” Wright criticized the American Numismatic Association for revoking Breen’s membership mere weeks after awarding him the Heath Literary Award, saying that while he did not condone Breen’s “lewd and lascivious acts,” he did not see the charge, Breen’s guilty plea or subsequent conviction as “reason for expulsion from the ANA or from any other numismatic organization.”
It’s enough to make you wonder if anything has changed in the last fifty years—but I think that it has. And the best example is the response to a more recent series of revelations about the role of Marion Zimmer Bradley. I’ll dive into this in greater detail tomorrow, in what I hope will be my final post on the subject.
The unknown future
During the writing of Astounding, I often found myself wondering how much control an editor can really have. John W. Campbell is routinely described as the most powerful and influential figure in the history of science fiction, and there’s no doubt that the genre would look entirely different if he were somehow lifted out of the picture. Yet while I never met Campbell, I’ve spoken with quite a few other magazine editors, and my sense is that it can be hard to think about reshaping the field when you’re mostly just concerned with getting out the current issue—or even with your very survival. The financial picture for science fiction magazines may have darkened over the last few decades, but it’s always been a challenge, and it can be difficult to focus on the short term while also keeping your larger objectives in mind. Campbell did it about as well as anyone ever did, but he was limited by the resources at his disposal, and he benefited from a few massive strokes of luck. I don’t think he would have had nearly the same impact if Heinlein hadn’t happened to show up within the first year and a half of his editorship, and you could say much the same of the fortuitous appearance of the artist Hubert Rogers. (By November 1940, Campbell could write: “Rogers has a unique record among science fiction artists: every time he does a cover, the author of the story involved writes him fan mail, and asks me for the cover original.”) In the end, it wasn’t the “astronomical” covers that improved the look of the magazine, but the arrival and development of unexpected talent. And much as Heinlein’s arrival on the scene was something that Campbell never could have anticipated, the advent of Rogers did more to heighten the visual element of Astounding than anything that the editor consciously set out to accomplish.
Campbell, typically, continued to think in terms of actively managing his magazines, and the pictorial results were the most dramatic, not in Astounding, but in Unknown, the legendary fantasy title that he launched in 1939. (His other great effort to tailor a magazine to his personal specifications involved the nonfiction Air Trails, which is a subject for another post.) Unlike Astounding, Unknown was a project that Campbell could develop from scratch, and he didn’t have to deal with precedents established by earlier editors. The resulting stories were palpably different from most of the other fantasy fiction of the time. (Algis Budrys, who calls Campbell “the great rationalizer of supposition,” memorably writes that the magazine was “more interested in the thermodynamics and contract law of a deal with the devil than with just what a ‘soul’ might actually be.”) But this also extended to the visual side. Campbell told his friend Robert Swisher that all elements, including page size, were discussed “carefully and without prejudice” with his publisher, and for the first year and a half, Unknown featured some of the most striking art that the genre had ever seen, with beautiful covers by H.W. Scott, Manuel Rey Isip, Modest Stein, Graves Gladney, and Edd Cartier. But the editor remained dissatisfied, and on February 29, 1940, he informed Swisher of a startling decision:
We’re gonna pull a trick on Unknown presently. Probably the July issue will have no picture on the cover—just type. We have hopes of chiseling it outta the general pulp group, and having a few new readers mistake it for a different type. It isn’t straight pulp, and as such runs into difficulties because the adult type readers who might like it don’t examine the pulp racks, while the pulp-type reader in general wouldn’t get much out of it.
The italics are mine. Campbell had tried to appeal to “the adult type readers” by running more refined covers on Astounding, and with Unknown, his solution was to essentially eliminate the cover entirely. Writing to readers of the June 1940 issue to explain the change, the editor did his best to spin it as a reflection of the magazine’s special qualities:
Unknown simply is not an ordinary magazine. It does not, generally speaking, appeal to the usual audience of the standard-type magazine. We have decided on this experimental issue, because of this, in an effort to determine what other types of newsstand buyers might be attracted by a somewhat different approach.
In the next paragraph, Campbell ventured a curious argument: “To the nonreader of fantasy, to one who does not understand the attitude and philosophy of Unknown, the covers may appear simply monstrous rather than the semicaricatures they are. They are not, and have not been intended as, illustrations, but as expressive of a general theme.” Frankly, I doubt that many readers saw the covers as anything but straight illustrations, and in the following sentence, the editor made an assertion that seems even less plausible: “To those who know and enjoy Unknown, the cover, like any other wrapper, is comparatively unimportant.”
In a separate note, Campbell asked for feedback on the change, but he also made his own feelings clear: “We’re going to ask your newsdealer to display [Unknown] with magazines of general class—not with the newsprints. And we’re asking you—do you like the more dignified cover? Isn’t it much more fitting for a magazine containing such stories?” A few months later, in the October 1940 issue, a number of responses were published in the letters column. The reaction was mostly favorable—although Campbell may well have selected letters that supported his own views—but reasonable objections were also raised. One reader wrote: “How can you hope to win new readers by a different cover if the inside illustrations are as monstrous, if not more so, than have any previous covers ever been? If you are trying to be more dignified in your illustrations, be consistent throughout the magazine.” On a more practical level, another fan mentioned one possible shortcoming of the new approach: “The July issue was practically invisible among the other publications, and I had to hunt somewhat before I located it.” But it was too late. Unknown may have been the greatest pulp magazine of all time, but along the way, it rejected the entire raison d’être of the pulp magazine cover itself. And while I can’t speak for the readers of the time, I can say that it saddens me personally. Whenever I’m browsing through a box of old pulps, I feel a pang of disappointment when I come across one of the later Unknown covers, and I can only imagine what someone like Cartier might have done with Heinlein’s The Unpleasant Profession of Jonathan Hoag, or even Hubbard’s Fear. Unknown ran for another three years with its plain cover, which is about the same amount of time that it took for Astounding to reach its visual peak. It might have evolved into something equally wonderful, but we’ll never know—because Campbell decided that he had to kill the cover in order to save it.
The planetary chauvinists
In a profile in the latest issue of Wired, the journalist Steven Levy speaks at length with Jeff Bezos, the world’s richest man, about his dream of sending humans permanently into space. Levy was offered a rare glimpse into the operations of the Amazon founder’s spaceflight company, Blue Origin, but it came with one condition: “I had to promise that, before I interviewed [Bezos] about his long-term plans, I would watch a newly unearthed 1975 PBS program.” He continues:
So one afternoon, I opened my laptop and clicked on the link Bezos had sent me. Suddenly I was thrust back into the predigital world, where viewers had more fingers than channels and remote shopping hadn’t advanced past the Sears catalog. In lo-res monochrome, a host in suit and tie interviews the writer Isaac Asimov and physicist Gerard O’Neill, wearing a cool, wide-lapeled blazer and white turtleneck. To the amusement of the host, O’Neill describes a future where some ninety percent of humans live in space stations in distant orbits of the blue planet. For most of us, Earth would be our homeland but not our home. We’d use it for R&R, visiting it as we would a national park. Then we’d return to the cosmos, where humanity would be thriving like never before. Asimov, agreeing entirely, called resistance to the concept “planetary chauvinism.”
The discussion, which was conducted by Harold Hayes, was evidently lost for years before being dug up in a storage locker by the Space Studies Institute, the organization that O’Neill founded in the late seventies. You can view the entire program here, and it’s well worth watching. At one point, Asimov, whom Hayes describes as “our favorite jack of all sciences,” alludes briefly to my favorite science fiction concept, the gravity gauge: “Well once you land on the moon, you know the moon is a lot easier to get away from than the earth is. The earth has a gravity six times as strong as that of the moon at the surface.” (Asimov must have known all of this without having to think twice, but I’d like to believe that he was also reminded of it by The Moon Is a Harsh Mistress.) And in response to the question of whether he had ever written about space colonies in his own fiction, Asimov gives his “legendary” response:
Nobody did, really, because we’ve all been planet chauvinists. We’ve all believed people should live on the surface of a planet, of a world. I’ve had colonies on the moon—so have a hundred other science fiction writers. The closest I came to a manufactured world in free space was to suggest that we go out to the asteroid belt and hollow out the asteroids, and make ships out of them [in the novelette “The Martian Way”]. It never occurred to me to bring the material from the asteroids in towards the earth, where conditions are pleasanter, and build the worlds there.
Of course, it isn’t entirely accurate that science fiction writers had “all” been planet chauvinists—Heinlein had explored similar concepts in such stories as “Waldo” and “Delilah and the Space Rigger,” and I’m sure there are other examples. (Asimov had even discussed the idea ten years earlier in the essay “There’s No Place Like Spome,” which he later described as “an anticipation, in a fumbling sort of way, of Gerard O’Neill’s concept of space settlements.”) And while there’s no doubt that O’Neill’s notion of a permanent settlement in space was genuinely revolutionary, there’s also a sense in which Asimov was the last writer you’d expect to come up with it. Asimov was a notorious acrophobe and claustrophile who hated flying and suffered a panic attack on the roller coaster at Coney Island. When he was younger, he loved enclosed spaces, like the kitchen at the back of his father’s candy store, and he daydreamed about running a newsstand on the subway, where he could put up the shutters and just read magazines. Years later, he refused to go out onto the balcony of his apartment, which overlooked Central Park, because of his fear of heights, and he was always happiest while typing away in his office. And his personal preferences were visible in the stories that he wrote. The theme of an enclosed or underground city appears in such stories as The Caves of Steel, while The Naked Sun is basically a novel about agoraphobia. In his interview with Hayes, Asimov speculates that space colonies will attract people looking for an escape from earth: “Once you do realize that you have a kind of life there which represents a security and a pleasantness that you no longer have on earth, the difficulty will be not in getting people to go but in making them line up in orderly fashion.” But he never would have gone there voluntarily.
Yet this is a revealing point in itself. Unlike Heinlein, who dreamed of buying a commercial ticket to the moon, Asimov never wanted to go into space. He just wanted to write about it, and he was better—or at least more successful—at this than just about anybody else. (In his memoirs, Asimov recalls taping the show with O’Neill on January 7, 1975, adding that he was “a little restless” because he was worried about being late for dinner with Lester and Judy-Lynn del Rey. After he was done, he hailed a cab. On the road, as they were making the usual small talk, the driver revealed that he had once wanted to be a writer. Asimov, who hadn’t mentioned his name, told him consolingly that no one could make a living as a writer anyway. The driver responded: “Isaac Asimov does.”) And the comparison with Bezos is an enlightening one. Bezos obviously built his career on books, and he was a voracious reader of science fiction in his youth, as Levy notes: “[Bezos’s] grandfather—a former top Defense Department official—introduced him to the extensive collection of science fiction at the town library. He devoured the books, gravitating especially to Robert Heinlein and other classic writers who explored the cosmos in their tales.” With his unimaginable wealth, Bezos is in a position remarkably close to that of the protagonist in such stories, with the ability to “painlessly siphon off a billion dollars every year to fund his boyhood dream.” But the ideas that he has the money to put into practice were originated by writers and other thinkers whose minds went in unusual directions precisely because they didn’t have the resources, financial or otherwise, to do it personally. Vast wealth can generate a chauvinism of its own, and the really innovative ideas tend to come from unexpected places. This was true of Asimov, as well as O’Neill, whose work was affiliated in fascinating ways with the world of Stewart Brand and the Whole Earth Catalog.
I’ll have more to say about O’Neill—and Bezos—tomorrow.
The Rover Boys in the Air
On September 3, 1981, a man who had recently turned seventy reminisced in a letter to a librarian about his favorite childhood books, which he had read in his youth in Dixon, Illinois:
I, of course, read all the books that a boy that age would like—The Rover Boys; Frank Merriwell at Yale; Horatio Alger. I discovered Edgar Rice Burroughs and read all the Tarzan books. I am amazed at how few people I meet today know that Burroughs also provided an introduction to science fiction with John Carter of Mars and the other books that he wrote about John Carter and his frequent trips to the strange kingdoms to be found on the planet Mars.
At almost exactly the same time, a boy in Kansas City was working his way through a similar shelf of titles, as described by one of his biographers: “Like all his friends, he read the Rover Boys series and all the Horatio Alger books…[and] Edgar Rice Burroughs’s wonderful and exotic Mars books.” And a slightly younger member of the same generation would read many of the same novels while growing up in Brooklyn, as he recalled in his memoirs: “Most important of all, at least to me, were The Rover Boys. There were three of them—Dick, Tom, and Sam—with Tom, the middle one, always described as ‘fun-loving.’”
The first youngster in question was Ronald Reagan; the second was Robert A. Heinlein; and the third was Isaac Asimov. There’s no question that all three men grew up reading many of the same adventure stories as their contemporaries, and Reagan’s apparent fondness for science fiction has inspired a fair amount of speculation. In a recent article on Slate, Kevin Bankston retells the famous story of how WarGames inspired the president to ask his advisors about the likelihood of such an incident occurring for real, concluding that it was “just one example of how science fiction influenced his administration and his life.” The Day the Earth Stood Still, which was adapted from a story by Harry Bates that originally appeared in Astounding, allegedly influenced Reagan’s interest in the potential effect of extraterrestrial contact on global politics, which he once brought up with Gorbachev. And in the novelistic biography Dutch, Edmund Morris—or his narrative surrogate—ruminates at length on the possible origins of the Strategic Defense Initiative:
Long before that, indeed, [Reagan] could remember the warring empyrean of his favorite boyhood novel, Edgar Rice Burroughs’s Princess of Mars. I keep a copy on my desk: just to flick through it is to encounter five-foot-thick polished glass domes over cities, heaven-filling salvos, impregnable walls of carborundum, forts, and “manufactories” that only one man with a key can enter. The book’s last chapter is particularly imaginative, dominated by the magnificent symbol of a civilization dying for lack of air.
For obvious marketing reasons, I’d love to be able to draw a direct line between science fiction and the Reagan administration. Yet it’s also tempting to read a greater significance into these sorts of connections than they actually deserve. The story of science fiction’s role in the Strategic Defense Initiative has been told countless times, but usually by the writers themselves, and it isn’t clear what impact it truly had. (The definitive book on the subject, Way Out There in the Blue by Frances FitzGerald, doesn’t mention any authors at all by name, and it refers only once, in passing, to a group of advisors that included “a science fiction writer.” And I suspect that the most accurate description of their involvement appears in a speech delivered by Greg Bear: “Science fiction writers helped the rocket scientists elucidate their vision and clarified it.”) Reagan’s interest in science fiction seems less like a fundamental part of his personality than like a single aspect of a vision that was shaped profoundly by the popular culture of his young adulthood. The fact that Reagan, Heinlein, and Asimov devoured many of the same books only tells me that this was what a lot of kids were reading in the twenties and thirties—although perhaps only the exceptionally imaginative would try to live their lives as an extension of those stories. If these influences were genuinely meaningful, we should also be talking about the Rover Boys, a series “for young Americans” about three brothers at boarding school that has now been almost entirely forgotten. And if we’re more inclined to emphasize the science fiction side for Reagan, it’s because this is the only genre that dares to make such grandiose claims for itself.
In fact, the real story here isn’t about science fiction, but about Reagan’s gift for appropriating the language of mainstream culture in general. He was equally happy to quote Dirty Harry or Back to the Future, and he may not even have bothered to distinguish between his sources. In Way Out There in the Blue, FitzGerald brilliantly unpacks a set of unscripted remarks that Reagan made to reporters on March 24, 1983, in which he spoke of the need of rendering nuclear weapons “obsolete”:
There is a part of a line from the movie Torn Curtain about making missiles “obsolete.” What many inferred from the phrase was that Reagan believed what he had once seen in a science fiction movie. But to look at the explanation as a whole is to see that he was following a train of thought—or simply a trail of applause lines—from one reassuring speech to another and then appropriating a dramatic phrase, whose origin he may or may not have remembered, for his peroration.
Take out the word “reassuring,” and we have a frightening approximation of our current president, whose inner life is shaped in real time by what he sees on television. But we might feel differently if those roving imaginations had been channeled by chance along different lines—like a serious engagement with climate change. It might just as well have gone that way, but it didn’t, and we’re still dealing with the consequences. As Greg Bear asks: “Do you want your presidents to be smart? Do you want them to be dreamers? Or do you want them to be lucky?”
Bester of both worlds
Note: To celebrate the World Science Fiction Convention this week in San Jose, I’m republishing a few of my favorite pieces on various aspects of the genre. This post originally appeared, in a slightly different form, on August 11, 2017.
In 1963, the editor Robert P. Mills put together an anthology titled The Worlds of Science Fiction, for which fifteen writers—including Isaac Asimov, Robert A. Heinlein, and Ray Bradbury—were invited to contribute one of their favorite stories. Mills also approached Alfred Bester, the author of the classic novels The Demolished Man and The Stars My Destination, who declined to provide a selection, explaining: “I don’t like any of [my stories]. They’re all disappointments to me. This is why I rarely reread my old manuscripts; they make me sick. And when, occasionally, I come across a touch that pleases me, I’m convinced that I never wrote it—I believe that an editor added it.” When Mills asked if he could pick a story that at least gave him pleasure in the act of writing it, Bester responded:
No. A writer is extremely schizophrenic; he is both author and critic. As an author he may have moments of happiness while he’s creating, but as a critic he is indifferent to his happiness. It cannot influence his merciless appraisal of his work. But there’s an even more important reason. The joy you derive from creating a piece of work has no relationship to the intrinsic value of the work. It’s a truism on Broadway that when an actor particularly enjoys the performance he gives, it’s usually his worst. It’s also true that the story which gives the author the most pain is often his best.
Bester finally obliged with the essay “My Private World of Science Fiction,” which Mills printed as an epilogue. Its centerpiece is a collection of two dozen ideas that Bester plucked from his commonplace book, which he describes as “the heavy leather-bound journal that I’ve been keeping for twenty years.” These scraps and fragments, Bester explains, are his best works, and they inevitably disappoint him when they’re turned into stories. And the bits and pieces that he provides are often dazzling in their suggestiveness: “A circulating brain library in a Womrath’s of the future, where you can rent a brain for any purpose.” “A story about weather smugglers.” “There must be a place where you can go to remember all the things that never happened to you.” And my personal favorite:
The Lefthanded Killer: a tour de force about a murder which (we tell the reader immediately) was committed by a lefthanded killer. But we show, directly or indirectly, that every character is righthanded. The story starts with, “I am the murderer,” and then goes on to relate the mystery, never revealing who the narrator is…The final twist: killer-narrator turns out to be an unborn baby, the survivor of an original pair of twins. The lefthand member killed his righthand brother in the womb. The entire motivation for the strange events that follow is the desire to conceal the crime. The killer is a fantastic and brilliant monster who does not realize that the murder would have gone unnoticed.
Every writer has a collection of story fragments like this—mine takes up a page in a notebook of my own—but few ever publish theirs, and it’s fascinating to wonder at Bester’s motivations for making his unused ideas public. I can think of three possible reasons. The first, and perhaps the most plausible, is that he knew that many of these premises were more interesting in capsule form than when written out as full stories, and so, in acknowledgement of what I’ve called the Borges test, he simply delivered them that way. (He also notes that ideas are cheap: “The idea itself is relatively unimportant; it’s the writer who develops it that makes the big difference…It is only the amateur who worries about ‘his idea being stolen.'”) Another possibility is that he wanted to convey how stray thoughts in a journal like this can mingle and combine in surprising ways, which is one of the high points of any writer’s life:
That’s the wonder of the Commonplace Book; the curious way an incomprehensible note made in 1950 can combine with a vague entry made in 1960 to produce a story in 1970. In A Life in the Day of a Writer, perhaps the most brilliant portrait of an author in action ever painted, Tess Slesinger wrote: “He rediscovered the miracle of something on page twelve tying up with something on page seven which he had not understood when he wrote it…”
Bester concludes of his ideas: “They’ll cross-pollinate, something totally unforeseen will emerge, and then, alas, I’ll have to write the story and destroy it. This is why your best is always what you haven’t written yet.”
Yet the real explanation, I suspect, lies in that line “I’ll have to write the story,” which gets at the heart of Bester’s remarkable career. In reality, Bester is all but unique among major science fiction writers in that he never seemed to “have to write” anything. He contributed short stories to Astounding for a few heady years before World War II, then disappeared for the next decade to do notable work in comic books, radio, and television. Even after he returned, there was a sense that science fiction only occupied part of his attention. He published a mainstream novel, wrote television scripts, and worked as a travel writer and senior editor for the magazine Holiday, and the fact that he had so many ideas that he never used seems to reflect the fact that he only turned to science fiction when he really felt like it. (Bester should have been an ideal writer for John W. Campbell, who, if he could have managed it, would have loved a circle of writers that consisted solely of professional men in other fields who wrote on the side—they were more likely to take his ideas and rewrite to order than either full-time pulp authors or hardcore science fiction fans. And the story of how Campbell alienated Bester over the course of a single meeting is one of the most striking anecdotes from the whole history of the genre.) Most professional writers couldn’t afford to allow their good ideas to go to waste, but Bester was willing to let them go, both because he had other sources of income and because he knew that there was plenty more where that came from. I still think of Heinlein as the genre’s indispensable writer, but Bester might be a better role model, if only because he seemed to understand, rightly, that there were realms to explore beyond the worlds of science fiction.
The science fiction sieve
Note: To celebrate the World Science Fiction Convention this week in San Jose, I’m republishing a few of my favorite pieces on various aspects of the genre. This post originally appeared, in a slightly different form, on June 28, 2017.
In a remarkably lucid essay published last year in Nautilus, the mathematician Noson S. Yanofsky elegantly defines the self-imposed limitations of science. Yanofsky points out that scientists deliberately take a subset of phenomena—characterized mostly by how amenable it is to their chosen methods—for their field of study, while leaving the rest to the social sciences or humanities. (As Paul Valéry put it: “Science means simply the aggregate of all the recipes that are always successful. All the rest is literature.”) He visualizes science as a kind of sieve, which lets in some subjects while excluding others:
The reason why we see the structure we do is that scientists act like a sieve and focus only on those phenomena that have structure and are predictable. They do not take into account all phenomena; rather, they select those phenomena they can deal with…Scientists have classified the general textures and heights of different types of clouds, but, in general, are not at all interested in the exact shape of a cloud. Although the shape is a physical phenomenon, scientists don’t even attempt to study it. Science does not study all physical phenomena. Rather, science studies predictable physical phenomena. It is almost a tautology: science predicts predictable phenomena.
Yanofsky groups these criteria under the general heading “symmetry,” and he concludes: “The physicist must be a sieve and study those phenomena that possess symmetry and allow those that do not possess symmetry to slip through her fingers.” I won’t get into the rest of his argument, which draws an ingenious analogy from mathematics, except to say that it’s worth reading in its entirety. But I think his thesis is sound, and it ties into many issues that I’ve discussed here before, particularly about the uncomfortable status of the social sciences.
If you’re trying to catch this process in action, though, the trouble is that the boundaries of science aren’t determined by a general vote, or even by the work of isolated geniuses, but emerge gradually and invisibly from the contributions of countless individuals. But if I were a historian of science, I’d take a close look at the development of science fiction, in which an analogous evolution occurred in plain sight over a relatively short period of time. You can see it clearly in the career of the editor John W. Campbell, who remained skeptical of the social sciences, but whose signal contribution to the genre may have been to put them at its center. And the “sieve” that he ended up using is revealing in itself. A significant turning point was the arrival on his desk of Robert A. Heinlein’s landmark novella “If This Goes On—,” of which Campbell wrote in 1939:
Robert Heinlein, in his “If This Goes On—,” presents a civilization in which mob psychology and propaganda have become sciences. They aren’t, yet…Psychology isn’t a science, so long as a trained psychologist does—and must—say “there’s no telling how an individual man will react to a given stimulus.” Properly developed, psychology could determine that.
As an editor, Campbell began to impose psychological and sociological elements onto stories where they didn’t always fit, much as he would gratuitously insert references to uranium-235 during World War II. He irritated Isaac Asimov, for instance, by asking him to add a section to the story “Homo Sol” about “certain distinctions between the emotional reactions of Africans and Asians as compared with those of Americans and Europeans.” Asimov saw this as an early sign of Campbell’s racial views, and perhaps it was, but it pointed just as convincingly to his interest in mass psychology.
And readers took notice at a surprisingly early stage. In the November 1940 issue of Astounding, a fan named Lynn Bridges presciently wrote:
The Astounding Science Fiction of the past year has brought forth a new type of story, best described, perhaps, as “sociological” science fiction. The spaceships…are still present, but more emphasis has been placed on the one item which will have more to do with shaping the future than anything else, that strange race of bipeds known as man…Both Asimov [in “Homo Sol”] and Heinlein [in “If This Goes On—”] treat psychology as an exact science, usable in formulas, certain in results. I feel called upon to protest. Its very nature prevents psychology from achieving the exactness of mathematics…The moment men stop varying and the psychologist can say definitely that all men are alike psychologically, progress stops and the world becomes a very boring Utopia.
Campbell responded: “Psychology could improve a lot, though, without becoming dangerously oppressive!” Just two months later, in a letter in the January 1941 issue, Asimov referred to the prospect of “mathematical psychology”: “If we can understand Einstein and Hitler down to the mathematical whys and wherefores, we might try to boost along a few Einsteins and cut down on a few Hitlers, and progress might really get going.” Campbell replied much as before: “Psychology isn’t an exact science—but it can be.” Implicit in the whole discussion was the question of whether psychology could be tackled using the same hard-headed engineering approach that had worked for the genre before. And as I’ve written elsewhere, the evolution of Campbellian science fiction is largely one of writers who were so good at lecturing us about engineering that we barely even noticed when they moved on to sociology.
But what interests me now is the form it took in Astounding, which looks a lot like the sieve that Yanofsky describes. Campbell may have hoped that psychology would learn how to predict “how an individual man will react to a given stimulus,” but he seems to have sensed that this wouldn’t be credible or interesting in fiction. Instead, he turned to two subsets of psychology that were more suited to the narrative tools at his disposal. One was the treatment of simplified forms of human personality—say, for instance, in a robot. The other was the treatment of large masses of individuals. Crucially, neither was necessarily more possible than predicting the behavior of individuals, but they had the advantage that they could be more plausibly treated in fiction. Campbell’s preferred instrument at the time was Asimov, who was reliable, willing to take instruction, and geographically close enough to talk over ideas in person. As a result, Asimov’s most famous stories can be read as a series of experiments to see how the social sciences could be legitimately explored by the genre. The Three Laws of Robotics, which Campbell was the first to explicitly formulate, are really a simplified model of human behavior: Campbell later wrote that they were essentially “the basic desires of a small child, with the exception that the motivation of desire for love has been properly omitted.” At the other end of the spectrum, psychohistory looks for laws that can be applied on a mass scale, and it’s central not only to the Foundation series but even to “Nightfall,” with its theme of the cyclical rise and fall of civilizations. In science, you could draw a parallel to artificial intelligence and macroeconomics, which represent two extremes at which qualities of symmetry and predictability seem to enter the realm of psychology. In between, there’s a vast terrain of human experience that Campbell was never quite able to tackle, and that impulse ended up being channeled into dianetics.
But much as science can be defined as everything that makes it through the sieve of symmetry, Campbell had a sieve of his own, and the result was the science fiction of the golden age.
The living wage
Over the last few years, we’ve observed an unexpected resurgence of interest in the idea of a universal basic income. The underlying notion is straightforward enough, as Nathan Heller summarizes it in a recent article in The New Yorker:
A universal basic income, or U.B.I., is a fixed income that every adult—rich or poor, working or idle—automatically receives from government. Unlike today’s means-tested or earned benefits, payments are usually the same size, and arrive without request…In the U.S., its supporters generally propose a figure somewhere around a thousand dollars a month: enough to live on—somewhere in America, at least—but not nearly enough to live on well.
This concept—which Heller characterizes as “a government check to boost good times or to guard against starvation in bad ones”—has been around for a long time. As one possible explanation for its current revival, Heller suggests that it amounts to “a futurist reply to the darker side of technological efficiency” as robots replace existing jobs, with prominent proponents including Elon Musk and Richard Branson. And while the present political climate in America may seem unfavorable toward such proposals, it may not stay that way forever. As Annie Lowrey, the author of the new book Give People Money, recently said to Slate: “Now that Donald Trump was elected…people are really ticked off. In the event that there’s another recession, I think that the space for policymaking will expand even more radically, so maybe it is a time for just big ideas.”
These ideas are certainly big, but they aren’t exactly new, and over the last century, they’ve attracted support from some surprising sources. One early advocate was the young Robert A. Heinlein, who became interested in one such scheme while working on the socialist writer Upton Sinclair’s campaign for the governorship of California in 1934. A decade earlier, a British engineer named C.H. Douglas had outlined a plan called Social Credit, which centered on the notion that the government should provide a universal dividend to increase the purchasing power of individuals. As the Heinlein scholar Robert James writes in his afterword to the novel For Us, the Living:
Heinlein’s version of Social Credit argues that banks constantly used the power of the fractional reserve to profit by manufacturing money out of thin air, by “fiat.” Banks were (and are) required by federal law to keep only a fraction of their total loans on reserve at any time; they could thus manipulate the money supply with impunity…If you took away that power from the banks by ending the fractional reserve system, and instead let the government do the exact same thing for the good of the people, you could permanently resolve the disparities between production and consumption. By simply giving people the amount of money necessary to spring over the gap between available production and the power to consume, you could end the boom and bust business cycle permanently, and free people to pursue their own interests.
And many still argue that a universal basic income could be accomplished, at least in part, by fiat currency. As Lowrey writes in her book: “Dollars are not something that the United States government can run out of.”
Heinlein addressed these issues at length in For Us, the Living, his first attempt at a novel, which, as I’ve noted elsewhere, miraculously transports a man from the present into the future mostly so that he can be subjected to interminable lectures on monetary theory. Here’s one mercifully short example, which sounds a lot like the version of basic income that you tend to hear today:
Each citizen receives a check for money, or what amounts to the same thing, a credit to each account each month, from the government. He gets this free. The money so received is enough to provide the necessities of life for an adult, or to provide everything that a child needs for its care and development. Everybody gets these checks—man, woman, and child. Nevertheless, practically everyone works pretty regularly and most people have incomes from three or four times to a dozen or more times the income they receive from the government.
Years later, Heinlein reused much of this material in his far superior novel Beyond This Horizon, which also features a man from our time who objects to the new state of affairs: “But the government simply gives away all this new money. That’s rank charity. It’s demoralizing. A man should work for what he gets. But forgetting that aspect for a moment, you can’t run a government that way. A government is just like a business. It can’t be all outgo and no income.” And after he remains unwilling to concede that a government and a business might serve different ends, another character politely suggests that he go see “a corrective semantician.”
At first, it might seem incongruous to hear these views from Heinlein, who later became a libertarian icon, but it isn’t as odd as it looks. For one thing, the basic concept has defenders from across the political spectrum, including the libertarian Charles Murray, who wants to replace the welfare state by giving ten thousand dollars a year directly to the people. And Heinlein’s fundamental priority—the preservation of individual freedom—remained consistent throughout his career, even if the specifics changed dramatically. The system that he proposed in For Us, the Living was meant to free people to do what they wanted with their lives:
Most professional people work regularly because they like to…Some work full time and some part time. Quite a number of people work for several eras and then quit. Some people don’t work at all—not for money at least. They have simple tastes and are content to live on their heritage, philosophers and mathematicians and poets and such. There aren’t many like that however. Most people work at least part of the time.
Twenty years later, Heinlein’s feelings had evolved in response to the Cold War, as he wrote to his brother Rex in 1960: “The central problem of today is no longer individual exploitation but national survival…and I don’t think we will solve it by increasing the minimum wage.” But such a basic income might also serve as a survival tactic in itself. As Heller writes in The New Yorker, depending on one’s point of view, it can either be “a clean, crisp way of replacing gnarled government bureaucracy…[or] a stay against harsh economic pressures now on the horizon.”
A potent force of disintegration
As part of the production process these days, most nonfiction books from the major publishing houses get an automatic legal read—a review by a lawyer that is intended to check for anything potentially libelous about any living person. We can’t stop anyone from suing us, but we can make sure that we haven’t gone out of our way to invite it, and while most of the figures in Astounding have long since passed on, there are a handful who are still with us. As a result, I recently spent some time going over the relevant sections with a lawyer on the phone. The person on whom we ended up focusing the most, perhaps not surprisingly, was Harlan Ellison, who had a deserved reputation for being litigious, although he also liked to point out that he usually came out ahead. (After suing America Online for not promptly removing some of his stories that had been uploaded to a newsgroup on Usenet, Ellison explained in an interview that it was really about “slovenliness of thinking on the web” and the “slacker” philosophy that everything in life should be free: “If a professional gets published, well, any thief can steal it, and post it, and the thug feels abused if you whack him for it.” Ellison eventually received a settlement.) Mindful of this, we slowly went over the manuscript, checking each statement against its primary sources. Toward the end, the lawyer asked me if we had reasonable grounds for the sentence that described Ellison as “combative.” I replied: “Yes.”
Ellison died yesterday, and I never met or even corresponded with him, which is perhaps my greatest regret from the writing of Astounding. Two years ago, when I was just getting started, I wrote to him explaining the project and asking if I could interview him, but I never heard back. I don’t know if he ever saw the letter, and a mutual acquaintance told me that he was already too ill to respond to most of his mail. Ellison persists in the book as a kind of wraith in the background, appearing unexpectedly at various points in the narrative while trying to force his way into others. In an interview from the late seventies, he even claimed to have been in the room on the evening that L. Ron Hubbard came up with dianetics:
We were sitting around one night…who else was there? Alfred Bester, and Cyril Kornbluth, and Lester del Rey, and Ron Hubbard, who was making a penny a word, and had been for years…And somebody said, “Why don’t you invent a new religion? They’re always big.” We were clowning! You know, “Become Elmer Gantry! You’ll make a fortune!” He says, “I’m going to do it.” Sat down, stole a little bit from Freud, stole a little bit from Jung, a little bit from Adler…threw it all together, invented a few new words, because he was a science fiction writer, you know, “engrams” and “regression,” all that bullshit.
At the point at which this alleged event would have taken place, Ellison was a teenage kid living in Ohio. As another science fiction writer said to me: “Sometimes Harlan operates out of his own reality, which is always interesting but not necessarily identical to anybody else’s.”
Ellison may have never met Hubbard, but he interacted to one extent or another with the other subjects of my book, who often seemed bewildered by him—and I think it’s fair to say that he was the only science fiction writer of his generation who could plausibly seem like their match. He was very close to Asimov, while his relationship with Heinlein was cordial but distant, and John W. Campbell seems to have viewed him mostly as an irritant. On April 15, 1958, Ellison, who was twenty-four, wrote in a letter to Campbell: “From the relatively—doubly—safe position of being eight hundred miles removed from your grasp and logic, and being fairly certain I’ll never sell to you anyhow, I wish to make a comment…lost in the wilderness.” After complaining about a story by Murray Leinster, which he described as a blatant example of “Campbell push-buttoning,” he continued:
Now writing to Campbell is not bad. It has been the policy of Astounding since I was in rompers, and anything that produces the kind of stuff ASF does, must have merit. But I look with sincere alarm at the ridiculous trend in the magazine currently: writing stories with the psi factor used when plotting or solving the problem becomes too wearying. Leinster has done it. Several others have done it also. I note this for your information. You may crucify me at will, Greeley.
Ellison, who was stationed at the time in Fort Knox, Kentucky, signed the letter “with respect and friendliness.” No response from Campbell survives.
Ellison had a point about the direction in which Campbell was taking the magazine, and he never had any reason to revise his opinion. Nearly a decade later, in the groundbreaking anthology Dangerous Visions, he mocked the editor’s circle of subservient writers and spoke of “John W. Campbell, Jr., who used to edit a magazine that ran science fiction, called Astounding, and who now edits a magazine that runs a lot of schematic drawings, called Analog.” He did sell one story to Campbell, “Brillo,” a collaboration with Ben Bova that was supposed to be sent using a pseudonym, but was accidentally submitted under both of their names. But the editor’s feelings about Ellison were never particularly warm. Campbell once wrote to a correspondent: “In my terms, Ellison seems more of the Hitler-Genghis Khan type genius—he’s destructive, rather than constructive. The language lacks an adequate term for this type of entity; he’s not a hero, but an antihero means something more on the order of a hopeless, helpless slob than a potent force of disintegration.” He wrote elsewhere that Ellison needed “a muzzle more than a platform,” and another letter includes the amazing—but not atypical—lines: “I don’t know whether it’s the hyper-defensive attitude of the undersize or what, but [Ellison’s] an insulting little squirt with a nasty tongue. He’s one of the type that earned the appellation ‘kike’; as Einstein, Disraeli, and thousands of others have demonstrated, it ain’t racial—it’s personal.” Ellison never saw these letters, and as I transcribed them for the book, I wondered what he would think. There’s no way of knowing now. But I suspect that he would have liked it.
The Big One
In a heartfelt appreciation of the novelist Philip Roth, who died earlier this week, the New York Times critic Dwight Garner describes him as “the last front-rank survivor of a generation of fecund and authoritative and, yes, white and male novelists…[that] included John Updike, Norman Mailer and Saul Bellow.” These four names seem fated to be linked together for as long as any of them is still read and remembered, and they’ve played varying roles in my own life. I was drawn first to Mailer, who for much of my adolescence was my ideal of what a writer should be, less because of his actual fiction than thanks to my repeated readings of the juiciest parts of Peter Manso’s oral biography. (If you squint hard and think generously, you can even see Mailer’s influence in the way I’ve tried to move between fiction and nonfiction, although in both cases it was more a question of survival.) Updike, my favorite, was a writer I discovered after college. I agree with Garner that he probably had the most “sheer talent” of them all, and he represents my current model, much more than Mailer, of an author who could apparently do anything. Bellow has circled in and out of my awareness over the years, and it’s only recently that I’ve started to figure out what he means to me, in part because of his ambiguous status as a subject of biography. And Roth was the one I knew least. I’d read Portnoy’s Complaint and one or two of the Zuckerman novels, but I always felt guilty over having never gotten around to such late masterpieces as American Pastoral—although the one that I should probably check out first these days is The Plot Against America.
Yet I’ve been thinking about Roth for about as long as I’ve wanted to be a writer, largely because he came as close as anyone ever could to having the perfect career, apart from the lack of the Nobel Prize. He won the National Book Award for his debut at the age of twenty-six; he had a huge bestseller at an age when he was properly equipped to enjoy it; and he closed out his oeuvre with a run of major novels that critics seemed to agree were among the best that he, or anyone, had ever written. (As Garner nicely puts it: “He turned on the afterburners.”) But he never seemed satisfied by his achievement, which you can take as an artist’s proper stance toward his work, a reflection of the fleeting nature of such rewards, a commentary on the inherent bitterness of the writer’s life, or all of the above. Toward the end of his career, Roth actively advised young writers not to become novelists, and in his retirement announcement, which he delivered almost casually to a French magazine, he quoted Joe Louis: “I did the best I could with what I had.” A month later, in an interview with Charles McGrath of the New York Times, he expanded on his reasoning:
I know I’m not going to write as well as I used to. I no longer have the stamina to endure the frustration. Writing is frustration—it’s daily frustration, not to mention humiliation. It’s just like baseball: you fail two-thirds of the time…I can’t face any more days when I write five pages and throw them away. I can’t do that anymore…I knew I wasn’t going to get another good idea, or if I did, I’d have to slave over it.
And on his computer, he posted a note that gave him strength when he looked at it each day: “The struggle with writing is over.”
Roth’s readers, of course, rarely expressed the same disillusionment, and he lives most vividly in my mind as a reference point against which other authors could measure themselves. In an interview with The Telegraph, John Updike made one of the most quietly revealing statements that I’ve ever heard from a writer, when asked if he felt that he and Roth were in competition:
Yes, I can’t help but feel it somewhat. Especially since Philip really has the upper hand in the rivalry as far as I can tell. I think in a list of admirable novelists there was a time when I might have been near the top, just tucked under Bellow. But since Bellow died I think Philip has…he’s certainly written more novels than I have, and seems more dedicated in a way to the act of writing as a means of really reshaping the world to your liking. But he’s been very good to have around as far as goading me to become a better writer.
I think about that “list of admirable novelists” all the time, and it wasn’t just a joke. In an excellent profile in The New Yorker, Claudia Roth Pierpont memorably sketched in all the ways in which other writers warily circled Roth. When asked if the two of them were friends, Updike said, “Guardedly,” and Bellow seems to have initially held Roth at arm’s length, until his wife convinced him to give the younger writer a chance. Pierpont concludes of the relationship between Roth and Updike: “They were mutual admirers, wary competitors who were thrilled to have each other in the world to up their game: Picasso and Matisse.”
And they also remind me of another circle of writers whom I know somewhat better. If Bellow, Mailer, Updike, and Roth were the Big Four of the literary world, they naturally call to mind the Big Three of science fiction—Heinlein, Asimov, and Clarke. In each case, the group’s members were perfectly aware of how exceptional they were, and they carefully guarded their position. (Once, in a conference call with the other two authors, Asimov jokingly suggested that one of them should die to make room for their successors. Heinlein responded: “Fuck the other writers!”) Clarke and Asimov seem to have been genuinely “thrilled to have each other in the world,” but their relationship with the third point of the triangle was more fraught. Toward the end, Asimov started to “avoid” the combative Heinlein, who had a confrontation with Clarke over the Strategic Defense Initiative that effectively ended their friendship. In public, they remained cordial, but you can get a hint of their true feelings in a remarkable passage from the memoir I. Asimov:
[Clarke] and I are now widely known as the Big Two of science fiction. Until early 1988, as I’ve said, people spoke of the Big Three, but then Arthur fashioned a little human figurine of wax and with a long pin— At least, he has told me this. Perhaps he’s trying to warn me. I have made it quite plain to him, however, that if he were to find himself the Big One, he would be very lonely. At the thought of that, he was affected to the point of tears, so I think I’m safe.
As it turned out, Clarke, like Roth, outlived all the rest, and perhaps they felt lonely in the end. Longevity can amount to a kind of victory in itself. But it must be hard to be the Big One.
Thinkers of the unthinkable
At the symposium that I attended over the weekend, the figure whose name seemed to come up the most was Herman Kahn, the futurologist and military strategist best known for his book On Thermonuclear War. Kahn died in 1983, but he still looms large over futures studies, and there was a period in which he was equally inescapable in the mainstream. As Louis Menand writes in a harshly critical piece in The New Yorker: “Herman Kahn was the heavyweight of the Megadeath Intellectuals, the men who, in the early years of the Cold War, made it their business to think about the unthinkable, and to design the game plan for nuclear war—how to prevent it, or, if it could not be prevented, how to win it, or, if it could not be won, how to survive it…The message of [his] book seemed to be that thermonuclear war will be terrible but we’ll get over it.” And it isn’t surprising that Kahn engaged in a dialogue throughout his life with science fiction. In her book The Worlds of Herman Kahn, Sharon Ghamari-Tabrizi relates:
Early in life [Kahn] discovered science fiction, and he remained an avid reader throughout adulthood. While it nurtured in him a rich appreciation for plausible possibilities, [his collaborator Anthony] Wiener observed that Kahn was quite clear about the purposes to which he put his own scenarios. “Herman would say, ‘Don’t imagine that it’s an arbitrary choice as though you were writing science fiction, where every interesting idea is worth exploring.’ He would have insisted on that. The scenario must focus attention on a possibility that would be important if it occurred.” The heuristic or explanatory value of a scenario mattered more to him than its accuracy.
Yet Kahn’s thinking was inevitably informed by the genre. Ghamari-Tabrizi, who refers to nuclear strategy as an “intuitive science,” sees hints of “the scientist-sleuth pulp hero” in On Thermonuclear War, which is just another name for the competent man, and Kahn himself openly acknowledged the speculative thread in his work: “What you are doing today fundamentally is organizing a Utopian society. You are sitting down and deciding on paper how a society at war works.” On at least one occasion, he invoked psychohistory directly. In the revised edition of the book Thinking About the Unthinkable, Kahn writes of one potential trigger for a nuclear war:
Here we turn from historical fact to science fiction. Isaac Asimov’s Foundation novels describe a galaxy where there is a planet of technicians who have developed a long-term plan for the survival of civilization. The plan is devised on the basis of a scientific calculation of history. But the plan is upset and the technicians are conquered by an interplanetary adventurer named the Mule. He appears from nowhere, a biological mutant with formidable personal abilities—an exception to the normal laws of history. By definition, such mutants rarely appear but they are not impossible. In a sense, we have already seen a “mule” in this century—Hitler—and another such “mutant” could conceivably come to power in the Soviet Union.
And it’s both frightening and revealing, I think, that Kahn—even as he was thinking about the unthinkable—doesn’t take the next obvious step, and observe that such a mutant could also emerge in the United States.
Asimov wouldn’t have been favorably inclined toward the notion of a “winnable” nuclear war, but Kahn did become friendly with a writer whose attitudes were more closely aligned with his own. In the second volume of Robert A. Heinlein: In Dialogue with His Century, William H. Patterson describes the first encounter between the two men:
By September 20, 1962, [the Heinleins] were in Las Vegas…[They] met Dr. Edward Teller, who had been so supportive of the Patrick Henry campaign, as well as one of Teller’s colleagues, Herman Kahn. Heinlein’s ears pricked up when he was introduced to this jolly, bearded fat man who looked, he said, more like a young priest than one of the sharpest minds in current political thinking…Kahn was a science fiction reader and most emphatically a Heinlein fan.
Three years later, Heinlein attended a seminar, “The Next Ten Years: Scenarios and Possibilities,” that Kahn held at the Hudson Institute in New York. Heinlein—who looked like Quixote to Kahn’s Sancho Panza—was flattered by the reception:
If I attend an ordinary cocktail party, perhaps two or three out of a large crowd will know who I am. If I go to a political meeting or a church or such, I may not be spotted at all…But at Hudson Institute, over two-thirds of the staff and over half of the students button-holed me. This causes me to have a high opinion of the group—its taste, IQ, patriotism, sex appeal, charm, etc. Writers are incurably conceited and pathologically unsure of themselves; they respond to stroking the way a cat does.
And it wasn’t just the “stroking” that Heinlein liked, of course. He admired Thinking About the Unthinkable and On Thermonuclear War, both of which would be interesting to read alongside Farnham’s Freehold, which was published just a few years later. Both Heinlein and Kahn thought about the future through stories, in a pursuit that carried a slightly disreputable air, as Kahn implied in his use of the word “scenario”:
As near as I can tell, the term scenario was first used in this sense in a group I worked with at the RAND Corporation. We deliberately chose the word to deglamorize the concept. In writing the scenarios for various situations, we kept saying “Remember, it’s only a scenario,” the kind of thing that is produced by Hollywood writers, both hacks and geniuses.
You could say much the same about science fiction. And perhaps it’s appropriate that Kahn’s most lasting cultural contribution came out of Hollywood. Along with Wernher von Braun, he was one of the two most likely models for the title character in Dr. Strangelove. Stanley Kubrick immersed himself in Kahn’s work—the two men met a number of times—and Kahn’s reaction to the film was that of a writer, not a scientist. As Ghamari-Tabrizi writes:
The Doomsday Machine was Kahn’s idea. “Since Stanley lifted lines from On Thermonuclear War without change but out of context,” Kahn told reporters, he thought he was entitled to royalties from the film. Kahn pestered Kubrick several times about it, but the director held firm. “It doesn’t work that way!” he snapped, and that was that.
Checks and balances
About a third of the way through my upcoming book, while discussing the May 1941 issue of Astounding Science Fiction, I include the sentence: “The issue also featured Heinlein’s ‘Universe,’ which was based on Campbell’s premise about a lost generation starship.” My copy editor amended this to “a lost-generation starship,” to which I replied: “This isn’t a ‘lost-generation’ starship, but a generation starship that happens to be lost.” And the exchange gave me a pretty good idea for a story that I’ll probably never write. (I don’t really have a plot for it yet, but it would be about Hemingway and Fitzgerald on a trip to Alpha Centauri, and it would be called The Double Sun Also Rises.) But it also reminded me of one of the benefits of a copy edit, which is its unparalleled combination of intense scrutiny and total detachment. I sent drafts of the manuscript to some of the world’s greatest nitpickers, who saved me from horrendous mistakes, and the result wouldn’t be nearly as good without their advice. But there’s also something to be said for engaging the services of a diligent reader who doesn’t have any connection to the subject. I deliberately sought out feedback from a few people who weren’t science fiction fans, just to make sure that it remained accessible to a wider audience. And the ultimate example is the copy editor, who is retained to provide an impartial consideration of every semicolon without any preconceived notions outside the text. It’s what Heinlein might have had in mind when he invented the Fair Witness, who said when asked about the color of a nearby house: “It’s white on this side.”
But copy editors are human beings, not machines, and they occasionally get their moment in the spotlight. Recently, their primary platform has been The New Yorker, which has been quietly highlighting the work of its copy editors and fact checkers over the last few years. We can trace this tendency back to Between You & Me, a memoir by Mary Norris that drew overdue attention to the craft of copy editing. In “Holy Writ,” a delightful excerpt in the magazine, Norris writes of the supposed objectivity and rigor of her profession: “The popular image of the copy editor is of someone who favors rigid consistency. I don’t usually think of myself that way. But, when pressed, I do find I have strong views about commas.” And she says of their famous detachment:
There is a fancy word for “going beyond your province”: “ultracrepidate.” So much of copy editing is about not going beyond your province. Anti-ultracrepidationism. Writers might think we’re applying rules and sticking it to their prose in order to make it fit some standard, but just as often we’re backing off, making exceptions, or at least trying to find a balance between doing too much and doing too little. A lot of the decisions you have to make as a copy editor are subjective. For instance, an issue that comes up all the time, whether to use “that” or “which,” depends on what the writer means. It’s interpretive, not mechanical—though the answer often boils down to an implicit understanding of commas.
In order to be truly objective, in other words, you have to be a little subjective. Which is equally true of writing as a whole.
You could say much the same of the fact checker, who resembles the copy editor’s equally obsessive cousin. As a rule, books aren’t fact-checked, which is a point that we only seem to remember when the system breaks down. (Astounding was given a legal read, but I was mostly on my own when it came to everything else, and I’m grateful that some of the most potentially contentious material—about L. Ron Hubbard’s writing career—drew on an earlier article that was brilliantly checked by Matthew Giles of Longreads.) As John McPhee recently wrote of the profession:
Any error is everlasting. As Sara [Lippincott] told the journalism students, once an error gets into print it “will live on and on in libraries carefully catalogued, scrupulously indexed…silicon-chipped, deceiving researcher after researcher down through the ages, all of whom will make new errors on the strength of the original errors, and so on and on into an exponential explosion of errata.” With drawn sword, the fact-checker stands at the near end of this bridge. It is, in part, why the job exists and why, in Sara’s words, a publication will believe in “turning a pack of professional skeptics loose on its own galley proofs.”
McPhee continues: “Book publishers prefer to regard fact-checking as the responsibility of authors, which, contractually, comes down to a simple matter of who doesn’t pay for what. If material that has appeared in a fact-checked magazine reappears in a book, the author is not the only beneficiary of the checker’s work. The book publisher has won a free ticket to factual respectability.” And its absence from the publishing process feels like an odd evolutionary vestige of the book industry that ought to be fixed.
As a result of such tributes, the copy editors and fact checkers of The New Yorker have become cultural icons in themselves, and when an error does make it through, it can be mildly shocking. (Last month, the original version of a review by Adam Gopnik casually stated that Andrew Lloyd Webber was the composer of Chess, and although I knew perfectly well that this was wrong, I had to look it up to make sure that I hadn’t strayed over into a parallel universe.) And their emergence at this particular moment may not be an accident. The first installment of “Holy Writ” appeared on February 23, 2015, just a few months before Donald Trump announced that he was running for president, plunging us all into a world in which good grammar and factual accuracy can seem less like matters of common decency than obstacles to be obliterated. Even though the timing was a coincidence, it’s tempting to read our growing appreciation for these unsung heroes as a statement about the importance of the truth itself. As Alyssa Rosenberg writes in the Washington Post:
It’s not surprising that one of the persistent jokes from the Trump era is the suggestion that we’re living in a bad piece of fiction…Pretending we’re all minor characters in a work of fiction can be a way of distancing ourselves from the seeming horror of our time or emphasizing our own feelings of powerlessness, and pointing to “the writers” often helps us deny any responsibility we may have for Trump, whether as voters or as journalists who covered the election. But whatever else we’re doing when we joke about Trump and the swirl of chaos around him as fiction, we’re expressing a wish that this moment will resolve in a narratively and morally comprehensible fashion.
Perhaps we’re also hoping that reality itself will have a fact checker after all, and that the result will make a difference. We don’t know if it will yet. But I’m hopeful that we’ll survive the exponential explosion of errata.
The kitsch of survival
Note: I’m taking a few days off, so I’ll be republishing some of my favorite pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on February 7, 2017.
Last year, The New Yorker published a fascinating article by Evan Osnos on the growing survivalist movement among the very wealthy. Osnos quotes an unnamed source who estimates that fifty percent of Silicon Valley billionaires have some kind of survival plan in place—an estimate that actually strikes me as a little too low. And it may well have grown in the meantime. (As one hedge fund manager is supposed to have said: “What’s the percentage chance that Trump is actually a fascist dictator? Maybe it’s low, but the expected value of having an escape hatch is pretty high.”) Osnos also pays a visit to the Survival Condo Project, a former missile silo near Wichita that has been converted into a luxury underground bunker. It includes twelve apartments, all of which have already been sold, that prospective residents can decorate to their personal tastes:
We stopped in a condo. Nine-foot ceilings, Wolf range, gas fireplace. “This guy wanted to have a fireplace from his home state”—Connecticut—“so he shipped me the granite,” [developer Larry] Hall said. Another owner, with a home in Bermuda, ordered the walls of his bunker-condo painted in island pastels—orange, green, yellow—but, in close quarters, he found it oppressive. His decorator had to come fix it.
Osnos adds: “The condo walls are fitted with L.E.D. ‘windows’ that show a live video of the prairie above the silo. Owners can opt instead for pine forests or other vistas. One prospective resident from New York City wanted video of Central Park.”
As I read the article’s description of tastefully appointed bunkers with fake windows, it occurred to me that there’s a word that perfectly sums up most forms of survivalism, from the backwoods prepper to the wealthy venture capitalist with a retreat in New Zealand. It’s kitsch. We tend to associate the concept of kitsch with cheapness or tackiness, but on a deeper level, it’s really about providing a superficial emotional release while closing off the possibility of meaningful thought. It offers us sentimental illusions, built on clichés, in the place of real feeling. As the philosopher Roger Scruton has said: “Kitsch is fake art, expressing fake emotions, whose purpose is to deceive the consumer into thinking he feels something deep and serious.” Even more relevant is Milan Kundera’s unforgettable exploration of the subject in The Unbearable Lightness of Being, in which he observes that kitsch is the defining art form of the totalitarian state. He concludes: “Kitsch is the absolute denial of shit, in both the literal and the figurative senses of the word; kitsch excludes everything from its purview which is essentially unacceptable in human existence.” This might seem like an odd way to characterize survivalism, which is supposedly a confrontation with the unthinkable, but it’s actually a perfect description. The underlying premise of survivalism is that by stocking up on beans and bullets, you can make your existence after the collapse of civilization more tolerable, even pleasant, in the face of all evidence to the contrary. It’s a denial of shit on the most fundamental level, in which a nuclear war causing the incendiary deaths of millions is sentimentalized into a playground for the competent man. And, like all kitsch, it provides a comforting daydream that allows its adherents to avoid more important questions of collective survival.
Survivalism has often been dismissed as a form of consumerism, an excuse to play Rambo with expensive guns and toys, but it also embodies a perverse form of nostalgia. The survivalist mindset is usually traced back to the Cold War, in which schoolchildren were trained to duck and cover in their classrooms while the government encouraged their parents to build fallout shelters, and it came into its own as a movement during the hyperinflation and oil shortages of the seventies. In fact, the impulse goes back at least to the days after Pearl Harbor, when an attack on the East or West Coast seemed like a genuine possibility, leading to blackout drills, volunteer air raid wardens, and advice on how to prepare for the worst at home. (I have a letter from John W. Campbell to Robert A. Heinlein dated December 12, 1941, in which he talks about turning his basement into a bomb shelter, complete with porch furniture and a lamp powered by a car battery, and coldly evaluates the odds of an air raid being directed at his neighborhood in New Jersey.) It’s significant that World War II was the last conflict in which the prospect of a conventional invasion of the United States—and the practical measures that one would take to prepare for it—was even halfway plausible. Faced with the possibility of the war coming to American shores, households took precautions that were basically reasonable, even if they amounted to a form of wishful thinking. And it’s a little horrifying to see how quickly these assumptions were channeled toward a nuclear war, an utterly different kind of event that makes total nonsense of individual preparations. Survivalism is a type of kitsch that looks back fondly to the times in which a war in the developed world could be fought on a human scale, rather than as an impersonal cataclysm in which the actions of ordinary men and women were rendered wholly meaningless.
Like most kinds of kitsch, survivalism reaches its nadir of tastelessness among the nouveau riche, who have the resources to indulge themselves in ways that most of us can’t afford. (Paul Fussell, in his wonderful book Class, speculated that the American bathroom is the place where the working classes express the fantasy of “What I’d Do If I Were Really Rich,” and you could say much the same thing about a fallout shelter, which is basically a bathroom with cots and canned goods.) And it makes it possible to postpone an uncomfortable confrontation with the real issues. In his article, Osnos interviews one of my heroes, the Whole Earth Catalog founder Stewart Brand, who gets at the heart of the problem:
[Brand] sees risks in escapism. As Americans withdraw into smaller circles of experience, we jeopardize the “larger circle of empathy,” he said, the search for solutions to shared problems. “The easy question is, How do I protect me and mine? The more interesting question is, What if civilization actually manages continuity as well as it has managed it for the past few centuries? What do we do if it just keeps on chugging?”
Survivalism ignores these questions, and it also makes it possible for someone like Peter Thiel, whose estate and backup citizenship in New Zealand provide him with the ultimate insurance policy, to endorse a social experiment in which millions of the less fortunate face the literal loss of their insurance. And we shouldn’t be surprised. When you look at the measures that many survivalists take, you find that they aren’t afraid of the bomb, but of other Americans—the looters, the rioters, and the leeches whom they expect to descend after the grid goes down. There’s nothing wrong with making rational preparations for disaster. But it’s only a short step from survival kits to survival kitsch.
When Clarke Met Kubrick
Note: To celebrate the fiftieth anniversary of the release of 2001: A Space Odyssey, which held its premiere on April 2, 1968, I’ll be spending the week looking at various aspects of what remains the greatest science fiction movie ever made.
“I’m reading everything by everybody,” Stanley Kubrick said one day over lunch in New York. It was early 1964, and he was eating at Trader Vic’s with Roger A. Caras, a wildlife photographer and studio publicist who was working at the time for Columbia Pictures. Dr. Strangelove had just been released, and after making small talk about their favorite brand of telescope, Caras asked the director what he had in mind for his next project. Kubrick replied that he was thinking about “something on extraterrestrials,” but he didn’t have a writer yet, and in the meantime, he was consuming as much science fiction as humanly possible. Unfortunately, we don’t know much about what he was reading, which is a frustrating omission in the career of a filmmaker whose archives have been the subject of so many exhaustive studies. In his biography of Kubrick, Vincent LoBrutto writes tantalizingly of this period: “Every day now boxes of science fiction and fact books were being delivered to his apartment. Kubrick was immersing himself in a subject he would soon know better than most experts. His capacity to grasp and disseminate information stunned many who worked with him.” LoBrutto notes that Kubrick took much the same approach a decade later on the project that became The Shining, holing up in his office with “stacks of horror books,” and the man with whom he would eventually collaborate on 2001 recalled of their first meeting: “[Kubrick] had already absorbed an immense amount of science fact and science fiction, and was in some danger of believing in flying saucers.” At their lunch that day at Trader Vic’s, however, Caras seemed to think that all of this work was unnecessary, and he told this to Kubrick in no uncertain terms: “Why waste your time? Why not just start with the best?”
Let’s pause the tape here for a moment to consider what other names Caras might plausibly have said. A year earlier, in his essay “The Sword of Achilles,” Isaac Asimov provided what we can take as a fairly representative summary of the state of the genre:
Robert A. Heinlein is usually considered the leading light among good science fiction writers. Others with a fine grasp of science and a fascinatingly imaginative view of its future possibilities are Arthur C. Clarke, Frederik Pohl, Damon Knight, James Blish, Clifford D. Simak, Poul Anderson, L. Sprague de Camp, Theodore Sturgeon, Walter Miller, A.J. Budrys…These are by no means all.
Even accounting for the writer and the time period, there are a few noticeable omissions—it’s surprising not to see Lester del Rey, for instance, while A.E. van Vogt, who might not have written what Asimov saw as “good science fiction,” had been voted one of the top four writers in the field in a pair of polls a few years earlier. It’s also necessary to add Asimov himself, who at the time was arguably the science fiction writer best known to general readers. (In 1964, he would even be mentioned briefly in Saul Bellow’s novel Herzog, which was the perfect intersection of the highbrow and the mainstream.) Arthur C. Clarke’s high ranking wasn’t just a matter of personal affection, either—he and Asimov later became good friends, but when the article was published, they had only met a handful of times. Clarke, in other words, was clearly a major figure. But it seems fair to say that anyone claiming to name “the best” science fiction writer in the field might very well have gone with Asimov or Heinlein instead.
Caras, of course, recommended Clarke, whom he had first met five years earlier at a weekend in Boston with Jacques Cousteau. Kubrick was under the impression that Clarke was a recluse, “a nut who lives in a tree in India someplace,” and after being reassured that he wasn’t, the director became excited: “Jesus, get in touch with him, will you?” Caras sent Clarke a telegram to ask about his availability, and when the author said that he was “frightfully interested,” Kubrick wrote him a fateful letter:
It’s a very interesting coincidence that our mutual friend Caras mentioned you in a conversation we were having about a Questar telescope. I had been a great admirer of your books for quite a time and had always wanted to discuss with you the possibility of doing the proverbial “really good” science-fiction movie…Roger tells me you are planning to come to New York this summer. Do you have an inflexible schedule? If not, would you consider coming sooner with a view to a meeting, the purpose of which would be to determine whether an idea might exist or arise which could sufficiently interest both of us enough to want to collaborate on a screenplay?
This account of the conversation differs slightly from Caras’s recollection—Kubrick doesn’t say that they were actively discussing potential writers for a film project, and he may have been flattering Clarke slightly with the statement that he had “always wanted” to talk about a movie with him. But it worked. Clarke wrote back to confirm his interest, and the two men finally met in New York on April 22, where the author did his best to talk Kubrick out of his newfound interest in flying saucers.
But why Clarke? At the time, Kubrick was living on the Upper East Side, which placed him within walking distance of many science fiction authors who were considerably closer than Ceylon, and it’s tempting to wonder what might have happened if he had approached Heinlein or Asimov, both of whom would have been perfectly sensible choices. A decade earlier, Heinlein made a concerted effort to break into Hollywood with the screenplays for Destination Moon and Project Moon Base, and the year before, he had written an unproduced teleplay for a proposed television show called Century XXII. (Kubrick studied Destination Moon for its special effects, if not for its story, as we learn from the correspondence of none other than Roger Caras, who had gone to work for Kubrick’s production company.) Asimov, for his part, was more than willing to explore such projects—in years to come, he would meet to discuss movies with Woody Allen and Paul McCartney, and I’ve written elsewhere about his close encounter with Steven Spielberg. But if Kubrick went with Clarke instead, it wasn’t just because they had a friend in common. At that point, Clarke was a highly respected writer, but not yet a celebrity outside the genre, and the idea of a “Big Three” consisting of Asimov, Clarke, and Heinlein was still a decade away. His talent was undeniable, but he was also a more promising candidate for the kind of working relationship that the director had in mind, which Kubrick later estimated as “four hours a day, six days a week” for more than three years. I suspect that Kubrick recognized what might best be described as a structural inefficiency in the science fiction market. The time and talents of one of the most qualified writers imaginable happened to be undervalued and available at just the right moment. When the opportunity came, Kubrick seized it. And it turned out to be one hell of a bargain.