Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘John W. Campbell’

To the stars


In a few hours, if all goes according to plan, I’ll be delivering the contracted draft of Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction to my publisher. Last night, I had trouble sleeping, and I found myself remembering a passage from an essay by Algis Budrys that I read at the beginning of this project:

It’s becoming increasingly obvious that we need a long, objective look at John W. Campbell, Jr. But we’re not likely to get one…Obviously, no one who knew him well enough to work for him at any length could have retained an objective view of him; the most we can hope for from that quarter would be a series of memoirs which, taken all together and read by some ideally situated observer, might distill down into some single resultant—which all its parents would disown…But, obviously, no one who failed to feel his effect, or who rebelled against his effect, or lost interest in his effect, is apt to understand matters well enough to tell us exactly what he did and how he did it. At best, we’ll hear he had feet of clay. How those feet are described by each expositor may eventually produce some sort of resultant.

Budrys wrote these words more than forty years ago, and while I can’t say that I’ve always managed to be an “ideally situated observer,” I’d like to think that I’ve occasionally come close, thanks largely to the help that I’ve received from the friends of this book, who collectively—and often individually—know far more about the subject than I ever will.

Along the way, there have also been moments when the central figures seemed to reach out and speak to me directly. In a footnote in In Memory Yet Green, the first volume of his gargantuan memoir, which I still manage to enjoy even after immersing myself in it for most of the last two years, Isaac Asimov writes:

You wouldn’t think that with this autobiography out there’d be any need for a biography, but undoubtedly there’ll be someone who will consider this record of mine so biased, so self-serving, so ridiculous that there will be need for a scholarly, objective biography to set the record straight. Well, I wish him luck.

And in a letter to Syracuse University, Campbell wrote: “Sorry, but any scholarly would-be biographers are going to have a tough time finding any useful documentation on me! I just didn’t keep the records!” (Luckily for me, he was wrong.) Heinlein probably wouldn’t have cared for this project, either. As he said of a proposed study of his career by Alexei Panshin: “I preferred not to have my total corpus of work evaluated in print until after I was dead…but in any case, I did not want a book published about me written by a kid less than half my age and one who had never written a novel himself—and especially one who had tried to pick a fight with me in the past.” And we’re not even going to talk about Hubbard yet. For now, I’m going to treat myself to a short break, wait for notes, and take a few tentative steps toward figuring out what comes next. In the meantime, I can only echo what Martin Amis wrote over three decades ago: “I knew more about Isaac Asimov than I knew about anyone else alive. What could there be left to add?”

Written by nevalalee

December 4, 2017 at 9:06 am

The notebook and the brain


A little over two decades ago, the philosophers Andy Clark and David Chalmers published a paper titled “The Extended Mind.” Its argument, which no one who encounters it is likely to forget, is that the human mind isn’t confined to the bounds of the skull, but includes many of the tools and external objects that we use to think, from grocery lists to Scrabble tiles. The authors present an extended thought experiment about a man named Otto who suffers from Alzheimer’s disease, which obliges him to rely on his notebook to remember how to get to a museum. They argue that this notebook is effectively occupying the role of Otto’s memory, but only because it meets a particular set of criteria:

First, the notebook is a constant in Otto’s life—in cases where the information in the notebook would be relevant, he will rarely take action without consulting it. Second, the information in the notebook is directly available without difficulty. Third, upon retrieving information from the notebook he automatically endorses it. Fourth, the information in the notebook has been consciously endorsed at some point in the past, and indeed is there as a consequence of this endorsement.

The authors conclude: “The information in Otto’s notebook, for example, is a central part of his identity as a cognitive agent. What this comes to is that Otto himself is best regarded as an extended system, a coupling of biological organism and external resources…Once the hegemony of skin and skull is usurped, we may be able to see ourselves more truly as creatures of the world.”

When we think and act, we become agents that are “spread into the world,” as Clark and Chalmers put it, and this extension is especially striking during the act of writing. In an article that appeared just last week in The Atlantic, “You Think With the World, Not Just Your Brain,” Sam Kriss neatly sums up the problem: “Language sits hazy in the world, a symbolic and intersubjective ether, but at the same time it forms the substance of our thought and the structure of our understanding. Isn’t language thinking for us?” He continues:

This is not, entirely, a new idea. Plato, in his Phaedrus, is hesitant or even afraid of writing, precisely because it’s a kind of artificial memory, a hypomnesis…Writing, for Plato, is a pharmakon, a “remedy” for forgetfulness, but if taken in too strong a dose it becomes a poison: A person no longer remembers things for themselves; it’s the text that remembers, with an unholy autonomy. The same criticisms are now commonly made of smartphones. Not much changes.

The difference, of course, is that our own writing implies the involvement of the self in the past, which is a dialogue that doesn’t exist when we’re simply checking information online. Clark and Chalmers, who wrote at a relatively early stage in the history of the Internet, are careful to make this distinction: “The Internet is likely to fail [the criteria] on multiple counts, unless I am unusually computer-reliant, facile with the technology, and trusting, but information in certain files on my computer may qualify.” So can the online content that we make ourselves—I’ve occasionally found myself checking this blog to remind myself what I think about something, and I’ve outsourced much of my memory to Google Photos.

I’ve often written here about the dialogue between our past, present, and future selves implicit in the act of writing, whether we’re composing a novel or jotting down a Post-It note. Kriss quotes Jacques Derrida on the humble grocery list: “At the very moment ‘I’ make a shopping list, I know that it will only be a list if it implies my absence, if it already detaches itself from me in order to function beyond my ‘present’ act and if it is utilizable at another time.” And I’m constantly aware of the book that I’m writing as a form of time travel. As I mentioned last week, I’m preparing the notes, which means that I often have to make sense of something that I wrote down over two years ago. There are times when the presence of that other self is so strong that it feels as if he’s seated next to me, even as I remain conscious of the gap between us. (For one thing, my past self didn’t know nearly as much about John W. Campbell.) And the two of us together are wiser, more effective, and more knowledgeable than either one of us alone, as long as we have writing to serve as a bridge between us. If a notebook is a place for organizing information that we can’t easily store in our heads, that’s even more true of a book written for publication, which serves as a repository of ideas to be manipulated, rearranged, and refined over time. This can lead to the odd impression that your book somehow knows more than you do, which it probably does. Knowledge is less about raw data than about the connections between them, and a book is the best way we have for compiling our moments of insight in a form that can be processed more or less all at once. We measure ourselves against the intelligence of authors in books, but we’re also comparing two fundamentally different things. Whatever ideas I have right now on any given subject probably aren’t as good as a compilation of everything that occurred to my comparably intelligent double over the course of two or three years.

This implies that most authors are useful not so much for their deeper insights as for their greater availability, which allows them to externalize their thoughts and manipulate them in the real world for longer and with more intensity than their readers can. (Campbell liked to remind his writers that the magazine’s subscribers were paying them to think on their behalf.) I often remember one of my favorite anecdotes about Isaac Asimov, which he shares in the collection Opus 100. He was asked to speak on the radio on nothing less than the human brain, on which he had just published a book. Asimov responded: “Heavens! I’m not a brain expert.” When the interviewer pointed out that he had just written an entire book on the subject, Asimov explained:

“Yes, but I studied up for the book and put in everything I could learn. I don’t know anything but the exact words in the book, and I don’t think I can remember all those in a pinch. After all,” I went on, a little aggrieved, “I’ve written books on dozens of subjects. You can’t expect me to be expert on all of them just because I’ve written books about them.”

Every author can relate to this, and there are times when “I don’t know anything but the exact words in the book” sums up my feelings about my own work. Asimov’s case is particularly fascinating because of the scale involved. By some measures, he was the most prolific author in American history, with over four hundred books to his credit, and even if we strip away the anthologies and other works that he used to pad the count, it’s still a huge amount of information. To what extent was Asimov coterminous with his books? The answer, I think, lies somewhere between “Entirely” and “Not at all,” and there was presumably more of Asimov in his memoirs than in An Easy Introduction to the Slide Rule. But he’s only an extreme version of a phenomenon that applies to every last one of us. When the radio interviewer asked incredulously if he was an expert on anything, Asimov responded: “I’m an expert on one thing. On sounding like an expert.” And that’s true of everyone. The notes that we take allow us to pose as experts in the area that matters the most—the world around us, and even our own lives.

Written by nevalalee

October 25, 2017 at 8:40 am

When Del met Elron


Last week, I posted a quote about the legendary acting teacher and performer Del Close, who is revered as one of the founders of modern improvisational comedy. (Close served as the “house metaphysician” for years on Saturday Night Live, and his students included John Belushi, Bill Murray, and Mike Myers. He only rarely appeared on camera himself, but you might recognize him from a very peculiar cameo in one scene in The Untouchables, in which he plays the alderman who tries to bribe Eliot Ness.) While reading about his life, I also came across the interesting claim that Close had met L. Ron Hubbard sometime in the early fifties. As Kim Howard Johnson notes in the biography The Funniest One in the Room, Close was a science fiction fan in his teens in Kansas, reading such pulps as Startling Stories and making plans to publish his own fanzine, and his attention was caught by a noteworthy development in the genre: “During the summer of their sophomore year, Del introduced [a friend] to Dianetics, the book by then-science fiction author L. Ron Hubbard, and Del led them in experiments in prebirth awareness.” There was nothing particularly unusual about this—dianetics was unquestionably the story of the year among fans, and a majority of readers were disposed to approach it favorably. Most teenagers in the Midwest had to be content with observing the movement from a distance, but fate intervened, as Close recalled years later:

I immediately fell madly in love with [local actress Aneta Corsaut]…I was utterly enthralled with this young lady. I used to go down to Wichita—well, that’s where the bus went, then you get a bus from Wichita to Hutchinson, which is about thirty-five miles further on. That’s where I met L. Ron Hubbard, was visiting Aneta.

Hubbard had moved to Wichita at the invitation of his benefactor Don Purcell, a local real estate investor and businessman who had rescued him after the sudden implosions of the dianetics foundations in Los Angeles and Elizabeth, New Jersey. Close documented his visit to Hubbard, which seems to have taken place sometime in the second half of 1951, in an autobiographical story in the comic book Wasteland, which he wrote with John Ostrander in the late eighties. I’ve gotten my hands on a copy of the issue, and it’s quite something. It opens with a dramatization of one of Close’s dreams, in which he’s living on an island with a goat, a lion, and a “mother bear.” He’s reluctant to leave, protesting that he can’t breathe water, but the goat butts him off the edge of a cliff. The scene then cuts to the auditing session in Wichita, where Hubbard, identified as “Elron,” asks Close: “Strange dream. Were you delivered with forceps?” Hubbard proposes that they check with Close’s mother, but the teenager refuses to consider it. After offering his interpretation—“Well, I don’t ordinarily deal in dreams—leave that to the psychiatrists—but this is obviously a birth dream”—Hubbard invites Close to have a fencing match. As they cross sabers, Hubbard suggests that the bear, who hums rhythmically throughout the dream, is a memory of the mother’s heartbeat, while the pressure of the goat’s horns represents her ribs. He informs Close that this will be their last auditing session, saying that he’s having “some serious difficulties with the powers that be,” and gives the unwary fan a whack across the face. Before they part ways, Hubbard muses over turning dianetics into a religion, and he’s thrilled when Close asks him to autograph his novel Death’s Deputy: “I don’t have a copy of this myself! Let me buy it off ya!” Close leaves, thinking to himself: “I feel like the goat has kicked me out again.” And the story ends there.

There’s no way to know for sure, but the account strikes me as utterly convincing, with many small details that would never occur to anyone who was simply fabricating a story. Hubbard’s suggestion that they call Close’s mother recalls an incident in the book Dianetics, in which an anonymous patient—actually John W. Campbell himself—recounted a birth memory that was then checked directly with the source:

Objective reality did not matter but this patient had a mother near at hand and objective reality was established simply by returning her in therapy to his birth. They had not communicated about it in detail. The recording of her sequence compared word for word with his sequence, detail for detail, name for name.

Hubbard had fenced with Jack Parsons in Pasadena, including one memorable incident with the woman who became his second wife, as George Pendle recounts in Strange Angel: “Hubbard, regaining his composure after the initial ferocity of the attack, fought the formidable Betty back a few steps and stopped the assault by rapping her smartly across the nose with his foil.” And Hubbard’s identification of the humming bear with the mother’s heartbeat recalls a similar lecture that Campbell gave to Frederik Pohl in 1950, after asking if he ever had migraines:

And I said, “No, I’ve never had a migraine headache,” and [Campbell] said, “Most people do, and I know how they’re caused—they’re caused by the fetal memory. Because in the womb of the mother, there are these rhythmic sounds. There’s this slow one”—the food gurgling down her intestinal canal or something—“and a rapid one which is her heartbeat.” And he beat them out simultaneously on the desk and I got the damnedest headache I ever had in my life.

The comic is also filled with numerous touches that aren’t conclusive in themselves, but which ring very true, like the fact that Close asks Hubbard to sign a copy of Death’s Deputy. (It’s probably Hubbard’s best novel, but it’s fallen into obscurity, and it isn’t a title that would occur to most people.) Johnson’s biography of Close takes it as an accurate representation:

The comic book story agrees with the accounts Del would give to friends of his time with Hubbard. In his later years, Del would explain that Hubbard cured his asthma in 1951 at the Wichita Dianetics Foundation; however, Del also said that Hubbard taught him to smoke Kools. He claimed that Hubbard was always complaining about the AMA and the IRS, reiterating his desire to start a religion. His retellings of his experiences with Hubbard remained consistent, and there is little doubt he was being truthful.

If anything, those Kools might be the most convincing detail of all—they were Hubbard’s cigarette of choice from at least the early fifties until his death. Close’s account is particularly valuable because it’s one of the few outside glimpses we have of Hubbard during a crucial period in his career, when he was transitioning from dianetics into what would soon become the Church of Scientology. If Close can be trusted, the transformation into a religion was on the founder’s mind as early as 1951, which is a useful data point—its earliest prior appearance in the public record was a letter from Hubbard to Helen O’Brien, dated April 10, 1953, in which he wrote: “I await your reaction on the religion angle.” Which doesn’t mean that it was a coherent plan. Hubbard rarely seemed to know what he was doing from one week to the next, and for most of his improbable life, he was improvising.

The men who sold the moonshot


When you ask Google whether we should build houses on the ocean, it gives you a bunch of predictable search results. If you ask Google X, the subsidiary within the company responsible for investigating “moonshot” projects like self-driving cars and space elevators, the answer that you get is rather different, as Derek Thompson reports in the cover story for this month’s issue of The Atlantic:

Like a think-tank panel with the instincts of an improv troupe, the group sprang into an interrogative frenzy. “What are the specific economic benefits of increasing housing supply?” the liquid-crystals guy asked. “Isn’t the real problem that transportation infrastructure is so expensive?” the balloon scientist said. “How sure are we that living in densely built cities makes us happier?” the extradimensional physicist wondered. Over the course of an hour, the conversation turned to the ergonomics of Tokyo’s high-speed trains and then to Americans’ cultural preference for suburbs. Members of the team discussed commonsense solutions to urban density, such as more money for transit, and eccentric ideas, such as acoustic technology to make apartments soundproof and self-driving housing units that could park on top of one another in a city center. At one point, teleportation enjoyed a brief hearing.

Thompson writes a little later: “I’d expected the team at X to sketch some floating houses on a whiteboard, or discuss ways to connect an ocean suburb to a city center, or just inform me that the idea was terrible. I was wrong. The table never once mentioned the words floating or ocean. My pitch merely inspired an inquiry into the purpose of housing and the shortfalls of U.S. infrastructure. It was my first lesson in radical creativity. Moonshots don’t begin with brainstorming clever answers. They start with the hard work of finding the right questions.”

I don’t know why Thompson decided to ask about “oceanic residences,” but I read this section of the article with particular interest, because about two years ago, I spent a month thinking about the subject intensively for my novella “The Proving Ground.” As I’ve described elsewhere, I knew early on in the process that it was going to be a story about the construction of a seastead in the Marshall Islands, which was pretty specific. There was plenty of background material available, ranging from general treatments of the idea in books like The Millennial Project by Marshall T. Savage—which had been sitting unread on my shelf for years—to detailed proposals for seasteads in the real world. The obvious source was The Seasteading Institute, a libertarian pipe dream funded by Peter Thiel that generated a lot of useful plans along the way, as long as you saw it as the legwork for a science fiction story, rather than as a project on which you were planning to actually spend fifty billion dollars. The difference between most of these proposals and the brainstorming session that Thompson describes is that they start with a floating city and then look for reasons to justify it. Seasteading is a solution in search of a problem. In other words, it’s science fiction, which often starts with a premise or setting that seems like it would lead to an exciting story and then searches for the necessary rationalizations. (The more invisible the process, the better.) And this can lead us to troubling places. As I’ve noted before, Thiel blames many of this country’s problems on “a failure of imagination,” and his nostalgia for vintage science fiction is rooted in a longing for the grand gestures that it embodied: the flying car, the seastead, the space colony. As he famously said six years ago to The New Yorker: “The anthology of the top twenty-five sci-fi stories in 1970 was, like, ‘Me and my friend the robot went for a walk on the moon,’ and in 2008 it was, like, ‘The galaxy is run by a fundamentalist Islamic confederacy, and there are people who are hunting planets and killing them for fun.'”

Google X isn’t immune to this tendency—Google Glass was, if anything, a solution in search of a problem—and some degree of science-fictional thinking is probably inherent to any such enterprise. In his article, Thompson doesn’t mention science fiction by name, but the whole division is clearly reminiscent of and inspired by the genre, down to the term “moonshot” and that mysterious letter at the end of its name. (Company lore claims that the “X” was chosen as “a purposeful placeholder,” but it’s hard not to think that it was motivated by the same impulse that gave us Dimension X, X Minus 1, Rocketship X-M, and even The X-Files.) In fact, an earlier article for The Atlantic looked at this connection in depth, and its conclusions weren’t altogether positive. Three years ago, in the same publication, Robinson Meyer quoted a passage from an article in Fast Company about the kinds of projects favored by Google X, but he drew a more ambivalent conclusion:

A lot of people might read that [description] and think: Wow, cool, Google is trying to make the future! But “science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams. We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been. The futurist Scott Smith calls these “flat-pack futures,” and they infect “science fictional” thinking.

He added: “I fear—especially when we talk about ‘science fiction’—that we miss the layeredness of the world, that many people worked to build it…Flying through space is awesome, but if technological advocates want not only to make their advances but to hold onto them, we had better learn the virtues of incrementalism.” (The contrast between Meyer’s skepticism and Thompson’s more positive take feels like a matter of access—it’s easier to criticize Google X’s assumptions when it’s being profiled by a rival magazine.)

But Meyer makes a good point, and science fiction’s mixed record at dealing with incrementalism is a natural consequence of its origins in popular fiction. A story demands a protagonist, which encourages writers to see scientific progress in terms of heroic figures. The early fiction of John W. Campbell returns monotonously to the same basic plot, in which a lone genius discovers atomic power and uses it to build a spaceship, drawing on the limitless resources of a wealthy and generous benefactor. As Isaac Asimov noted in his essay “Big, Big, Big”:

The thing about John Campbell is that he liked things big. He liked big men with big ideas working out big applications of their big theories. And he liked it fast. His big men built big weapons within days; weapons that were, moreover, without serious shortcomings, or at least, with no shortcomings that could not be corrected as follows: “Hmm, something’s wrong—oh, I see—of course.” Then, in two hours, something would be jerry-built to fix the jerry-built device.

This works well enough in pulp adventure, but after science fiction began to take itself seriously as prophecy, it fossilized into the notion that all problems can be approached as provinces of engineering and solved by geniuses working alone or in small groups. Elon Musk has been compared to Tony Stark, but he’s really the modern incarnation of a figure as old as The Skylark of Space, and the adulation that he still inspires shades into beliefs that are even less innocuous—like the idea that our politics should be entrusted to similarly big men. Writing of Google X’s Rapid Evaluation team, Thompson uses terms that would have made Campbell salivate: “You might say it’s Rapid Eval’s job to apply a kind of future-perfect analysis to every potential project: If this idea succeeds, what will have been the challenges? If it fails, what will have been the reasons?” Science fiction likes to believe that it’s better than average at this kind of forecasting. But it’s just as likely that it’s worse.

Written by nevalalee

October 11, 2017 at 9:02 am

The flicker effect


In 1953, the neurologist and roboticist W. Grey Walter published an extraordinary book titled The Living Brain. Among his many other accomplishments, Walter was a pioneer in the use of the electroencephalograph to study the brain’s electrical activity, which the book described for the first time for a wide popular audience, although it became more famous for the chapter “Revelation by Flicker.” It described how stroboscopic light could produce epileptic seizures and other neurological reactions, including one particularly memorable anecdote: “A man found that when he went to the cinema he would suddenly feel an irresistible impulse to strangle the person next to him.” And when Walter tested the equipment on his own team, he became aware of some unusual effects:

In the biological sciences it is a good principle to be your own rabbit, to experiment on yourself; in electroencephalography the practice is widespread, convenient, and harmless. Whenever a new instrument is to be tested or calibrated, normal subjects from among the laboratory staff are used as “signal generators”…When we started to use high-power electronic stroboscopes to generate flicker, with the aim of testing the hypothesis of resonant synchronization in epilepsy, we took a large number of records from one another while looking at the brilliant flashing light…The tests were entirely satisfactory and in fact gave us much information which will be discussed later, but as well as that we all noticed a peculiar effect. The effect was a vivid illusion of moving patterns whenever one closed one’s eyes and allowed the flicker to shine through the eyelids.

Walter characterized these patterns as “whirling spirals, whirlpools, explosions, Catherine wheels,” quoting an evocative passage from a memoir by Margiad Evans, a poet who suffered from epilepsy:

I lay there holding the green thumbless hand of the leaf while things clicked and machinery came to life, and commands to gasp, to open and shut my eyes, reached me from across the unseen room, as though by wireless. Lights like comets dangled before me, slow at first and then gaining a fury of speed and change, whirling color into color, angle into angle. They were all pure ultra unearthly colors, mental colors, not deep visual ones. There was no glow in them but only activity and revolution.

After investigating further, Walter concluded that the imagery wasn’t an optical illusion caused by the light, but a phenomenon that occurred within the eye or brain itself, and that it involved more than one sensory system. (Walter doesn’t mention this in particular, but after reading his description of “whirling spirals,” I was surprised that it hasn’t been more widely used to explain away the vision of the chariot—with its mysterious “wheel within a wheel”—of the prophet Ezekiel, who has been diagnosed with temporal lobe epilepsy.) And his work with strobe lights inspired a number of interested readers to try it out for themselves, although to rather different ends, in the fifties equivalent of neurohacking.

One was John W. Campbell, editor of Astounding Science Fiction. After reading The Living Brain, he wrote—but evidently never sent—a long letter to Walter himself, and he also built a “panic generator” with a flickering fluorescent tube in his basement workshop. (The idea of using flickering lights to induce hypnotism was a familiar one in the genre, and it had appeared in stories including Campbell’s short novel The Elder Gods and in L. Sprague de Camp’s “The Exalted.”) When he tried the device on his family, his wife’s throat tightened up, his stepson felt asthmatic, and his daughter’s head hurt, but it bothered Campbell for just ten seconds. He was, he proudly noted, “immune.” Writing to his father, he said that he thought that it might have therapeutic value:

The only way a human being exposed to this device can continue to think coherently is by shifting his method of thinking. He either changes his method—his frequency—or is hopelessly scrambled in panic. The device, however, doesn’t tell him to think; it simply forces him to think in some new manner. The result is that the problems he’s been denying existed, the ideas he’s been refusing to consider—all of these will now come into sight, and he’ll be forced to at least consider them. The one sure and certain thing is that he can not continue to think in the terms he has been!

This insight inspired one of Campbell’s best editorials, “The Value of Panic,” as well as a premise that he gave to G. Harry Stine, who wrote under the pen name Lee Correy. The resulting story, “Design Flaw,” was about an experimental rocket plane plagued by a series of accidents that turn out to be caused by a flashing screen that provides landing data, which accidentally interferes with the pilot’s alpha rhythms.

A few years afterward, Walter’s work had an even more striking afterlife, and it serves as a reminder of the surprising overlap in those decades between science fiction and the counterculture. On September 14, 1960, William S. Burroughs wrote enigmatically to his friend Brion Gysin: “Also will see Grey Walter when he returns from vacation.” He followed up two weeks later: “I heard Grey Walter. Most interesting and will make a flicker date with him in Bristol.” Burroughs also wrote to Walter directly about “possible therapeutic applications in drug addiction” and “the effect of flicker on the creative process,” neatly tying together the two major threads of his career. His interest in the flicker effect emerged from the same impulse that led to his ongoing dalliance with Scientology, and he often mentioned the two in the same breath in his letters. And it led Gysin and his collaborator Ian Sommerville to build the Dream Machine, a rotating cylinder with flashing slits that was viewed with the eyes closed. In an interview, Burroughs vividly described its effects: “Elaborate geometric constructions of incredible intricacy build up from multidimensional mosaic into living fireballs like the mandalas of Eastern mysticism or resolve momentarily into apparently individual images and powerfully dramatic scenes like brightly colored dreams.” And he closed in terms that echoed Margiad Evans, who had spoken of “lights like comets”:

“Flicker” creates a dazzling multiplicity of images in constantly altering relationships which makes the “collages” and “assemblages” of so-called “modern” art appear utterly ineffectual and slow. Art history is no longer being created. Art history as the enumeration of individual images ended with the direct introduction of light as the principal agents in the creation of images which have become infinitely multiple, complex and all-pervading. The comet is Light.

Sci-Fi and Si


In 1959, the newspaper magnate Samuel I. Newhouse allegedly asked his wife Mitzi what she wanted for their upcoming wedding anniversary. When she told him that she wanted Vogue, he bought all of Condé Nast. At the time, the publishing firm was already in negotiations to acquire the titles of the aging Street & Smith, and Newhouse, its new owner, inherited this transaction. Here’s how Carol Felsenthal describes the deal in Citizen Newhouse:

For $4 million [Newhouse] bought Charm, Living for Young Homemakers, and Mademoiselle. (Also included were five sports annuals, which he ignored, allowing them to continue to operate with a minimal staff and low-overhead offices—separate from Condé Nast’s—and to earn a small but steady profit.) He ordered that Charm be folded into Glamour. Living for Young Homemakers became House & Garden Guides. Mademoiselle was allowed to survive because its audience was younger and better educated than Glamour’s; Mademoiselle was aimed at the college girl, Glamour at the secretary.

Newhouse’s eldest son, who was known as Si, joined Glamour at the age of thirty-five, and within a few years, he was promoted to oversee all the company’s magazines. When he passed away yesterday, as his obituary in the Times notes, he was a media titan “who as the owner of The New Yorker, Vogue, Vanity Fair, Architectural Digest and other magazines wielded vast influence over American culture, fashion and social taste.”

What this obituary—and all the other biographies that I’ve seen—fails to mention is that when the Newhouses acquired Street & Smith, they also bought Astounding Science Fiction. In the context of two remarkably busy lives, this merits little more than a footnote, but it was a significant event in the career of John W. Campbell and, by extension, the genre as a whole. In practice, Campbell was unaffected by the change in ownership, and he joked that he employed Condé Nast to get his ideas out, rather than the other way around. (Its most visible impact was a brief experiment with a larger format, allowing the magazine to sell ads to technical advertisers that didn’t make printing plates in the smaller size, but the timing was lousy, and it was discontinued after two years.) But it also seems to have filled him with a sense of legitimacy. Campbell, like his father, had an uncritical admiration for businessmen—capitalism was the one orthodoxy that he took at face value—and from his new office in the Graybar Building on Lexington Avenue, he continued to identify with his corporate superiors. When Isaac Asimov tried to pick up a check at lunch, Campbell pinned his hand to the table: “Never argue with a giant corporation, Isaac.” And when a fan told him that he had written a story, but wasn’t sure whether it was right for the magazine, Campbell drew himself up: “And since when does the Condé Nast Publications, Incorporated pay you to make editorial decisions?” In fact, the change in ownership seems to have freed him up to make the title change that he had been contemplating for years. Shortly after the sale, Astounding became Analog, much to the chagrin of longtime fans.

Some readers discerned more sinister forces at work. In the memorial essay collection John W. Campbell: An Australian Tribute, the prominent fan Redd Boggs wrote: “What indulgent publisher is this who puts out and puts up with Campbell’s personal little journal, his fanzine?…One was astounded to see the magazine plunge along as hardily as ever after Condé Nast and Samuel I. Newhouse swallowed up and digested Street & Smith.” He went on to answer his own question:

We are making a mistake when we think of Analog as a science fiction magazine and of John W. Campbell as an editor. The financial backer or backers of Analog obviously do not think that way. They regard Analog first and foremost as a propaganda mill for the right wing, and Campbell as a propagandist of formidable puissance and persuasiveness. The stories, aside from those which echo Campbell’s own ideas, are only incidental to the magazine, the bait that lures the suckers. Analog’s raison d’être is Campbell’s editorials. If Campbell died, retired, or backslid into rationality, the magazine would fold instantly…

Campbell is a precious commodity indeed, a clever and indefatigable propagandist for the right wing, much superior in intelligence and persuasive powers to, say, William F. Buckley, and he works for bargain basement prices at that. And if our masters are as smart as I think they are…I feel sure that they would know how to cherish such heaven-sent gifts, even as I would.

This is an ingenious argument, and I almost want to believe it, if only because it makes science fiction seem as important as it likes to see itself. In reality, it seems likely that Si Newhouse barely thought about Analog at all, which isn’t to say that he wasn’t aware of it. His Times obituary notes: “He claimed to read every one of his magazines—they numbered more than fifteen—from cover to cover.” This conjures up the interesting image of Newhouse reading the first installment of Dune and the latest update on the Dean Drive, although it’s hard to imagine that he cared. Campbell—who must have existed as a wraith in the peripheral vision of Diana Vreeland of Vogue, who worked in the same building for nearly a decade—was allowed to run the magazine on his own, and it was tolerated as long as it remained modestly profitable. Newhouse’s own interests ran less to science fiction than toward what David Remnick describes as “gangster pictures, romantic comedies, film noir, silent comedies, the avant-garde.” (He did acquire Wired, but his most profound impact on our future was one that nobody could have anticipated—it was his idea to publish Donald Trump’s The Art of the Deal.) When you love science fiction, it can seem like nothing else matters, but it hardly registers in the life of someone like Newhouse. We don’t know what Campbell thought of him, but I suspect that he wished that they had been closer. Campbell wanted nothing more than to bring his notions, like psionics, to a wider audience, and he spent the last decade of his career with a publishing magnate within view but tantalizingly out of reach—and his name was even “Psi.”

The final problem


In 1966, Howard L. Applegate, an administrator for the science fiction manuscript collection at Syracuse University, wrote to the editor John W. Campbell to ask if he would be interested in donating his papers. Campbell replied that he no longer possessed most of the original files, and he concluded: “Sorry, but any scholarly would-be biographers are going to have a tough time finding any useful documentation on me! I just didn’t keep the records!” Fortunately for me, this statement wasn’t totally true—I’ve spent the last two years combing through thousands of pages of letters, magazines, and other documents to assemble a picture of Campbell’s life, and if anything, there’s more here than any one person can absorb. I haven’t read it all, but I feel confident that I’ve looked at more of it than anyone else alive, and I often relate to what Robin W. Winks writes in his introduction to the anthology The Historian as Detective:

Historians pose to themselves difficult, even impossibly difficult, questions. Since they are reasonably intelligent and inquiring and since they do not wish to spend their lives upon a single question or line of investigation, they normally impose a time limit upon a given project or book (or the time limit is imposed for them by a “publish or perish” environment). They will invariably encounter numerous unforeseen difficulties because of missing papers, closed collections, new questions, and tangential problems; and the search through the archive, the chase after the single hoped-to-be-vital manuscript, has an excitement of its own, for that dénouement, the discovery, an answer may—one always hopes—lie in the next folio, in the next collection, in the next archive.

My work is more modest in scale than that of most academic historians, but I can understand the importance of a deadline, the hope that the next page that I read will contain a crucial piece of information, and the need for impossible questions. When I first got my hands on the microfilm reels of Campbell’s letters, I felt as if I’d stumbled across a treasure trove, and I found a lot of fascinating material that I never would have discovered otherwise. As I worked my way through the images, one inch at a time, I kept an eye on how much I had left, and as it dwindled, I felt a sinking feeling at the thought that I might never find certain answers. In fact, I never did resolve a few important issues to my satisfaction—although perhaps that wasn’t the right way to approach this particular Nachlass. In his introduction, Winks draws a telling contrast between the American and the European schools of history:

With sufficient diligence American historians can expect to find the answer—or at least an answer—to most factual or non-value questions they may choose to put to themselves. As a result, American researchers tend to begin with the questions they wish to entertain first (Did failed farmers truly move West to begin life anew in the eighteen-forties? Did immigrants reinforce older patterns of life or create new ones?), confident that the data can be found. European historians, on the other hand, are likely to begin with the available source materials first, and then look to see what legitimate questions they might ask of those sources. (Here are the private papers of Joseph Chamberlain, or of Gladstone, or of Disraeli. What do they tell me of British politics? Of Queen Victoria? Of the Jameson Raid? Of the development of British tariff policy? Of Colonial affairs? Of Ireland?)

Winks’s point is that American scholars have the advantage when it comes to sources, since there are vast archives available for every state with materials dating back to their founding. In writing about the history of science fiction, which is its own country of the mind, I’ve found that the situation is closer to what he says about European historiography. I’m far from the first person to explore this material, and I’m astounded by the diligence, depth of experience, and mastery of the facts of the fans I’ve met along the way, who have saved me from countless mistakes. In some areas, I’ve also been fortunate enough to build on the efforts of previous scholars, like Sam Moskowitz, whose book The Immortal Storm was accurately described by the fan historian Harry Warner, Jr.: “If read directly after a history of World War II, it does not seem like an anticlimax.” (I’m similarly grateful for the work of the late William H. Patterson, who did for Heinlein what I’m hoping to do for Campbell, thereby relieving me of much of the necessity of going over the same ground twice.) But there were also times at which I had to start with the available resources and see what they had to offer me. A lot of it was tedious and unrewarding, as detective work undoubtedly is in the real world. As Winks writes:

Much of the historian’s work, then, like that of the insurance investigator, the fingerprint man, or the coroner, may to the outsider seem to consist of deadening routine. Many miles of intellectual shoe leather will be used, for many metaphorical laundry lists, uninformative diaries, blank checkbooks, old telephone directories, and other trivia will stand between the researcher and his answer. Yet the routine must be pursued or the clue may be missed; the apparently false trail must be followed in order to be certain that it is false; the mute witnesses must be asked the reasons for their silence, for the piece of evidence that is missing from where one might reasonably expect to find it is, after all, a form of evidence in itself.

And the real point of asking a question is less the possibility of an answer than the motivation that it provides for you to keep digging. Winks nicely evokes the world in which the historian lives:

Precisely because the historian must turn to all possible witnesses, he is the most bookish of men. For him, no printed statement is without its interest. For him, the destruction of old cookbooks, gazetteers, road maps, Sears Roebuck catalogues, children’s books, railway timetables, or drafts of printed manuscripts, is the loss of potential evidence. Does one wish to know how the mail-order business was operated or how a Nebraska farmer might have dressed in 1930? Look to those catalogues. Does one wish to know whether a man from Washington just might have been in New York on a day in 1861 when it can be proved that he was in the capital on the day before and the day after? The timetables will help tell us of the opportunity.

But it’s only with a specific question in mind that the historian—or biographer—will bother to seek out such arcana at all, and you’re often rewarded with something that has nothing to do with the reasons why you originally looked. (Sometimes you find it on the other side of the page.) Every setback that I’ve encountered in search of a specific piece of information has opened new doors, and a question is simply the story that we tell ourselves to justify the search. The image that I like to use isn’t a private eye, but the anonymous reporter Thompson in Citizen Kane, whose boss, the shadowy Mr. Rawlston, tells him to solve the mystery of Kane’s last words: “See ‘em all! Get in touch with everybody that ever worked for him, whoever loved him, whoever hated his guts. I don’t mean go through the city directory, of course.” But that’s what you wind up doing. And as I near the end of this book, I’m haunted by what Rawlston says just before we cut to the lightning flash that illuminates the face of Susan Alexander: “It’ll probably turn out to be a very simple thing.”
