Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘The New Yorker’

The slow road to the stars

with 3 comments

In the 1980 edition of The Whole Earth Catalog, which is one of the two or three books that I’d bring with me to a desert island, or to the moon, the editor Stewart Brand devotes three pages toward the beginning to the subject of space colonies. Most of the section is taken up by an essay, “The Sky Starts at Your Feet,” in which Brand relates why he took such an interest in an idea that seemed far removed from the hippie concerns with which his book—fairly or not—had always been associated. And his explanation is a fascinating one:

What got me interested in space colonies a few years ago was a chance remark by a grade school teacher. She said that most of her kids expected to live in space. All their lives they’d been seeing Star Trek and American and Russian space activities and drew the obvious conclusions. Suddenly I felt out of it. A generation that grew up with space, I realized, was going to lead to another generation growing up in space. Where did that leave me?

On the next page, Brand draws an even more explicit connection between space colonization and the rise of science fiction in the mainstream: “Most science fiction readers—there are estimated to be two million avid ones in the U.S.—are between the ages of 12 and 26. The first printing for a set of Star Trek blueprints and space cadet manual was 450,000. A Star Trek convention in Chicago drew 15,000 people, and a second one a few weeks later drew 30,000. They invited NASA officials and jammed their lectures.”

This sense of a growing movement left a huge impression on Brand, whose career as an activist had started with a successful campaign to get NASA to release the first picture of the whole earth taken from space. He concludes: “For these kids there’s been a change in scope. They can hold the oceans of the world comfortably in their minds, like large lakes. Space is the ocean now.” And he clearly understands that his real challenge will be to persuade a slightly older cohort of “liberals and environmentalists”—his own generation—to sign on. In typical fashion, Brand doesn’t stress just the practical side, but the new modes of life and thought that space colonization would require. Here’s my favorite passage:

In deemphasizing the exotic qualities of life in space [Gerard] O’Neill is making a mistake I think. People want to go not because it may be nicer than what they have on earth but because it will be harder. The harshness of space will oblige a life-and-death reliance on each other which is the sort of thing that people romanticize and think about endlessly but seldom get to do. This is where I look for new cultural ideas to emerge. There’s nothing like an impossible task to pare things down to essentials—from which comes originality. You can only start over from basics, and, once there, never quite in the same direction as before.

Brand also argues that the colonization project is “so big and so slow and so engrossing” that it will force the rest of civilization to take everything more deliberately: “If you want to inhabit a moon of Jupiter—that’s a reasonable dream now—one of the skills you must cultivate is patience. It’s not like a TV set or a better job—apparently cajolable from a quick politician. Your access to Jupiter has to be won—at its pace—from a difficult solar system.”

And the seemingly paradoxical notion of slowing down the pace of society is a big part of why Brand was so drawn to O’Neill’s vision of space colonies. Brand had lived through a particularly traumatic period in what the business writer Peter Drucker called “the age of discontinuity,” and he expressed strong reservations about the headlong rush of societal change:

The shocks of this age are the shocks of pace. Change accelerates around us so rapidly that we are strangers to our own pasts and even more to our futures. Gregory Bateson comments, “I think we could have handled the industrial revolution, given five hundred years.” In one hundred years we have assuredly not handled it…I feel serene when I can comfortably encompass two weeks ahead. That’s a pathological condition.

Brand’s misgivings are remarkably similar to what John W. Campbell was writing in Astounding in the late thirties: “The conditions [man] tries to adjust to are going to change, and change so darned fast that he never will actually adjust to a given set of conditions. He’ll have to adjust in a different way: he’ll adjust to an environment of change.” Both Brand and Campbell also believed, in the words of the former, that dealing with this challenge would somehow involve “the move of some of humanity into space.” It would force society as a whole to slow down, in a temporal equivalent of the spatial shift in perspective that environmentalists hoped would emerge from the first photos of the whole earth. Brand speaks of it as a project on the religious scale, and he closes: “Space exploration is grounded firmly on the abyss. Space is so impossible an environment for us soft, moist creatures that even with our vaulting abstractions we will have to move carefully, ponderously into that dazzling vacuum. The stars can’t be rushed. Whew, that’s a relief.”

Four decades later, it seems clear that the movement that Brand envisioned never quite materialized, although it also never really went away. Part of this has to do with the fact that many members of the core audience of The Whole Earth Catalog turned out to be surprisingly hostile to the idea. (Tomorrow, I’ll be taking a look at Space Colonies, a special issue of the magazine CoEvolution Quarterly that captures some of the controversy.) But the argument for space colonization as a means of applying the brakes to the relentless movement of civilization seems worth reviving, simply because it feels so counterintuitive. It certainly doesn’t seem like part of the conversation now. We’ve never gotten rid of the term “space race,” which is more likely to be applied these days to the perceived competition between private companies, as in a recent article in The New Yorker, in which Nicholas Schmidle speaks of Blue Origin, SpaceX, and Virgin Galactic as three startups “racing to build and test manned rockets.” When you privatize space, the language that you use to describe it inevitably changes, along with the philosophical challenges that it evokes. A recent book on the subject is titled The Space Barons: Elon Musk, Jeff Bezos, and the Quest to Colonize the Cosmos, which returns to the colonial terminology that early opponents of O’Neill’s ideas found so repellent. The new space race seems unlikely to generate the broader cultural shift that Brand envisioned, largely because we’ve outsourced it to charismatic billionaires who seem unlikely to take anything slowly. But perhaps even the space barons themselves can sense the problem. In the years since he wrote “The Sky Starts at Your Feet,” Brand has moved on to other causes to express the need for mankind to take a longer view. The most elegant and evocative is the Clock of the Long Now, which is designed to keep time for the next ten thousand years. After years of development, it finally seems to be coming together, with millions of dollars of funding from a billionaire who will house it on land that he owns in Texas. His name is Jeff Bezos.

The chosen ones

with one comment

In his recent New Yorker profile of Mark Zuckerberg, Evan Osnos quotes one of the Facebook founder’s close friends: “I think Mark has always seen himself as a man of history, someone who is destined to be great, and I mean that in the broadest sense of the term.” Zuckerberg feels “a teleological frame of feeling almost chosen,” and in his case, it happened to be correct. Yet this tells us almost nothing about Zuckerberg himself, because I can safely say that most other undergraduates at Harvard feel the same way. A writer for The Simpsons once claimed that the show had so many presidential jokes—like the one about Grover Cleveland spanking Grandpa “on two non-consecutive occasions”—because most of the writers secretly once thought that they would be president themselves, and he had a point. It’s very hard to do anything interesting in life without the certainty that you’re somehow one of the chosen ones, even if your estimation of yourself turns out to be wildly off the mark. (When I was in my twenties, my favorite point of comparison was Napoleon, while Zuckerberg seems to be more fond of Augustus: “You have all these good and bad and complex figures. I think Augustus is one of the most fascinating. Basically, through a really harsh approach, he established two hundred years of world peace.”) This kind of conviction is necessary for success, although hardly sufficient. The first human beings to walk on Mars may have already been born. Deep down, they know it, and this knowledge will determine their decisions for the rest of their lives. Of course, thousands of others “know” it, too. And just a few of them will turn out to be right.

One of my persistent themes on this blog is how we tend to confuse talent with luck, or, more generally, to underestimate the role that chance plays in success or failure. I never tire of quoting the psychologist Daniel Kahneman, who in Thinking, Fast and Slow shares what he calls his favorite equation:

Success = Talent + Luck
Great Success = A little more talent + A lot of luck
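
It’s easy to make “a lot of luck” concrete with a quick simulation. This is an illustrative sketch of my own, not anything from Kahneman’s book; the normal distributions and the equal weighting of talent and luck are assumptions chosen purely for simplicity. Give everyone in a large population a random talent score and a random luck score, add them together, and look at who lands in the extreme tail:

import random

random.seed(42)

N = 100_000
people = []
for _ in range(N):
    talent = random.gauss(0, 1)  # ability, drive, skill
    luck = random.gauss(0, 1)    # everything outside your control
    people.append((talent + luck, talent, luck))

people.sort(reverse=True)  # rank everyone by total "success"

top = people[:N // 1000]  # the top 0.1 percent: "great success"
avg_talent = sum(t for _, t, _ in top) / len(top)
avg_luck = sum(l for _, _, l in top) / len(top)
print(f"top 0.1%: talent {avg_talent:+.2f} sigma, luck {avg_luck:+.2f} sigma")

In runs like this, the people in the very top slice average more than two standard deviations of both talent and luck, which is exactly Kahneman’s point: at that altitude, talent alone was never going to be enough.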

The truth of Kahneman’s equation seems incontestable. Yet we’re all reluctant to acknowledge its power in our own lives, and this tendency only increases as the roles played by luck and privilege assume a greater importance. This week has been bracketed by news stories about two men who embody this attitude at its most extreme. On the one hand, you have Brett Kavanaugh, a Yale legacy student who seems unable to recognize that his drinking and his professional success weren’t mutually exclusive, but closer to the opposite. He occupied a cultural and social stratum that gave him the chance to screw up repeatedly without lasting consequences, and we’re about to learn how far that privilege truly extends. On the other hand, you have yesterday’s New York Times exposé of Donald Trump, who took hundreds of millions of dollars from his father’s real estate empire—often in the form of bailouts for his own failed investments—while constantly describing himself as a self-made billionaire. This is hardly surprising, but it’s still striking to see the extent to which Fred Trump played along with his son’s story. He understood the value of that myth.

This gets at an important point about privilege, no matter which form it takes. We have a way of visualizing these matters in spatial terms—“upper class,” “lower class,” “class pyramid,” “rising,” “falling,” or “stratum” in the sense that I used it above. But true privilege isn’t spatial, but temporal. It unfolds over time, by giving its beneficiaries more opportunities to fail and recover, while those living at the edge might not be able to come back from the slightest misstep. We like to say that a privileged person is someone who was born on third base and thinks he hit a triple, but it’s more like being granted unlimited turns at bat. Kavanaugh provides a vivid reminder, in case we needed one, that a man who fits a certain profile has the freedom to make all kinds of mistakes, the smallest of which would be fatal for someone who didn’t look like he did. And this doesn’t just apply to drunken misbehavior, criminal or otherwise, but even to the legitimate failures that are necessary for the vast majority of us to achieve real success. When you come from the right background, it’s easier to survive for long enough to benefit from the effects of luck, which influences the way that we talk about failure itself. Silicon Valley speaks of “failing faster,” which only makes sense when the price of failure is humiliation or the loss of investment capital, not falling permanently out of the middle class. And as I’ve noted before, Pixar’s creative philosophy, which Andrew Stanton described as a process in which “the films still suck for three out of the four years it takes to make them,” is only practicable for filmmakers who look and sound like their counterparts at the top, which grants them the necessary creative freedom to fail repeatedly—a luxury that women are rarely granted.

This may all come across as unbelievably depressing, but there’s a silver lining, and it took me years to figure it out. The odds of succeeding in any creative field—which includes nearly everything in which the standard career path isn’t clearly marked—are minuscule. Few who try will ever make it, even if they have “a teleological frame of feeling almost chosen.” This isn’t due to a lack of drive or talent, but of time and second chances. When you combine the absence of any straightforward instructions with the crucial role played by luck, you get a process in which repeated failure over a long period is almost inevitable. Those who drop out don’t suffer from weak nerves, but from the fact that they’ve used up all of their extra lives. Privilege allows you to stay in the game long enough for the odds to turn in your favor, and if you’ve got it, you may as well use it. (An Ivy League education doesn’t guarantee success, but it drastically increases your ability to stick around in the middle class in the meantime.) In its absence, you can find strategies for minimizing risk in small ways while increasing it on the highest levels, which is just another word for becoming a bohemian. And the big takeaway here is that since the probability of success is already so low, you may as well do exactly what you want. It can be tempting to tailor your work to the market, reasoning that it will increase your chances ever so slightly, but in reality, the difference is infinitesimal. An objective observer would conclude that you’re not going to make it either way, and even if you do, it will take about the same amount of time to succeed by selling out as it would by staying true to yourself. You should still do everything that you can to make the odds more favorable, but if you’re probably going to fail anyway, you might as well do it on your own terms. And that’s the only choice that matters.

Written by nevalalee

October 3, 2018 at 8:59 am

A better place

with 2 comments

Note: Spoilers follow for the first and second seasons of The Good Place.

When I began watching The Good Place, I thought that I already knew most of its secrets. I had missed the entire first season, and I got interested in it mostly due to a single review by Emily Nussbaum of The New Yorker, which might be my favorite piece so far from one of our most interesting critics. Nussbaum has done more than anyone else in the last decade to elevate television criticism into an art in itself, and this article—with its mixture of the critical, personal, and political—displays all her strengths at their best. Writing of the sitcom’s first season finale, which aired the evening before Trump’s inauguration, Nussbaum says: “Many fans, including me, were looking forward to a bit of escapist counterprogramming, something frothy and full of silly puns, in line with the first nine episodes. Instead, what we got was the rare season finale that could legitimately be described as a game-changer, vaulting the show from a daffy screwball comedy to something darker, much stranger, and uncomfortably appropriate for our apocalyptic era.” Following that grabber of an opening, she continues with a concise summary of the show’s complicated premise:

The first episode is about a selfish American jerk, Eleanor (the elfin charmer Kristen Bell), who dies and goes to Heaven, owing to a bureaucratic error. There she is given a soul mate, Chidi (William Jackson Harper), a Senegal-raised moral philosopher. When Chidi discovers that Eleanor is an interloper, he makes an ethical leap, agreeing to help her become a better person…Overseeing it all was Michael, an adorably flustered angel-architect played by Ted Danson; like Leslie Knope, he was a small-town bureaucrat who adored humanity and was desperate to make his flawed community perfect.

There’s a lot more involved, of course, and we haven’t even mentioned most of the other key players. It’s an intriguing setup for a television show, and it might have been enough to get me to watch it on its own. Yet what really caught my attention was Nussbaum’s next paragraph, which includes the kind of glimpse into a critic’s writing life that you only see when emotions run high: “After watching nine episodes, I wrote a first draft of this column based on the notion that the show, with its air of flexible optimism, its undercurrent of uplift, was a nifty dialectical exploration of the nature of decency, a comedy that combined fart jokes with moral depth. Then I watched the finale. After the credits rolled, I had to have a drink.” She then gives away the whole game, which I’m obviously going to do here as well. You’ve been warned:

In the final episode, we learn that it was no bureaucratic mistake that sent Eleanor to Heaven. In fact, she’s not in Heaven at all. She’s in Hell—which is something that Eleanor realizes, in a flash of insight, as the characters bicker, having been forced as a group to choose two of them to be banished to the Bad Place. Michael is no angel, either. He’s a low-ranking devil, a corporate Hell architect out on his first big assignment, overseeing a prankish experimental torture cul-de-sac. The malicious chuckle that Danson unfurls when Eleanor figures it out is both terrifying and hilarious, like a clap of thunder on a sunny day. “Oh, God!” he growls, dropping the mask. “You ruin everything, you know that?”

That’s a legitimately great twist, and when I suggested to my wife—who didn’t know anything about it—that we check it out on Netflix, it was partially so that I could enjoy her surprise at that moment, like a fan of A Song of Ice and Fire eagerly watching an unsuspecting friend during the Red Wedding.

Yet I was the one who really got fooled. The Good Place became my favorite sitcom since Community, and for almost none of the usual reasons. It’s very funny, of course, but I find that the jokes land about half the time, and it settles for what Nussbaum describes as “silly puns” more often than it probably should. Many episodes are closer to freeform comedy—the kind in which the riffs have less to do with context than with whatever the best pitch happened to be in the writers room—than to the clockwork farce to which it ought to aspire. But its flaws don’t really matter. I haven’t been so involved with the characters on a series like this in years, which allows it to take risks and get away with formal experiments that would destroy a lesser show. After the big revelation in the first season finale, it repeatedly blew up its continuity, with Michael resetting the memories of the others and starting over whenever they figured out his plan, but somehow, it didn’t leave me feeling jerked around. This is partially thanks to how the show cleverly conflates narrative time with viewing time, which is one of the great unsung strengths of the medium. (When the second season finally gets on track, these “versions” of the characters have only known one another for a couple of weeks, but every moment is enriched by our memories of their earlier incarnations. It’s a good trick, but it’s not so different from the realization, for example, that all of the plot twists and relationships of the first two seasons of Twin Peaks unfolded over less than a month.) It also speaks to the talent of the cast, which consistently rises to every challenge. And it does a better job of telling a serialized story than any sitcom that I can remember. Even while I was catching up with it, I managed to parcel it out over time, but I can also imagine binging an entire season at one sitting. That’s mostly due to the fact that the writers are masters of structure, if not always at filling the spaces between act breaks, but it’s also because the stakes are literally infinite.

And the stakes apply to all of us. It’s hard to come away from The Good Place without revisiting some of your assumptions about ethics, the afterlife, and what it means to be a good person. (The inevitable release of The Good Place and Philosophy might actually be worth reading.) I’m more aware of how much I’ve internalized the concept of “moral desert,” or the notion that good behavior will be rewarded, which we should all know by now isn’t true. In its own unpretentious way, the series asks its viewers to contemplate the problem of how to live when there might not be a prize awaiting us at the end. It’s the oldest question imaginable, but it seems particularly urgent these days, and the show’s answers are more optimistic than we have any right to expect. Writing just a few weeks after the inauguration, Nussbaum seems to project some of her own despair onto creator Michael Schur:

While I don’t like to read the minds of showrunners—or, rather, I love to, but it’s presumptuous—I suspect that Schur is in a very bad mood these days. If [Parks and Recreation] was a liberal fantasia, The Good Place is a dystopian mindfork: it’s a comedy about the quest to be moral even when the truth gets bent, bullies thrive, and sadism triumphs…Now that his experiment has crashed, [the character of] Michael plans to erase the ensemble’s memories and reboot. The second season—presuming the show is renewed (my mouth to God’s ear)—will start the same scheme from scratch. Michael will make his afterlife Sims suffer, no matter how many rounds it takes.

Yet the second season hinges on an unlikely change of heart. Michael comes to care about his charges—he even tries to help them escape to the real Good Place—and his newfound affection doesn’t seem like another mislead. I’m not sure if I believe it, but I’m still grateful. It isn’t a coincidence that Michael shares his name with the show’s creator, and I’d like to think that Schur ended up with a kinder version of the series than he may have initially envisioned. Like Nussbaum, he tore up the first draft and started over. Life is hard enough as it is, and the miracle of The Good Place is that it takes the darkest view imaginable of human nature, and then it gently hints that we might actually be capable of becoming better.

Written by nevalalee

September 27, 2018 at 8:39 am

The sin of sitzfleisch

leave a comment »

Yesterday, I was reading the new profile of Mark Zuckerberg by Evan Osnos in The New Yorker when I came across one of my favorite words. It appears in a section about Zuckerberg’s wife, Priscilla Chan, who describes her husband’s reaction to the recent controversies that have swirled around Facebook:

When I asked Chan about how Zuckerberg had responded at home to the criticism of the past two years, she talked to me about Sitzfleisch, the German term for sitting and working for long periods of time. “He’d actually sit so long that he froze up his muscles and injured his hip,” she said.

Until now, the term sitzfleisch, which literally means “sitting flesh,” or buttocks, was perhaps most widely known in chess, in which it evokes the kind of stoic, patient endurance capable of winning games by making one plodding move after another, but you sometimes see it in other contexts as well. Just two weeks ago, Paul Joyce, a lecturer in German at Portsmouth University, was quoted in an article by the BBC: “It’s got a positive sense, [it] positively connotes a sense of endurance, reliability, not just flitting from one place to another, but it is also starting to be questioned as to whether it matches the experience of the modern world.” Which makes it all the more striking to hear it applied to Zuckerberg, whose life’s work has been the systematic construction of an online culture that makes such virtues seem obsolete.

The concept of sitzfleisch is popular among writers—Elizabeth Gilbert has a nice blog post on the subject—but it also has its detractors. A few months ago, I posted a quote from Twilight of the Idols in which Friedrich Nietzsche comes out strongly against the idea. Here’s the full passage, which appears in a section of short maxims and aphorisms:

On ne peut penser et écrire qu’assis (G. Flaubert). Now I’ve got you, you nihilist! Sitting still [sitzfleisch] is precisely the sin against the holy ghost. Only thoughts which come from walking have any value.

The line attributed to Flaubert, which can be translated as “One can think and write only when sitting down,” appears to come from a biographical sketch by Guy de Maupassant. When you read it in context, you can see why it irritated Nietzsche:

From his early infancy, the two distinctive traits of [Flaubert’s] nature were great ingenuousness and a dislike of physical action. All his life he remained ingenuous and sedentary. He could not see any one walking or moving about near him without becoming exasperated; and he would declare in his sharp voice, sonorous and always a little theatrical, that motion was not philosophical. “One can think and write only when seated,” he would say.

On some level, Nietzsche’s attack on sitzfleisch feels like a reaction against his own inescapable habits—he can hardly have written any of his books without the ability to sit in solitude for long periods of time. I’ve noted elsewhere that the creative life has to be conducted both while seated and while engaging in other activities, and that your course of action at any given moment can be guided by whether or not you happen to be sitting down. And it can be hard to strike the right balance. We have to spend time at a desk in order to write, but we often think better by walking, going outside, and pointedly not checking Facebook. In the recent book Nietzsche and Montaigne, the scholar Robert Miner writes:

Both Montaigne and Nietzsche strongly favor mobility over sedentariness. Montaigne is a “sworn enemy” of “assiduity (assiduité)” who goes “mostly on horseback, where my thoughts range most widely.” Nietzsche too finds that “assiduity (Sitzfleisch) is the sin against the Holy Spirit” but favors walking rather than riding. As Dahlkvist observes, Nietzsche may have been inspired by Beethoven’s habit of walking while composing, which he knew about from his reading of Henri Joly’s Psychologie des grands hommes.

That’s possible, but it also reflects the personal experience of any writer, who is often painfully aware of the contradiction of trying to say something about life while spending most of one’s time alone.

And Nietzsche’s choice of words is also revealing. In describing sitzfleisch as a sin against the Holy Ghost, he might have just been looking for a colorful phrase, or making a pun on a “sin of the flesh,” but I suspect that it went deeper. In Catholic dogma, a sin against the Holy Ghost is specifically one of “certain malice,” in which the sinner acts on purpose, repeatedly, and in full knowledge of his or her crime. Nietzsche, who was familiar with Thomas Aquinas, might have been thinking of what the Summa Theologica has to say on the subject:

Augustine, however…says that blasphemy or the sin against the Holy Ghost, is final impenitence when, namely, a man perseveres in mortal sin until death, and that it is not confined to utterance by word of mouth, but extends to words in thought and deed, not to one word only, but to many…Hence they say that when a man sins through weakness, it is a sin “against the Father”; that when he sins through ignorance, it is a sin “against the Son”; and that when he sins through certain malice, i.e. through the very choosing of evil…it is a sin “against the Holy Ghost.”

Sitzfleisch, in short, is the sin of those who should know better. It’s the special province of philosophers, who know exactly how badly they fall short of ordinary human standards, but who have no choice if they intend to publish “not one word only, but many.” Solitary work is unhealthy, even inhuman, but it can hardly be avoided if you want to write Twilight of the Idols. As Nietzsche notes elsewhere in the same book: “To live alone you must be an animal or a god—says Aristotle. He left out the third case: you must be both—a philosopher.”

The electric dream

with 4 comments

There’s no doubt who got me off originally and that was A.E. van Vogt…The basic thing is, how frightened are you of chaos? And how happy are you with order? Van Vogt influenced me so much because he made me appreciate a mysterious chaotic quality in the universe that is not to be feared.

—Philip K. Dick, in an interview with Vertex

I recently finished reading I Am Alive and You Are Dead, the French author Emmanuel Carrère’s novelistic biography of Philip K. Dick. In an article last year about Carrère’s work, James Wood of The New Yorker called it “fantastically engaging,” noting: “There are no references and very few named sources, yet the material appears to rely on the established record, and is clearly built from the same archival labor that a conventional biographer would perform.” It’s very readable, and it’s one of the few such biographies—along with James Tiptree, Jr. by Julie Phillips and a certain upcoming book—aimed at an intelligent audience outside the fan community. Dick’s life also feels relevant now in ways that we might not have anticipated two decades ago, when the book was first published in France. He’s never been as central to me as he has been for many other readers, mostly because of the accidents of my reading life, and I’ve only read a handful of his novels and stories. I’m frankly more drawn to his acquaintance and occasional correspondent Robert Anton Wilson, who ventured into some of the same dark places and returned with his sanity more or less intact. (One notable difference between the two is that Wilson was a more prolific experimenter with psychedelic drugs, which Dick, apart from one experience with LSD, appears to have avoided.) But no other writer, with one notable exception that I’ll mention below, has done a better job of forcing us to confront the possibility that our understanding of the world might be fatally flawed. And it’s quite possible that he serves as a better guide to the future than any of the more rational writers who populated the pages of Astounding.

What deserves to be remembered about Dick, though, is that he loved the science fiction of the golden age, and he’s part of an unbroken chain of influence that goes back to the earliest days of the pulps. In I Am Alive and You Are Dead, Carrère writes of Dick as a young boy: “He collected illustrated magazines with titles like Astounding and Amazing and Unknown, and these periodicals, in the guise of serious scientific discussion, introduced him to lost continents, haunted pyramids, ships that vanished mysteriously in the Sargasso Sea.” (Carrère, weirdly, puts a superfluous exclamation point at the end of the titles of all these magazines, which I’ve silently removed in these quotations.) Dick continued to collect pulps throughout his life, keeping the most valuable issues in a fireproof safe at his house in San Rafael, California, which was later blown open in a mysterious burglary. Throughout his career, Dick refers casually to classic stories with an easy familiarity that suggests a deep knowledge of the genre, as in a line from his Exegesis, in which he mentions “that C.L. Moore novelette in Astounding about the two alternative futures hinging on which of two girls the guy marries in the present.” But the most revealing connection lies in plain sight. In a section on Dick’s early efforts in science fiction, Carrère writes:

Stories about little green men and flying saucers…were what he was paid to write, and the most they offered in terms of literary recognition was comparison to someone like A.E. van Vogt, a writer with whom Phil had once been photographed at a science fiction convention. The photo appeared in a fanzine above the caption “The Old and the New.”

Carrère persistently dismisses van Vogt as a writer of “space opera,” which might be technically true, though hardly the whole story. Yet van Vogt was also the most convincing precursor that Dick ever had. The World of Null-A may be stylistically cruder than Dick at his best, but it also appeared in Astounding in 1945, and it remains so hallucinatory, weird, and undefinable that I still have trouble believing that it was read by twelve-year-olds. (As Dick once said of it in an interview: “All the parts of that book do not add up; all the ingredients did not make a coherency. Now some people are put off by that. They think it’s sloppy and wrong, but the thing that fascinated me so much was that this resembled reality more than anybody else’s writing inside or outside science fiction.”) Once you see the almost apostolic line of succession from van Vogt to Alfred Bester to Dick, the latter seems less like an anomaly within the genre than like an inextricable part of its fabric. Although he only sold one short story, “Impostor,” to John W. Campbell, Dick continued to submit to him for years, before concluding that it wasn’t the best use of his time. As Eric Leif Davin recounts in Partners in Wonder: “[Dick] said he’d rather write several first-draft stories for one cent a word than spend time revising a single story for Campbell, despite the higher pay.” And Dick recalled in his collection The Minority Report:

Horace Gold at Galaxy liked my writing whereas John W. Campbell, Jr. at Astounding considered my writing not only worthless but as he put it, “Nuts.” By and large I liked reading Galaxy because it had the broadest range of ideas, venturing into the soft sciences such as sociology and psychology, at a time when Campbell (as he once wrote me!) considered psionics a necessary premise for science fiction. Also, Campbell said, the psionic character in the story had to be in charge of what was going on.

As a result, the two men never worked closely together, although Dick had surprising affinities with the editor who believed wholeheartedly in psionics, precognition, and genetic memory, and whose magazine never ceased to play a central role in his inner life. In his biography, Carrère provides an embellished version of a recurring dream that Dick had at the age of twelve, “in which he found himself in a bookstore trying to locate an issue of Astounding that would complete his collection.” As Dick describes it in his autobiographical novel VALIS:

In the dream he again was a child, searching dusty used-book stores for rare old science fiction magazines, in particular Astoundings. In the dream he had looked through countless tattered issues, stacks upon stacks, for the priceless serial entitled “The Empire Never Ended.” If he could find it and read it he would know everything; that had been the burden of the dream.

Years later, the phrase “the empire never ended” became central to Dick’s late conviction that we were all living, without our knowledge, in the Rome of the Acts of the Apostles. But the detail that sticks with me the most is that the magazines in the dream were “in particular Astoundings.” The fan Peter Graham famously said that the real golden age of science fiction was twelve, and Dick reached that age at the end of 1940, at the peak of Campbell’s editorship. The timing was perfect for Astounding to rewire his brain forever. When Dick first had his recurring dream, he would have just finished reading a “priceless serial” that had appeared in the previous four issues of the magazine, and I’d like to think that he spent the rest of his life searching for its inconceivable conclusion. It was van Vogt’s Slan.

My ten creative books #10: A Guide for the Perplexed

with 4 comments

Note: I’m counting down ten books that have influenced the way that I think about the creative process, in order of the publication dates of their first editions. It’s a very personal list that reflects my own tastes and idiosyncrasies, and I’m always looking for new recommendations. You can find the earlier installments here.

As regular readers know, I’m a Werner Herzog fan, but not a completist—I’ve seen maybe five of his features and three or four of his documentaries, which leaves a lot of unexplored territory, and I’m not ashamed to admit that Woyzeck put me to sleep. Yet Herzog himself is endlessly fascinating. Daniel Zalewski’s account of the making of Rescue Dawn is one of my five favorite articles ever to appear in The New Yorker, and if you’re looking for an introduction to his mystique, there’s no better place to start. For a deeper dive, you can turn to A Guide for the Perplexed, an expanded version of a collection of the director’s interviews with Paul Cronin, which was originally published more than a decade ago. As I’ve said here before, I regret the fact that I didn’t pick up the first edition when I had the chance, and I feel that my life would have been subtly different if I had. Not only is it the first book I’d recommend to anyone considering a career in filmmaking, it’s almost the first book I’d recommend to anyone considering a career in anything at all. It’s huge, but every paragraph explodes with insight, and you can open it to any page and find yourself immediately transfixed. Here’s one passage picked at random:

Learn to live with your mistakes. Study the law and scrutinize contracts. Expand your knowledge and understanding of music and literature, old and modern. Keep your eyes open. That roll of unexposed celluloid you have in your hand might be the last in existence, so do something impressive with it. There is never an excuse not to finish a film. Carry bolt cutters everywhere.

Or take Herzog’s description of his relationship with his cinematographer: “Peter Zeitlinger is always trying to sneak ‘beautiful’ shots into our films, and I’m forever preventing it…Things are more problematic when there is a spectacular sunset on the horizon and he scrambles to set up the camera to film it. I immediately turn the tripod 180 degrees in the other direction.”

And this doesn’t even touch on Herzog’s stories, which are inexhaustible. He provides his own point of view on many famous anecdotes, like the time he was shot on camera while being interviewed by the BBC—the bullet was stopped by a catalog in his jacket pocket, and he asked to keep going—or how he discouraged Klaus Kinski from abandoning the production of Aguirre: The Wrath of God. (“I told him I had a rifle…and that he would only make it as far as the next bend in the river before he had eight bullets in his head. The ninth would be for me.”) We see Herzog impersonating a veterinarian at the airport to rescue the monkeys that he needed for Aguirre; forging an impressive document over the signature of the president of Peru to gain access to locations for Fitzcarraldo; stealing his first camera; and shooting oil fires in Kuwait under such unforgiving conditions that the microphone began to melt. Herzog is his own best character, and he admits that he can sometimes become “a clown,” but his example is enough to sustain and nourish the rest of us. In On Directing Film, David Mamet writes:

But listen to the difference between the way people talk about films by Werner Herzog and the way they talk about films by Frank Capra, for example. One of them may or may not understand something or other, but the other understands what it is to tell a story, and he wants to tell a story, which is the nature of dramatic art—to tell a story. That’s all it’s good for.

Herzog, believe it or not, would agree, and he recommends Casablanca and The Treasure of the Sierra Madre as examples of great storytelling. And the way in which Herzog and Capra’s reputations have diverged since Mamet wrote those words, over twenty years ago, is illuminating in itself. A Guide for the Perplexed may turn out to be as full of fabrications as Capra’s own memoirs, but they’re the kind of inventions, like the staged moments in Herzog’s “documentaries,” that get at a deeper truth. As Herzog says of another great dreamer: “The difference between me and Don Quixote is, I deliver.”

The living wage

leave a comment »

Over the last few years, we’ve observed an unexpected resurgence of interest in the idea of a universal basic income. The underlying notion is straightforward enough, as Nathan Heller summarizes it in a recent article in The New Yorker:

A universal basic income, or U.B.I., is a fixed income that every adult—rich or poor, working or idle—automatically receives from government. Unlike today’s means-tested or earned benefits, payments are usually the same size, and arrive without request…In the U.S., its supporters generally propose a figure somewhere around a thousand dollars a month: enough to live on—somewhere in America, at least—but not nearly enough to live on well.

This concept—which Heller characterizes as “a government check to boost good times or to guard against starvation in bad ones”—has been around for a long time. As one possible explanation for its current revival, Heller suggests that it amounts to “a futurist reply to the darker side of technological efficiency” as robots replace existing jobs, with prominent proponents including Elon Musk and Richard Branson. And while the present political climate in America may seem unfavorable toward such proposals, it may not stay that way forever. As Annie Lowrey, the author of the new book Give People Money, recently said to Slate: “Now that Donald Trump was elected…people are really ticked off. In the event that there’s another recession, I think that the space for policymaking will expand even more radically, so maybe it is a time for just big ideas.”

These ideas are certainly big, but they aren’t exactly new, and over the last century, they’ve attracted support from some surprising sources. One early advocate was the young Robert A. Heinlein, who became interested in one such scheme while working on the socialist writer Upton Sinclair’s campaign for the governorship of California in 1934. A decade earlier, a British engineer named C.H. Douglas had outlined a plan called Social Credit, which centered on the notion that the government should provide a universal dividend to increase the purchasing power of individuals. As the Heinlein scholar Robert James writes in his afterword to the novel For Us, the Living:

Heinlein’s version of Social Credit argues that banks constantly used the power of the fractional reserve to profit by manufacturing money out of thin air, by “fiat.” Banks were (and are) required by federal law to keep only a fraction of their total loans on reserve at any time; they could thus manipulate the money supply with impunity…If you took away that power from the banks by ending the fractional reserve system, and instead let the government do the exact same thing for the good of the people, you could permanently resolve the disparities between production and consumption. By simply giving people the amount of money necessary to spring over the gap between available production and the power to consume, you could end the boom and bust business cycle permanently, and free people to pursue their own interests.
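
The fractional reserve mechanism that James describes is simple enough to sketch in a few lines. What follows is a toy illustration of my own, with a made-up helper name and a round ten percent reserve requirement that are my assumptions rather than anything from Douglas or Heinlein: each deposit is loaned out except for the reserved fraction, the loan comes back as a new deposit somewhere else, and the total converges to the initial deposit divided by the reserve ratio.

def deposit_expansion(initial_deposit, reserve_ratio, rounds=100):
    # Textbook model: each bank keeps reserve_ratio of every deposit on hand
    # and lends out the rest, which is redeposited downstream.
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= 1 - reserve_ratio  # the lent-out portion becomes a new deposit
    return total

print(deposit_expansion(100, 0.10))  # ~1000: $100 of base money supports $1,000 of deposits

That is the arithmetic behind the charge that banks “manufacture money out of thin air,” and the scheme that Heinlein favored amounted to transferring the same power to the government for, as James puts it, “the good of the people.”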

And many still argue that a universal basic income could be accomplished, at least in part, by fiat currency. As Lowrey writes in her book: “Dollars are not something that the United States government can run out of.”

Heinlein addressed these issues at length in For Us, the Living, his first attempt at a novel, which, as I’ve noted elsewhere, miraculously transports a man from the present into the future mostly so that he can be subjected to interminable lectures on monetary theory. Here’s one mercifully short example, which sounds a lot like the version of basic income that you tend to hear today:

Each citizen receives a check for money, or what amounts to the same thing, a credit to each account each month, from the government. He gets this free. The money so received is enough to provide the necessities of life for an adult, or to provide everything that a child needs for its care and development. Everybody gets these checks—man, woman, and child. Nevertheless, practically everyone works pretty regularly and most people have incomes from three or four times to a dozen or more times the income they receive from the government.

Years later, Heinlein reused much of this material in his far superior novel Beyond This Horizon, which also features a man from our time who objects to the new state of affairs: “But the government simply gives away all this new money. That’s rank charity. It’s demoralizing. A man should work for what he gets. But forgetting that aspect for a moment, you can’t run a government that way. A government is just like a business. It can’t be all outgo and no income.” And after he remains unwilling to concede that a government and a business might serve different ends, another character politely suggests that he go see “a corrective semantician.”

At first, it might seem incongruous to hear these views from Heinlein, who later became a libertarian icon, but it isn’t as odd as it looks. For one thing, the basic concept has defenders from across the political spectrum, including the libertarian Charles Murray, who wants to replace the welfare state by giving ten thousand dollars a year directly to the people. And Heinlein’s fundamental priority—the preservation of individual freedom—remained consistent throughout his career, even if the specifics changed dramatically. The system that he proposed in For Us, the Living was meant to free people to do what they wanted with their lives:

Most professional people work regularly because they like to…Some work full time and some part time. Quite a number of people work for several eras and then quit. Some people don’t work at all—not for money at least. They have simple tastes and are content to live on their heritage, philosophers and mathematicians and poets and such. There aren’t many like that however. Most people work at least part of the time.

Twenty years later, Heinlein’s feelings had evolved in response to the Cold War, as he wrote to his brother Rex in 1960: “The central problem of today is no longer individual exploitation but national survival…and I don’t think we will solve it by increasing the minimum wage.” But such a basic income might also serve as a survival tactic in itself. As Heller writes in The New Yorker, depending on one’s point of view, it can either be “a clean, crisp way of replacing gnarled government bureaucracy…[or] a stay against harsh economic pressures now on the horizon.”
