Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘The New Yorker’

The temple of doom

Steven Spielberg on the set of Indiana Jones and the Temple of Doom

Note: I’m taking some time off for the holidays, so I’m republishing a few pieces from earlier in this blog’s run. This post originally appeared, in a slightly different form, on January 27, 2017.

I think America is going through a paroxysm of rage…But I think there’s going to be a happy ending in November.

Steven Spielberg, to Sky News, July 17, 2016

Last week, in an interview with the New York Times about the twenty-fifth anniversary of Schindler’s List and the expansion of the mission of The Shoah Foundation, Steven Spielberg said of this historical moment:

I think there’s a measurable uptick in anti-Semitism, and certainly an uptick in xenophobia. The racial divide is bigger than I would ever imagine it could be in this modern era. People are voicing hate more now because there’s so many more outlets that give voice to reasonable and unreasonable opinions and demands. People in the highest places are allowing others who would never express their hatred to publicly express it. And that’s been a big change.

Spielberg, it’s fair to say, remains the most quintessentially American of all directors, despite a filmography that ranges freely between cultures and seems equally comfortable in the past and in the future. He’s often called a mythmaker, and if there’s a place where his glossy period pieces, suburban landscapes, and visionary adventures meet, it’s somewhere in the nation’s collective unconscious: its secret reveries of what it used to be, what it is, and what it might be again. Spielberg country, as Stranger Things was determined to remind us, is one of small towns and kids on bikes, but it also still vividly remembers how it beat the Nazis, and it can’t resist turning John Hammond from a calculating billionaire into a grandfatherly, harmless dreamer. No other artist of the last half century has done so much to shape how we all feel about ourselves. He took over where Walt Disney left off. But what has he really done?

To put it in the harshest possible terms, it’s worth asking whether Spielberg—whose personal politics are impeccably liberal—is responsible in part for our current predicament. He taught the New Hollywood how to make movies that force audiences to feel without asking them to think, to encourage an illusion of empathy instead of the real thing, and to create happy endings that confirm viewers in their complacency. You can’t appeal to all four quadrants, as Spielberg did to a greater extent than anyone who has ever lived, without consistently telling people exactly what they want to hear. I’ve spoken elsewhere of how film serves as an exercise ground for the emotions, bringing us closer on a regular basis to the terror, wonder, and despair that many of us would otherwise experience only rarely. It reminds the middle class of what it means to feel pain or awe. But I worry that when we discharge these feelings at the movies, it reduces our capacity to experience them in real life, or, even more insidiously, makes us think that we’re more empathetic and compassionate than we actually are. Few movies have made viewers cry as much as E.T., and few have presented a dilemma further removed from anything a real person is likely to face. (Turn E.T. into an illegal alien being sheltered from a government agency, maybe, and you’d be onto something.) Nearly every film from the first half of Spielberg’s career can be taken as a metaphor for something else. But great popular entertainment has a way of referring to nothing but itself, in a cognitive bridge to nowhere, and his images are so overwhelming that it can seem superfluous to give them any larger meaning.

Steven Spielberg on the set of Jaws

If Spielberg had been content to be nothing but a propagandist, he would have been the greatest one who ever lived. (Hence, perhaps, his queasy fascination with the films of Leni Riefenstahl, who has affinities with Spielberg that make nonsense out of political or religious labels.) Instead, he grew into something that is much harder to define. Jaws, his second film, became the most successful movie ever made, and when he followed it up with Close Encounters, it became obvious that he was in a position with few parallels in the history of art—he occupied a central place in the culture and was also one of its most advanced craftsmen, at a younger age than Damien Chazelle is now. If you’re talented enough to assume that role and smart enough to stay there, your work will inevitably be put to uses that you never could have anticipated. It’s possible to pull clips from Spielberg’s films that make him seem like the cuddliest, most repellent reactionary imaginable, of the sort that once prompted Tony Kushner to say:

Steven Spielberg is apparently a Democrat. He just gave a big party for Bill Clinton. I guess that means he’s probably idiotic…Jurassic Park is sublimely good, hideously reactionary art. E.T. and Close Encounters of the Third Kind are the flagship aesthetic statements of Reaganism. They’re fascinating for that reason, because Spielberg is somebody who has just an astonishing ear for the rumblings of reaction, and he just goes right for it and he knows exactly what to do with it.

Kushner, of course, later became Spielberg’s most devoted screenwriter. And the total transformation of the leading playwright of his generation is the greatest testament imaginable to this director’s uncanny power and importance.

In reality, Spielberg has always been more interesting than he had any right to be, and if his movies have been used to shake people up in the dark while numbing them in other ways, or to confirm the received notions of those who are nostalgic for an America that never existed, it’s hard to conceive of a director of his stature for whom this wouldn’t have been the case. To his credit, Spielberg clearly grasps the uniqueness of his position, and he has done what he could with it, in ways that can seem overly studied. For the last two decades, he has worked hard to challenge some of our assumptions, and at least one of his efforts, Munich, is a masterpiece. But if I’m honest, the film that I find myself thinking about the most is Indiana Jones and the Temple of Doom. It isn’t my favorite Indiana Jones movie—I’d rank it a distant third. For long stretches, it isn’t even all that good. It also trades in the kind of casual racial stereotyping that would be unthinkable today, and it isn’t any more excusable because it deliberately harks back to the conventions of an earlier era. (The fact that it’s even watchable now only indicates how much ground East and South Asians have yet to cover.) But its best scenes are so exciting, so wonderful, and so conducive to dreams that I’ve never gotten over it. Spielberg himself was never particularly pleased with the result, and if asked, he might express discomfort with some of the decisions he made. But there’s no greater tribute to his artistry, which executed that misguided project with such unthinking skill that he exhilarated us almost against his better judgment. It tells us how dangerous he might have been if he hadn’t been so deeply humane. And we should count ourselves lucky that he turned out to be as good a man as he did, because we’d never have known if he hadn’t.

Updike’s ladder

Note: I’m taking the day off, so I’m republishing a post that originally appeared, in a slightly different form, on September 13, 2017.

Last year, the author Anjali Enjeti published an article in The Atlantic titled “Why I’m Still Trying to Get a Book Deal After Ten Years.” If just reading those words makes your palms sweat and puts your heart through a few sympathy palpitations, congratulations—you’re a writer. No matter where you might be in your career, or what length of time you mentally insert into that headline, you can probably relate to what Enjeti writes:

Ten years ago, while sitting at my computer in my sparsely furnished office, I sent my first email to a literary agent. The message included a query letter—a brief synopsis describing the personal-essay collection I’d been working on for the past six years, as well as a short bio about myself. As my third child kicked from inside my pregnant belly, I fantasized about what would come next: a request from the agent to see my book proposal, followed by a dream phone call offering me representation. If all went well, I’d be on my way to becoming a published author by the time my oldest child started first grade.

“Things didn’t go as planned,” Enjeti says dryly, noting that after landing and leaving two agents, she’s been left with six unpublished manuscripts and little else to show for it. She goes on to share the stories of other writers in the same situation, including Michael Bourne of Poets & Writers, who accurately calls the submission process “a slow mauling of my psyche.” And Enjeti wonders: “So after sixteen years of writing books and ten years of failing to find a publisher, why do I keep trying? I ask myself this every day.”

It’s a good question. As it happens, I first encountered her article while reading the authoritative biography Updike by Adam Begley, which chronicles a literary career that amounts to the exact opposite of the ones described above. Begley’s account of John Updike’s first acceptance from The New Yorker—just months after his graduation from Harvard—is like lifestyle porn for writers:

He never forgot the moment when he retrieved the envelope from the mailbox at the end of the drive, the same mailbox that had yielded so many rejection slips, both his and his mother’s: “I felt, standing and reading the good news in the midsummer pink dusk of the stony road beside a field of waving weeds, born as a professional writer.” To extend the metaphor…the actual labor was brief and painless: he passed from unpublished college student to valued contributor in less than two months.

If you’re a writer of any kind, you’re probably biting your hand right now. And I haven’t even gotten to what happened to Updike shortly afterward:

A letter from Katharine White [of The New Yorker] dated September 15, 1954 and addressed to “John H. Updike, General Delivery, Oxford,” proposed that he sign a “first-reading agreement,” a scheme devised for the “most valued and most constant contributors.” Up to this point, he had only one story accepted, along with some light verse. White acknowledged that it was “rather unusual” for the magazine to make this kind of offer to a contributor “of such short standing,” but she and Maxwell and Shawn took into consideration the volume of his submissions…and their overall quality and suitability, and decided that this clever, hard-working young man showed exceptional promise.

Updike was twenty-two years old. Even now, more than half a century later and with his early promise more than fulfilled, it’s hard to read this account without hating him a little. Norman Mailer—whose debut novel, The Naked and the Dead, appeared when he was twenty-five—didn’t pull any punches in “Some Children of the Goddess,” an essay on his contemporaries that was published in Esquire in 1963: “[Updike’s] reputation has traveled in convoy up the Avenue of the Establishment, The New York Times Book Review, blowing sirens like a motorcycle caravan, the professional muse of The New Yorker sitting in the Cadillac, membership cards to the right Fellowships in his pocket.” Even Begley, his biographer, acknowledges the singular nature of his subject’s rise:

It’s worth pausing here to marvel at the unrelieved smoothness of his professional path…Among the other twentieth-century American writers who made a splash before their thirtieth birthday…none piled up accomplishments in as orderly a fashion as Updike, or with as little fuss…This frictionless success has sometimes been held against him. His vast oeuvre materialized with suspiciously little visible effort. Where there’s no struggle, can there be real art? The Romantic notion of the tortured poet has left us with a mild prejudice against the idea of art produced in a calm, rational, workmanlike manner (as he put it, “on a healthy basis of regularity and avoidance of strain”), but that’s precisely how Updike got his start.

Begley doesn’t mention that the phrase “regularity and avoidance of strain” is actually meant to evoke the act of defecation, but even this provides us with an odd picture of writerly contentment. As Dick Hallorann says in The Shining, the best movie about writing ever made: “You got to keep regular, if you want to be happy.”

If there’s a larger theme here, it’s that the sheer productivity and variety of Updike’s career—with its reliable production of uniform hardcover editions over the course of five decades—are inseparable from the “orderly” circumstances of his rise. Updike never lacked a prestigious venue for his talents, which allowed him to focus on being prolific. Writers whose publication history remains volatile and unpredictable, even after they’ve seen print, don’t always have the luxury of being so unruffled, and it can affect their work in ways that are almost subliminal. (A writer can’t survive ten years of chasing after a book deal without spending the entire time convinced that he or she is on the verge of a breakthrough, anticipating an ending that never comes, which may partially account for the prevalence in literary fiction of frustration and unresolved narratives. It also explains why it helps to be privileged enough to fail for years.) The short answer to Begley’s question is that struggle is good for a writer, but so is success, and you take what you can get, even as you’re transformed by it. I think on a monthly basis of what Nicholson Baker writes of Updike in his tribute U and I:

I compared my awkward public self-promotion too with a documentary about Updike that I saw in 1983, I believe, on public TV, in which, in one scene, as the camera follows his climb up a ladder at his mother’s house to put up or take down some storm windows, in the midst of this tricky physical act, he tosses down to us some startlingly lucid little felicity, something about “These small yearly duties which blah blah blah,” and I was stunned to recognize that in Updike we were dealing with a man so naturally verbal that he could write his fucking memoirs on a ladder!

We’re all on that ladder, including Enjeti, who I’m pleased to note finally scored her book deal—she has an essay collection in the works from the University of Georgia Press. Some are on their way up, some are headed down, and some are stuck for years on the same rung. But you never get anywhere if you don’t try to climb.

The unfinished lives

Yesterday, the New York Times published a long profile of Donald Knuth, the legendary author of The Art of Computer Programming. Knuth is eighty now, and the article by Siobhan Roberts offers an evocative look at an intellectual giant in twilight:

Dr. Knuth usually dresses like the youthful geek he was when he embarked on this odyssey: long-sleeved T-shirt under a short-sleeved T-shirt, with jeans, at least at this time of year…Dr. Knuth lives in Stanford, and allowed for a Sunday visitor. That he spared an entire day was exceptional—usually his availability is “modulo nap time,” a sacred daily ritual from 1 p.m. to 4 p.m. He started early, at Palo Alto’s First Lutheran Church, where he delivered a Sunday school lesson to a standing-room-only crowd.

This year marks the fiftieth anniversary of the publication of the first volume of Knuth’s most famous work, which is still incomplete. Knuth is busy writing the fourth installment, one fascicle at a time, although its most recent piece has been delayed “because he keeps finding more and more irresistible problems that he wants to present.” As Roberts writes: “Dr. Knuth’s exacting standards, literary and otherwise, may explain why his life’s work is nowhere near done. He has a wager with Sergey Brin, the co-founder of Google and a former student…over whether Mr. Brin will finish his Ph.D. before Dr. Knuth concludes his opus…He figures it will take another twenty-five years to finish The Art of Computer Programming, although that time frame has been a constant since about 1980.”

Knuth is a prominent example, although far from the most famous, of a literary and actuarial phenomenon that has grown increasingly familiar—an older author with a projected work of multiple volumes, published one book at a time, that seems increasingly unlikely to ever see completion. On the fiction side, the most noteworthy case has to be that of George R.R. Martin, who has been fielding anxious inquiries from fans for most of the last decade. (In an article that appeared seven long years ago in The New Yorker, Laura Miller quotes Martin, who was only sixty-three at the time: “I’m still getting e-mail from assholes who call me lazy for not finishing the book sooner. They say, ‘You better not pull a Jordan.’”) Robert A. Caro is still laboring over what he hopes will be the final volume of his biography of Lyndon Johnson, and mortality has become an issue not just for him, but for his longtime editor, as we read in Charles McGrath’s classic profile in the Times:

Robert Gottlieb, who signed up Caro to do The Years of Lyndon Johnson when he was editor in chief of Knopf, has continued to edit all of Caro’s books, even after officially leaving the company. Not long ago he said he told Caro: “Let’s look at this situation actuarially. I’m now eighty, and you are seventy-five. The actuarial odds are that if you take however many more years you’re going to take, I’m not going to be here.”

That was six years ago, and both men are still working hard. But sometimes a writer has no choice but to face the inevitable. When asked about the concluding fifth volume of his life of Picasso, with the fourth one still on the way, the biographer John Richardson said candidly: “Listen, I’m ninety-one—I don’t think I have time for that.”

I don’t have the numbers to back this up, but such cases—or at least the public attention that they inspire—seem to be growing more common these days, on account of some combination of lengthening lifespans, increased media coverage of writers at work, and a greater willingness from publishers to agree to multiple volumes in the first place. The subjects of such extended commitments tend to be monumental in themselves, in order to justify the total investment of the writer’s own lifetime, and expanding ambitions are often to blame for blown deadlines. Martin, Caro, and Knuth all increased the prospective number of volumes after their projects were already underway, or as Roberts puts it: “When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes.” And this “recasting” seems particularly common in the world of biographies, as the author discovers more material that he can’t bear to cut. The first few volumes may have been produced with relative ease, but as the years pass and anticipation rises, the length of time it takes to write the next installment grows, until it becomes theoretically infinite. Such a radical change of plans, which can involve extending the writing process for decades, or even beyond the author’s natural lifespan, requires an indulgent publisher, university, or other benefactor. (John Richardson’s book has been underwritten by nothing less than the John Richardson Fund for Picasso Research, which reminds me of what Homer Simpson said after being informed that he suffered from Homer Simpson syndrome: “Oh, why me?”) And it may not be an accident that many of the examples that first come to mind are white men, who have the cultural position and privilege to take their time.

It isn’t hard to understand a writer’s reluctance to let go of a subject, the pressures on a book being written in plain sight, or the tempting prospect of working on the same project forever. And the image of such authors confronting their mortality in the face of an unfinished book is often deeply moving. One of the most touching examples is that of Joseph Needham, whose Science and Civilization in China may have undergone the most dramatic expansion of them all, from an intended single volume to twenty-seven and counting. As Kenneth Girdwood Robinson writes in a concluding posthumous volume:

The Duke of Edinburgh, Chancellor of the University of Cambridge, visited The Needham Research Institute, and interested himself in the progress of the project. “And how long will it take to finish it?” he enquired. On being given a rather conservative answer, “At least ten years,” he exclaimed, “Good God, man, Joseph will be dead before you’ve finished,” a very true appreciation of the situation…In his closing years, though his mind remained lucid and his memory astonishing, Needham had great difficulty even in moving from one chair to another, and even more difficulty in speaking and in making himself understood, due to the effect of the medicines he took to control Parkinsonism. But a secretary, working closely with him day by day, could often understand what he had said, and could read what he had written, when others were baffled.

Needham’s decline eventually became impossible to ignore by those who knew him best, as his biographer Simon Winchester writes in The Man Who Loved China: “It was suggested that, for the first time in memory, he take the day off. It was a Friday, after all: he could make it a long weekend. He could charge his batteries for the week ahead. ‘All right,’ he said. ‘I’ll stay at home.’” He died later that day, with his book still unfinished. But it had been a good life.

Quote of the Day

Writers are usually embarrassed when other writers start to “sing”—their profession’s prestige is at stake and the blabbermouths are likely to have the whole wretched truth beat out of them, that they are an ignorant, hysterically egotistical, shamelessly toadying, envious lot who would do almost anything in the world—even write a novel—to avoid an honest day’s work or escape a human responsibility. Any writer tempted to open his trap in public lets the news out.

Dawn Powell, in The New Yorker

Written by nevalalee

December 3, 2018 at 7:30 am

The private eyes of culture

Yesterday, in my post on the late magician Ricky Jay, I neglected to mention one of the most fascinating aspects of his long career. Toward the end of his classic profile in The New Yorker, Mark Singer drops an offhand reference to an intriguing project:

Most afternoons, Jay spends a couple of hours in his office, on Sunset Boulevard, in a building owned by Andrew Solt, a television producer…He decided now to drop by the office, where he had to attend to some business involving a new venture that he has begun with Michael Weber—a consulting company called Deceptive Practices, Ltd., and offering “Arcane Knowledge on a Need to Know Basis.” They are currently working on the new Mike Nichols film, Wolf, starring Jack Nicholson.

When the article was written, Deceptive Practices was just getting off the ground, but it went on to compile an enviable list of projects, including The Illusionist, The Prestige, and most famously Forrest Gump, for which Jay and Weber designed the wheelchair that hid Gary Sinise’s legs. It isn’t clear how lucrative the business ever was, but it made for great publicity, and best of all, it allowed Jay to monetize the service that he had offered for free to the likes of David Mamet—a source of “arcane knowledge,” much of it presumably gleaned from his vast reading in the field, that wasn’t available in any other way.

As I reflected on this, I was reminded of another provider of arcane knowledge who figures prominently in one of my favorite novels. In Umberto Eco’s Foucault’s Pendulum, the narrator, Casaubon, comes home to Milan after a long sojourn abroad feeling like a man without a country. He recalls:

I decided to invent a job for myself. I knew a lot of things, unconnected things, but I wanted to be able to connect them after a few hours at a library. I once thought it was necessary to have a theory, and that my problem was that I didn’t. But nowadays all you needed was information; everybody was greedy for information, especially if it was out of date. I dropped in at the university, to see if I could fit in somewhere. The lecture halls were quiet; the students glided along the corridors like ghosts, lending one another badly made bibliographies. I knew how to make a good bibliography.

In practice, Casaubon finds that he knows a lot of things—like the identities of such obscure figures as Lord Chandos and Anselm of Canterbury—that can’t be found easily in reference books, prompting a student to marvel at him: “In your day you knew everything.” This leads Casaubon to a sudden inspiration: “I had a trade after all. I would set up a cultural investigation agency, be a kind of private eye of learning. Instead of sticking my nose into all-night dives and cathouses, I would skulk around bookshops, libraries, corridors of university departments…I was lucky enough to find two rooms and a little kitchen in an old building in the suburbs…In a pair of bookcases I arranged the atlases, encyclopedias, catalogs I acquired bit by bit.”

This feels a little like the fond daydream of a scholar like Umberto Eco himself, who spent decades acquiring arcane knowledge—not all of it required by his academic work—before becoming a famous novelist. And I suspect that many graduate students, professors, and miscellaneous bibliophiles cherish the hope that the scraps of disconnected information that they’ve accumulated over time will turn out to be useful one day, in the face of all evidence to the contrary. (Casaubon is evidently named after the character from Middlemarch who labors for years over a book titled The Key to All Mythologies, which is already completely out of date.) To illustrate what he does for a living, Casaubon offers the example of a translator who calls him one day out of the blue, desperate to know the meaning of the word “Mutakallimūn.” Casaubon asks him for two days, and then he gets to work:

I go to the library, flip through some card catalogs, give the man in the reference office a cigarette, and pick up a clue. That evening I invite an instructor in Islamic studies out for a drink. I buy him a couple of beers and he drops his guard, gives me the lowdown for nothing. I call the client back. “All right, the Mutakallimūn were radical Moslem theologians at the time of Avicenna. They said the world was a sort of dust cloud of accidents that formed particular shapes only by an instantaneous and temporary act of the divine will. If God was distracted for even a moment, the universe would fall to pieces, into a meaningless anarchy of atoms. That enough for you? The job took me three days. Pay what you think is fair.”

Eco could have picked nearly anything to serve as a case study, of course, but the story that he chooses serves as a metaphor for one of the central themes of the book. If the world of information is a “meaningless anarchy of atoms,” it takes the private eyes of culture to give it shape and meaning.

All the while, however, Eco is busy undermining the pretensions of his protagonists, who pay a terrible price for treating information so lightly. And it might not seem that such brokers of arcane knowledge are even necessary these days, now that an online search generates pages of results for the Mutakallimūn. Yet there’s still a place for this kind of scholarship, which might end up being the last form of brainwork not to be made obsolete by technology. As Ricky Jay knew, by specializing deeply in one particular field, you might be able to make yourself indispensable, especially in areas where the knowledge hasn’t been written down or digitized. (In the course of researching Astounding, I was repeatedly struck by how much of the story wasn’t available in any readily accessible form. It was buried in letters, manuscripts, and other primary sources, and while this happens to be the one area where I’ve actually done some of the legwork, I have a feeling that it’s equally true of every other topic imaginable.) As both Jay and Casaubon realized, it’s a role that rests on arcane knowledge of the kind that can only be acquired by reading the books that nobody else has bothered to read in a long time, even if it doesn’t pay off right away. Casaubon tells us: “In the beginning, I had to turn a deaf ear to my conscience and write theses for desperate students. It wasn’t hard; I just went and copied some from the previous decade. But then my friends in publishing began sending me manuscripts and foreign books to read—naturally, the least appealing and for little money.” But he perseveres, and the rule that he sets for himself might still be enough, if you’re lucky, to fuel an entire career:

Still, I was accumulating experience and information, and I never threw anything away…I had a strict rule, which I think secret services follow, too: No piece of information is superior to any other. Power lies in having them all on file and then finding the connections.

Written by nevalalee

November 27, 2018 at 8:41 am

Ghosts and diversions

Over the weekend, after I heard that the magician Ricky Jay had died, I went back to revisit the great profile, “Secrets of the Magus,” that Mark Singer wrote over a quarter of a century ago for The New Yorker. Along with Daniel Zalewski’s classic piece on Werner Herzog, it’s one of the articles in that magazine that I’ve thought about and reread the most, but what caught my attention this time around was a tribute from David Mamet:

I’ll call Ricky on the phone. I’ll ask him—say, for something I’m writing—“A guy’s wandering through upstate New York in 1802 and he comes to a tavern and there’s some sort of mountebank. What would the mountebank be doing?” And Ricky goes to his library and then sends me an entire description of what the mountebank would be doing. Or I’ll tell him I’m having a Fourth of July party and I want to do some sort of disappearance in the middle of the woods. He says, “That’s the most bizarre request I’ve ever heard. You want to do a disappearing effect in the woods? There’s nothing like that in the literature. I mean, there’s this one 1760 pamphlet—Jokes, Tricks, Ghosts, and Diversions by Woodland, Stream and Campfire. But, other than that, I can’t think of a thing.” He’s unbelievably generous. Ricky’s one of the world’s great people. He’s my hero. I’ve never seen anybody better at what he does.

Coming from Mamet, this is high praise indeed, and it gets at most of the reasons why Ricky Jay was one of my heroes, too. Elsewhere in the article, Mamet says admiringly: “I regard Ricky as an example of the ‘superior man,’ according to the I Ching definition. He’s the paradigm of what a philosopher should be: someone who’s devoted his life to both the study and the practice of his chosen field.”

And what struck me on reading these lines again was how deeply Jay’s life and work were tied up in books. A bookseller quoted in Singer’s article estimates that Jay spent more of his disposable income on rare books than anyone else he knew, and his professional legacy might turn out to be even greater as a writer, archivist, and historian than as a performer of sleight of hand. (“Though Jay abhors the notion of buying books as investments, his own collection, while it is not for sale and is therefore technically priceless, more or less represents his net worth,” Singer writes. And I imagine that a lot of his fellow collectors are very curious about what will happen to his library now.) His most famous book as an author, Learned Pigs & Fireproof Women, includes a chapter on Arthur Lloyd, “The Human Card Index,” a vaudevillian renowned for his ability to produce anything printed on paper—a marriage license, ringside seats to a boxing match, menus, photos of royalty, membership cards for every club imaginable—from his pockets on demand. This feels now like a metaphor for the mystique of Jay himself, who fascinated me for many of the same reasons. Like most great magicians, he exuded an aura of arcane wisdom, but in his case, this impression appears to have been nothing less than the truth. Singer quotes the magician Michael Weber:

Magic is not about someone else sharing the newest secret. Magic is about working hard to discover a secret and making something out of it. You start with some small principle and you build a theatrical presentation out of it. You do something that’s technically artistic that creates a small drama. There are two ways you can expand your knowledge—through books and by gaining the confidence of fellow magicians who will explain these things. Ricky to a large degree gets his information from books—old books—and then when he performs for magicians they want to know, “Where did that come from?” And he’s appalled that they haven’t read this stuff.

As a result, Jay had the paradoxical image of a man who was immersed in the lore of magic while also keeping much of that world at arm’s length. “Clearly, Jay has been more interested in the craft of magic than in the practical exigencies of promoting himself as a performer,” Singer writes, and Jay was perfectly fine with that reputation. In Learned Pigs, Jay writes admiringly of the conjurer Max Malini:

Yet far more than Malini’s contemporaries, the famous conjurers Herrmann, Kellar, Thurston, and Houdini, Malini was the embodiment of what a magician should be—not a performer who requires a fully equipped stage, elaborate apparatus, elephants, or handcuffs to accomplish his mysteries, but one who can stand a few inches from you and with a borrowed coin, a lemon, a knife, a tumbler, or a pack of cards convince you he performs miracles.

This was obviously how Jay liked to see himself, as he says with equal affection of the magician Dai Vernon: “Making money was only a means of allowing him to sit in a hotel room and think about his art, about cups and balls and coins and cards.” Yet the reality must have been more complicated. You don’t become as famous or beloved as Ricky Jay without an inhuman degree of ambition, however carefully hidden, and he cultivated attention in ways that allowed him to maintain his air of remove. Apart from Vernon, his other essential mentor was Charlie Miller, who seems to have played the same role in the lives of other magicians that Joe Ancis, “the funniest man in New York City,” did for Lenny Bruce. Both were geniuses who hated to perform, so they practiced their art for a small handful of confidants and fellow obsessives. And the fact that Jay, by contrast, lived the kind of life that would lead him to be widely mourned by the public indicates that there was rather more to him than the reticent persona that he projected.

Jay did perform for paying audiences, of course, and Singer’s article closes with his preparations for a show, Ricky Jay and His 52 Assistants, that promises to relieve him from the “tenuous circumstances” that result from his devotion to art. (A decade later, my brother and I went to see his second Broadway production, On the Stem, which is still one of my favorite memories from a lifetime of theatergoing.) But he evidently had mixed feelings about the whole enterprise, which left him even more detached from the performers with whom he was frequently surrounded. As Weber notes: “Ricky won’t perform for magicians at magic shows, because they’re interested in things. They don’t get it. They won’t watch him and be inspired to make magic of their own. They’ll be inspired to do that trick that belongs to Ricky…There’s this large body of magic lumpen who really don’t understand Ricky’s legacy—his contribution to the art, his place in the art, his technical proficiency and creativity. They think he’s an élitist and a snob.” Or as the writer and mentalist T.A. Walters tells Singer:

Some magicians, once they learn how to do a trick without dropping the prop on their foot, go ahead and perform in public. Ricky will work on a routine a couple of years before even showing anyone. One of the things that I love about Ricky is his continued amazement at how little magicians seem to care about the art. Intellectually, Ricky seems to understand this, but emotionally he can’t accept it. He gets as upset about this problem today as he did twenty years ago.

If the remarkable life that he lived is any indication, Jay never did get over it. According to Singer, Jay once asked Dai Vernon how he dealt with the intellectual indifference of other magicians to their craft. Vernon responded: “I forced myself not to care.” And after his friend’s death, Jay said wryly: “Maybe that’s how he lived to be ninety-eight years old.”

The authoritarian personality

Note: I’m taking a few days off for Thanksgiving. This post originally appeared, in a slightly different form, on August 29, 2017.

In 1950, a group of four scholars working at UC Berkeley published a massive book titled The Authoritarian Personality. Three of its authors, including the philosopher and polymath Theodor W. Adorno, were Jewish, and the study was expressly designed to shed light on the rise of fascism and Nazism, which it conceived in large part as the manifestation of an abnormal personality syndrome magnified by mass communication. The work was immediately controversial, and some of the concerns that have been raised about its methodology—which emphasized individual pathology over social factors—appear to be legitimate. (One of its critics, the psychologist Thomas Pettigrew, conducted a study of American towns in the North and South that cast doubt on whether such traits as racism could truly be seen as mental illnesses: “You almost had to be mentally ill to be tolerant in the South. The authoritarian personality was a good explanation at the individual level, but not at the societal level.” The italics are mine.) Yet the book remains hugely compelling, and we seem to be living in a moment in which its ideas are moving back toward the center of the conversation, with attention from both ends of the political spectrum. Richard Spencer, of all people, wrote his master’s thesis on Adorno and Richard Wagner, while a bizarre conspiracy theory has emerged on the right that Adorno was the secret composer and lyricist for the Beatles. More reasonably, the New Yorker music critic Alex Ross wrote shortly after the last presidential election:

The combination of economic inequality and pop-cultural frivolity is precisely the scenario Adorno and others had in mind: mass distraction masking elite domination. Two years ago, in an essay on the persistence of the Frankfurt School, I wrote, “If Adorno were to look upon the cultural landscape of the twenty-first century, he might take grim satisfaction in seeing his fondest fears realized.” I spoke too soon. His moment of vindication is arriving now.

And when you leaf today through The Authoritarian Personality, which is available in its entirety online, you’re constantly rocked by flashes of recognition. In the chapter “Politics and Economics in the Interview Material,” before delving into the political beliefs expressed by the study’s participants, Adorno writes:

The evaluation of the political statements contained in our interview material has to be considered in relation to the widespread ignorance and confusion of our subjects in political matters, a phenomenon which might well surpass what even a skeptical observer should have anticipated. If people do not know what they are talking about, the concept of “opinion,” which is basic to any approach to ideology, loses its meaning.

Ignorance and confusion are bad enough, but they become particularly dangerous when combined with the social pressure to have an opinion about everything, which encourages people to fake their way through it. As Adorno observes: “Those who do not know but feel somehow obliged to have political opinions, because of some vague idea about the requirements of democracy, help themselves with scurrilous ways of thinking and sometimes with forthright bluff.” And he describes this bluffing and bluster in terms that should strike us as uncomfortably familiar:

The individual has to cope with problems which he actually does not understand, and he has to develop certain techniques of orientation, however crude and fallacious they may be, which help him to find his way through the dark…On the one hand, they provide the individual with a kind of knowledge, or with substitutes for knowledge, which makes it possible for him to take a stand where it is expected of him, whilst he is actually not equipped to do so. On the other hand, by themselves they alleviate psychologically the feeling of anxiety and uncertainty and provide the individual with the illusion of some kind of intellectual security, of something he can stick to even if he feels, underneath, the inadequacy of his opinions.

So what do we do when we’re expected to have opinions on subjects that we can’t be bothered to actually understand? Adorno argues that we tend to fall back on the complementary strategies of stereotyping and personalization. Of the former, he writes:

Rigid dichotomies, such as that between “good and bad,” “we and the others,” “I and the world” date back to our earliest developmental phases…They point back to the “chaotic” nature of reality, and its clash with the omnipotence fantasies of earliest infancy. Our stereotypes are both tools and scars: the “bad man” is the stereotype par excellence…Modern mass communications, molded after industrial production, spread a whole system of stereotypes which, while still being fundamentally “ununderstandable” to the individual, allow him at any moment to appear as being up to date and “knowing all about it.” Thus, stereotyped thinking in political matters is almost inescapable.

Adorno was writing nearly seventy years ago, and the pressure to “know all about” politics—as well as the volume of stereotyped information being fed to consumers—has increased exponentially. But stereotypes, while initially satisfying, exist on the level of abstraction, which leads to the need for personalization as well:

[Personalization is] the tendency to describe objective social and economic processes, political programs, internal and external tensions in terms of some person identified with the case in question rather than taking the trouble to perform the impersonal intellectual operations required by the abstractness of the social processes themselves…To know something about a person helps one to seem “informed” without actually going into the matter: it is easier to talk about names than about issues, while at the same time the names are recognized identification marks for all current topics.

Adorno concludes that “spurious personalization is an ideal behavior pattern for the semi-erudite, a device somewhere in the middle between complete ignorance and that kind of ‘knowledge’ which is being promoted by mass communication and industrialized culture.” This is a tendency, needless to say, that we find on both the left and the right, and it becomes particularly prevalent in periods of maximum confusion:

The opaqueness of the present political and economic situation for the average person provides an ideal opportunity for retrogression to the infantile level of stereotypy and personalization…Stereotypy helps to organize what appears to the ignorant as chaotic: the less he is able to enter into a really cognitive process, the more stubbornly he clings to certain patterns, belief in which saves him the trouble of really going into the matter.

This seems to describe our predicament uncannily well, and I could keep listing the parallels forever. (Adorno has an entire subchapter titled “No Pity for the Poor.”) Whatever else you might think of his methods, there’s no question that he captures our current situation with frightening clarity: “As less and less actually depends on individual spontaneity in our political and social organization, the more people are likely to cling to the idea that the man is everything and to seek a substitute for their own social impotence in the supposed omnipotence of great personalities.” Most prophetically of all, Adorno draws a distinction between genuine conservatives and “pseudoconservatives,” describing the former as “supporting not only capitalism in its liberal, individualistic form but also those tenets of traditional Americanism which are definitely antirepressive and sincerely democratic, as indicated by an unqualified rejection of antiminority prejudices.” And he adds chillingly: “The pseudoconservative is a man who, in the name of upholding traditional American values and institutions and defending them against more or less fictitious dangers, consciously or unconsciously aims at their abolition.”

Written by nevalalee

November 23, 2018 at 9:00 am

Amplifying the dream

Note: I’m taking a few days off for Thanksgiving. This post originally appeared, in a slightly different form, on August 23, 2017.

In the book Nobody Turn Me Around, Charles Euchner shares a story about Bayard Rustin, a neglected but pivotal figure in the civil rights movement who played a crucial role in the March on Washington in 1963:

Bayard Rustin had insisted on renting the best sound system money could buy. To ensure order at the march, Rustin insisted, people needed to hear the program clearly. He told engineers what he wanted. “Very simple,” he said, pointing at a map. “The Lincoln Memorial is here, the Washington Monument is there. I want one square mile where anyone can hear.” Most big events rented systems for $1,000 or $2,000, but Rustin wanted to spend ten times that. Other members of the march committee were skeptical about the need for a deluxe system. “We cannot maintain order where people cannot hear,” Rustin said. If the Mall was jammed with people baking in the sun, waiting in long lines for portable toilets, anything could happen. Rustin’s job was to control the crowd. “In my view it was a classic resolution of the problem of how can you keep a crowd from becoming something else,” he said. “Transform it into an audience.”

Ultimately, Rustin was able to convince the United Auto Workers and International Ladies’ Garment Workers’ Unions to raise twenty thousand dollars for the sound system. (When he was informed that it ought to be possible to do it for less, he replied: “Not for what I want.”) The company American Amplifier and Television landed the contract, and after the system was sabotaged by persons unknown the night before the march, Walter Fauntroy, who was in charge of operations on the ground, called Attorney General Robert Kennedy with a warning: “We have a serious problem. We have a couple hundred thousand people coming. Do you want a fight here tomorrow after all we’ve done?”

The system was fixed just in time, and its importance on that day is hard to overstate. As Zeynep Tufekci writes in her recent book Twitter and Tear Gas: “Rustin knew that without a focused way to communicate with the massive crowd and to keep things orderly, much could go wrong…The sound system worked without a hitch during the day of the march, playing just the role Rustin had imagined: all the participants could hear exactly what was going on, hear instructions needed to keep things orderly, and feel connected to the whole march.” But its impact on our collective memory of the event may have been even more profound. In an article last year in The New Yorker, which is where I first encountered the story, Nathan Heller notes in a discussion of Tufekci’s work:

Before the march, Martin Luther King, Jr., had delivered variations on his “I Have a Dream” speech twice in public. He had given a longer version to a group of two thousand people in North Carolina. And he had presented a second variation, earlier in the summer, before a vast crowd of a hundred thousand at a march in Detroit. The reason we remember only the Washington, D.C., version, Tufekci argues, has to do with the strategic vision and attentive detail work of people like Rustin. Framed by the Lincoln Memorial, amplified by a fancy sound system, delivered before a thousand-person press bay with good camera sight lines, King’s performance came across as something more than what it had been in Detroit—it was the announcement of a shift in national mood, the fulcrum of a movement’s story line and power. It became, in other words, the rarest of protest performances: the kind through which American history can change.

Heller concludes that successful protest movements hinge on the existence of organized, flexible, practical structures with access to elites. After noting that the sound system was repaired, on Kennedy’s orders, by the Army Corps of Engineers, he observes: “You can’t get much cozier with the Man than that.”

There’s another side to the story, however, which neither Tufekci nor Heller mentions. In his memoir Behind the Dream, the activist Clarence B. Jones recalls:

The Justice Department and the police had worked hand in hand with the March Committee to design a public address system powerful enough to get the speakers’ voices across the Mall; what march coordinators wouldn’t learn until after the event had ended was that the government had built in a bypass to the system so that they could instantly take over control if they deemed it necessary…Ted [Brown] and Bayard [Rustin] told us that right after the march ended those officers approached them, eager to relieve their consciences and reveal the truth about the sound system. There was a kill switch and an administration official’s thumb had been on it the entire time.

The journalist Gary Younge—whose primary source seems to be Jones—expands on this claim in his book The Speech: “Fearing incitement from the podium, the Justice Department secretly inserted a cutoff switch into the sound system so they could turn off the speakers if an insurgent group hijacked the microphone. In such an eventuality, the plan was to play a recording of Mahalia Jackson singing ‘He’s Got the Whole World in His Hands’ in order to calm down the crowd.” In Pillar of Fire, Taylor Branch identifies the official in question as Jerry Bruno, President Kennedy’s “advance man,” who “positioned himself to cut the power to the public address system if rally speeches proved incendiary.” Regardless of the details, the existence of this cutoff switch speaks to the extent to which Rustin’s sound system was central to the question of who controlled the march and its message. And the people who sabotaged it understood this intuitively. (I should also mention the curious rumor that was shared by Dave Chappelle in a comedy special on Netflix: “I heard when Martin Luther King stood on the steps of the Lincoln Memorial and said he had a dream, he was speaking into a PA system that Bill Cosby paid for.” It’s demonstrably untrue, but it also speaks to the place of the sound system in the stories that we tell about the march.)

But what strikes me the most is the sheer practicality of the ends that Rustin, Fauntroy, and the others on the ground were trying to achieve, as conveyed in their own words: “We cannot maintain order where people cannot hear.” “How can you keep a crowd from becoming something else?” “Do you want a fight here tomorrow after all we’ve done?” They weren’t worried about history, but about making it safely to the end of the day. Rustin had been thinking about this march for two decades, and he spent years actively planning for it, conscious that it presented massive organizational challenges that could only be addressed by careful preparation in advance. He had specifically envisioned that it would conclude at the Lincoln Memorial, with a crowd filling the National Mall, a huge space that imposed enormous logistical problems of its own. The primary purpose of the sound system was to allow a quarter of a million people to assemble and disperse in a peaceful fashion, and its properties were chosen with that end in mind. (As Euchner notes: “To get one square mile of clear sound, you need to spend upwards of twenty thousand dollars.”) A system of unusual power, expense, and complexity was the minimum required to ensure the orderly conclusion of an event on that scale. When the audacity to envision the National Mall as a backdrop was combined with the attention to detail to make it work, the result was an electrically charged platform that would amplify any message, figuratively and literally, which made it both powerful and potentially dangerous. Everyone understood this. The saboteurs did. So did the Justice Department. The march’s organizers were keenly aware of it, which was why potentially controversial speakers—including James Baldwin—were excluded from the program. In the end, it became a stage for King, and at least one lesson is clear. When you aim high, and then devote everything you can to the practical side, the result might be more than you could have dreamed.

Beyond the Whole Earth

Earlier this week, The New Yorker published a remarkably insightful piece by the memoirist and critic Anna Wiener on Stewart Brand, the founder of the Whole Earth Catalog. Brand, as I’ve noted here many times before, is one of my personal heroes, almost by default—I just wouldn’t be the person I am today without the books and ideas that he inspired me to discover. (The biography of Buckminster Fuller that I plan to spend the next three years writing is the result of a chain of events that started when I stumbled across a copy of the Catalog as a teenager in my local library.) And I’m far from alone. Wiener describes Brand as “a sort of human Venn diagram, celebrated for bridging the hippie counterculture and the nascent personal-computer industry,” and she observes that his work remains a touchstone to many young technologists, who admire “its irreverence toward institutions, its emphasis on autodidacticism, and its sunny view of computers as tools for personal liberation.” Even today, Wiener notes, startup founders reach out to Brand, “perhaps in search of a sense of continuity or simply out of curiosity about the industry’s origins,” which overlooks the real possibility that he might still have more meaningful insights than anybody else. Yet he also receives his share of criticism:

“The Whole Earth Catalog is well and truly obsolete and extinct,” [Brand] said. “There’s this sort of abiding interest in it, or what it was involved in, back in the day…There’s pieces being written on the East Coast about how I’m to blame for everything,” from sexism in the back-to-the-land communes to the monopolies of Google, Amazon, and Apple. “The people who are using my name as a source of good or ill things going on in cyberspace, most of them don’t know me at all.”

Wiener continues with a list of elements in the Catalog that allegedly haven’t aged well: “The pioneer rhetoric, the celebration of individualism, the disdain for government and social institutions, the elision of power structures, the hubris of youth.” She’s got a point. But when I look at that litany of qualities now, they seem less like an ideology than a survival strategy that emerged in an era with frightening similarities to our own. Brand’s vision of the world was shaped by the end of the Johnson administration and by the dawn of Nixon and Kissinger, and many Americans were perfectly right to be skeptical of institutions. His natural optimism obscured the extent to which his ideas were a reaction to the betrayals of Watergate and Vietnam, and when I look around at the world today, his insistence on the importance of individuals and small communities seems more prescient than ever. The ongoing demolition of the legacy of the progressive moment, which seems bound to continue on the judicial level no matter what happens elsewhere, only reveals how fragile it was all along. America’s withdrawal from its positions of leadership on climate change, human rights, and other issues has been so sudden and complete that I don’t think I’ll be able to take the notion of governmental reform seriously ever again. Progress imposed from the top down can always be canceled, rolled back, or reversed as soon as power changes hands. (Speaking of Roe v. Wade, Ruth Bader Ginsburg once observed: “Doctrinal limbs too swiftly shaped, experience teaches, may prove unstable.” She seems to have been right about Roe, even if it took half a century for its weaknesses to become clear, and much the same may hold true of everything that progressives have done through federal legislation.) And if the answer, as incomplete and unsatisfying as it might be, lies in greater engagement on the state and local level, the Catalog remains as useful a blueprint as any that we have.

Yet I think that Wiener’s critique is largely on the mark. The trouble with Brand’s tools, as well as their power, is that they work equally well for everyone, regardless of the underlying motive, and when detached from their original context, they can easily be twisted into a kind of libertarianism that seems callously removed from the lives of the most vulnerable. (As Brand says to Wiener: “Whole Earth Catalog was very libertarian, but that’s because it was about people in their twenties, and everybody then was reading Robert Heinlein and asserting themselves and all that stuff.”) Some of Wiener’s most perceptive comments are directed against the Clock of the Long Now, a project that has fascinated and moved me ever since it was first announced. Wiener is less impressed: “When I first heard about the ten-thousand-year clock, as it is known, it struck me as embodying the contemporary crisis of masculinity.” She points out that the clock’s backers include such problematic figures as Peter Thiel, while the funding comes largely from Jeff Bezos, whose impact on the world has yet to receive a full accounting. And after concluding her interview with Brand, Wiener writes:

As I sat on the couch in my apartment, overheating in the late-afternoon sun, I felt a growing unease that this vision for the future, however soothing, was largely fantasy. For weeks, all I had been able to feel for the future was grief. I pictured woolly mammoths roaming the charred landscape of Northern California and future archeologists discovering the remains of the ten-thousand-year clock in a swamp of nuclear waste. While antagonism between millennials and boomers is a Freudian trope, Brand’s generation will leave behind a frightening, if unintentional, inheritance. My generation, and those after us, are staring down a ravaged environment, eviscerated institutions, and the increasing erosion of democracy. In this context, the long-term view is as seductive as the apolitical, inward turn of the communards from the nineteen-sixties. What a luxury it is to be released from politics—to picture it all panning out.

Her description of this attitude as a “luxury” seems about right, and there’s no question that the Whole Earth Catalog appealed to men and women who had the privilege of reinventing themselves in their twenties, which is a form of freedom that can evolve imperceptibly into complacency and selfishness. I’ve begun to uneasily suspect that the relationship might not just be temporal, but causal. Lamenting that the Catalog failed to save us from our current predicament, which is hard to deny, can feel a little like what David Crosby once said to Rolling Stone:

Somehow Sgt. Pepper’s did not stop the Vietnam War. Somehow it didn’t work. Somebody isn’t listening. I ain’t saying stop trying; I know we’re doing the right thing to live, full on. Get it on and do it good. But the inertia we’re up against, I think everybody’s kind of underestimated it. I would’ve thought Sgt. Pepper’s could’ve stopped the war just by putting too many good vibes in the air for anybody to have a war around.

When I wrote about this quote last year, I noted that a decisive percentage of voters who were old enough to buy Sgt. Pepper on its first release ended up voting for Donald Trump, just as some fans of the Whole Earth Catalog have built companies that have come to dominate our lives in unsettling ways. And I no longer think of this as an aberration, or even as a betrayal of the values expressed by the originals, but as an exposure of the flawed idea of freedom that they represented. (Even the metaphor of the catalog itself, which implies that we can pick and choose the knowledge that we need, seems troubling now.) Writing once of Fuller’s geodesic domes, which were a fixture in the Catalog, Brand ruefully confessed that they were elegant in theory, but in practice, they “were a massive, total failure…Domes leaked, always.” Brand’s vision, which grew out of Fuller’s, remains the most compelling way of life that I know. But it leaked, always.

Fire and Fury

leave a comment »

I’ve been thinking a lot recently about Brian De Palma’s horror movie The Fury, which celebrated its fortieth anniversary earlier this year. More specifically, I’ve been thinking about Pauline Kael’s review, which is one of the pieces included in her enormous collection For Keeps. I’ve read that book endlessly for two decades now, and as a result, The Fury is one of those films from the late seventies—like Philip Kaufman’s Invasion of the Body Snatchers—that endure in my memory mostly as a few paragraphs of Kael’s prose. In particular, I often find myself remembering these lines:

De Palma is the reverse side of the coin from Spielberg. Close Encounters gives us the comedy of hope. The Fury is the comedy of cruelly dashed hope. With Spielberg, what happens is so much better than you dared hope that you have to laugh; with De Palma, it’s so much worse than you feared that you have to laugh.

That sums up how I feel about a lot of things these days, when everything is consistently worse than I could have imagined, although laughter usually feels very far away. (Another line from Kael inadvertently points to the danger of identifying ourselves with our political heroes: “De Palma builds up our identification with the very characters who will be destroyed, or become destroyers, and some people identified so strongly with Carrie that they couldn’t laugh—they felt hurt and betrayed.”) And her description of one pivotal scene, which appears in her review of Dressed to Kill, gets closer than just about anything else to my memories of the last presidential election: “There’s nothing here to match the floating, poetic horror of the slowed-down sequence in which Amy Irving and Carrie Snodgress are running to freedom: it’s as if each of them and each of the other people on the street were in a different time frame, and Carrie Snodgress’s face is full of happiness just as she’s flung over the hood of a car.”

The Fury seems to have been largely forgotten by mainstream audiences, but references to it pop up in works ranging from Looper to Stranger Things, and I suspect that it might be due for a reappraisal. It’s about two teenagers, a boy and a girl, who have never met, but who share a psychic connection. As Kael notes, they’re “superior beings” who might have been prophets or healers in an earlier age, but now they’ve been targeted by our “corrupt government…which seeks to use them for espionage, as secret weapons.” Reading this now, I’m slightly reminded of our current administration’s unapologetic willingness to use vulnerable families and children as political pawns, but that isn’t really the point. What interests me more is how De Palma’s love of violent imagery undercuts the whole moral arc of the movie. I might call this a problem, except that it isn’t—it’s a recurrent feature of his work that resonated uneasily with viewers who were struggling to integrate the specter of institutionalized violence into their everyday lives. (In a later essay, Kael wrote of acquaintances who resisted such movies because of their association with the “guilty mess” of the recently concluded war: “There’s a righteousness in their tone when they say they don’t like violence; I get the feeling that I’m being told that my urging them to see The Fury means that I’ll be responsible if there’s another Vietnam.”) And it’s especially striking in this movie, which for much of its length is supposedly about an attempt to escape this cycle of vengeance. Of the two psychic teens, Robyn, played by Andrew Stevens, eventually succumbs to it, while Gillian, played by Amy Irving, fights it for as long as she can. As Kael explains: “Both Gillian and Robyn have the power to zap people with their minds. Gillian is trying to cling to her sanity—she doesn’t want to hurt anyone. And, knowing that her power is out of her conscious control, she’s terrified of her own secret rages.”

And it’s hard for me to read this passage now without connecting it to the ongoing discussion over women’s anger, in which the word “fury” occurs with surprising frequency. Here’s the journalist Rebecca Traister writing in the New York Times, in an essay adapted from her bestselling book Good and Mad:

Fury was a tool to be marshaled by men like Judge Kavanaugh and Senator Graham, in defense of their own claims to political, legal, public power. Fury was a weapon that had not been made available to the woman who had reason to question those claims…Most of the time, female anger is discouraged, repressed, ignored, swallowed. Or transformed into something more palatable, and less recognizable as fury—something like tears. When women are truly livid, they often weep…This political moment has provoked a period in which more and more women have been in no mood to dress their fury up as anything other than raw and burning rage.

Traister’s article was headlined: “Fury Is a Political Weapon. And Women Need to Wield It.” And if you were so inclined, you could take The Fury as an extended metaphor for the issue that Casey Cep raises in her recent roundup of books on the subject in The New Yorker: “A major problem with anger is that some people are allowed to express it while others are not.” In the film, Gillian spends most of her time resisting her violent urges, while her male psychic twin gives in to them, and the climax—which is the only scene that most viewers remember—hinges on her embrace of the rage that Robyn passed to her at the moment of his death.

This brings us to Childress, the villain played by John Cassavetes, whose demise Kael hyperbolically describes as “the greatest finish for any villain ever.” A few paragraphs earlier, Kael writes of this scene:

This is where De Palma shows his evil grin, because we are implicated in this murderousness: we want it, just as we wanted to see the bitchy Chris get hers in Carrie. Cassavetes is an ideal villain (as he was in Rosemary’s Baby)—sullenly indifferent to anything but his own interests. He’s so right for Childress that one regrets that there wasn’t a real writer around to match his gloomy, viscous nastiness.

“Gloomy, viscous nastiness” might ring a bell today, and Childress’s death—Gillian literally blows him up with her mind—feels like the embodiment of our impulses for punishment, revenge, and retribution. It’s stunning how quickly the movie discards Gillian’s entire character arc for the sake of this moment, but what makes the ending truly memorable is what happens next, which is nothing. Childress explodes, and the film just ends, because it has nothing left to show us. That works well enough in a movie, but in real life, we have to face the problem of what Brittney Cooper, whose new book explicitly calls rage a superpower, sums up as “what kind of world we want to see, not just what kind of things we want to get rid of.” In her article in The New Yorker, Cep refers to the philosopher and classicist Martha Nussbaum’s treatment of the Furies themselves, who are transformed at the end of the Oresteia into the Eumenides, “beautiful creatures that serve justice rather than pursue cruelty.” It isn’t clear how this transformation takes place, and De Palma, typically, sidesteps it entirely. But if we can’t imagine anything beyond cathartic vengeance, we’re left with an ending closer to what Kael writes of Dressed to Kill: “The spell isn’t broken and [De Palma] doesn’t fully resolve our fear. He’s saying that even after the horror has been explained, it stays with you—the nightmare never ends.”

Written by nevalalee

October 30, 2018 at 9:24 am

The technical review

leave a comment »

One of my favorite works of science fiction, if we define the term as broadly as possible, is Space Colonies, a collection of articles and interviews edited by Stewart Brand that was published in 1977. The year seems significant in itself. It was a period in which Star Trek and Dune—both of which were obviously part of the main sequence of stories inaugurated by John W. Campbell at Astounding—had moved the genre decisively into the mainstream. After the climax of the moon landing, the space race seemed to be winding down, or settling into a groove without a clear destination, and the public was growing restless. (As Norman Mailer said a few years earlier on the Voyage Beyond Apollo cruise, people were starting to view space with indifference or hostility, rather than as a form of adventure.) It was a time in which the environmental movement, the rise of the computer culture, and the political climate of the San Francisco Bay Area were interacting in ways that can seem hard to remember now. In retrospect, it feels like the perfect time for the emergence of Gerard O’Neill, whose ideas about space colonies received widespread attention in just about the only window that would have allowed them to take hold. During the preparation and editing of Space Colonies, which was followed shortly afterward by O’Neill’s book The High Frontier, another cultural phenomenon was beginning to divert some of those energies along very different lines. And while I can’t say for sure, I suspect that the reception of his work, or at least the way that people talked about it, would have been rather different if it had entered the conversation after Star Wars.

As it turned out, the timing was just right for a wide range of unusually interesting people to earnestly debate the prospect of space colonization. In his introduction to Space Colonies, which consists mostly of material that had previously appeared in CoEvolution Quarterly, Brand notes that “no one else has published the highly intelligent attacks” that O’Neill had inspired, and by far the most interesting parts of the book are the sections devoted to this heated debate. Brand writes:

Something about O’Neill’s dream has cut deep. Nothing we’ve run in The CQ has brought so much response or opinions so fierce and unpredictable and at times ambivalent. It seems to be a paradigmatic question to ask if we should move massively into space. In addressing that we’re addressing our most fundamental conflicting perceptions of ourself, of the planetary civilization we’ve got under way. From the perspective of space colonies everything looks different. Choices we’ve already made have to be made again, because changed context changes content. Artificial vs. Natural, Let vs. Control, Local vs. Centralized, Dream vs. Obey—all are re-jumbled. And space colonies aren’t even really new. That’s part of their force—they’re so damned inherent in what we’ve been about for so long. But the shift seems enormous, and terrifying or inspiring to scale. Hello, stars. Goodbye, earth? Is this the longed-for metamorphosis, our brilliant wings at last, or the most poisonous of panaceas?

And what makes the book most striking today are those responses themselves: passionate opinions on space colonies, both positive and negative, from some very smart respondents who thought that the idea was worth taking seriously.

Leafing through the book now, I feel a strange kind of double awareness, as names that I associate with the counterculture of the late seventies argue about a future that never happened. It leads off with a great line from Ken Kesey: “A lot of people who want to get into space never got into the earth.” (This echoes one of my favorite observations from Robert Anton Wilson, quoting Brad Steiger: “The lunatic asylums are full of people who naively set out to study the occult before they had any real competence in dealing with the ordinary.”) The great Lewis Mumford dismisses space colonies as “another pathological manifestation of the culture that has spent all of its resources on expanding the nuclear means for exterminating the human race.” But the most resonant critical comment on the whole enterprise comes from the poet Wendell Berry:

What cannot be doubted is that the project is an ideal solution to the moral dilemma of all those in this society who cannot face the necessities of meaningful change. It is superbly attuned to the wishes of the corporation executives, bureaucrats, militarists, political operators, and scientific experts who are the chief beneficiaries of the forces that have produced our crisis. For what is remarkable about Mr. O’Neill’s project is not its novelty or its adventurousness, but its conventionality. If it should be implemented, it will be the rebirth of the idea of Progress with all its old lust for unrestrained expansion, its totalitarian concentrations of energy and wealth, its obliviousness to the concerns of character and community, its exclusive reliance on technical and economic criteria, its disinterest in consequence, its contempt for human value, its compulsive salesmanship.

And another line from Berry has been echoing in my head all morning: “It is only a desperate attempt to revitalize the thug morality of the technological specialist, by which we blandly assume that we must do anything whatever that we can do.”

What interests me the most about his response, which you can read in its entirety here, is that it also works as a criticism of many of the recent proposals to address climate change—which may be the one place where the grand scientific visions of the late seventies actually come to pass, if only because we won’t have a choice. Berry continues:

This brings me to the central weakness of Mr. O’Neill’s case: its shallow and gullible morality. Space colonization is seen as a solution to problems that are inherently moral, in that they are implicit in our present definitions of character and community. And yet here is a solution to moral problems that contemplates no moral change and subjects itself to no moral standard. Indeed, the solution is based upon the moral despair of Mr. O’Neill’s assertion that “people do not change.” The only standards of judgment that have been applied to this project are technical and economic. Much is made of the fact that the planners’ studies “continue to survive technical review.” But there is no human abomination that has not, or could not have, survived technical review.

Replace “space colonization” with “geoengineering,” and you have a paragraph that could be published today. (My one modification would be to revise Berry’s description of the morality of the technical specialist, which has subtly evolved into “we can do anything whatever that we must do.”) In a recent article in The New Yorker, Elizabeth Kolbert throws up her hands when it comes to the problem of how to discuss the environment without succumbing to despair. After quoting the scientist Peter Wadhams on the need for “technologies to block sunlight, or change the reflectivity of clouds,” she writes: “Apparently, this is supposed to count as inspirational.” Yet the debate still needs to happen, and Space Colonies is the best model I’ve found for this sort of technical review, which has to involve voices of all kinds. Because it turns out that we were living on a space colony all along.

The slow road to the stars

with 6 comments

In the 1980 edition of The Whole Earth Catalog, which is one of the two or three books that I’d bring with me to a desert island, or to the moon, the editor Stewart Brand devotes three pages toward the beginning to the subject of space colonies. Most of the section is taken up by an essay, “The Sky Starts at Your Feet,” in which Brand relates why he took such an interest in an idea that seemed far removed from the hippie concerns with which his book—fairly or not—had always been associated. And his explanation is a fascinating one:

What got me interested in space colonies a few years ago was a chance remark by a grade school teacher. She said that most of her kids expected to live in space. All their lives they’d been seeing Star Trek and American and Russian space activities and drew the obvious conclusions. Suddenly I felt out of it. A generation that grew up with space, I realized, was going to lead to another generation growing up in space. Where did that leave me?

On the next page, Brand draws an even more explicit connection between space colonization and the rise of science fiction in the mainstream: “Most science fiction readers—there are estimated to be two million avid ones in the U.S.—are between the ages of 12 and 26. The first printing for a set of Star Trek blueprints and space cadet manual was 450,000. A Star Trek convention in Chicago drew 15,000 people, and a second one a few weeks later drew 30,000. They invited NASA officials and jammed their lectures.”

This sense of a growing movement left a huge impression on Brand, whose career as an activist had started with a successful campaign to get NASA to release the first picture of the whole earth taken from space. He concludes: “For these kids there’s been a change in scope. They can hold the oceans of the world comfortably in their minds, like large lakes. Space is the ocean now.” And he clearly understands that his real challenge will be to persuade a slightly older cohort of “liberals and environmentalists”—his own generation—to sign on. In typical fashion, Brand doesn’t stress just the practical side, but the new modes of life and thought that space colonization would require. Here’s my favorite passage:

In deemphasizing the exotic qualities of life in space [Gerard] O’Neill is making a mistake I think. People want to go not because it may be nicer than what they have on earth but because it will be harder. The harshness of space will oblige a life-and-death reliance on each other which is the sort of thing that people romanticize and think about endlessly but seldom get to do. This is where I look for new cultural ideas to emerge. There’s nothing like an impossible task to pare things down to essentials—from which comes originality. You can only start over from basics, and, once there, never quite in the same direction as before.

Brand also argues that the colonization project is “so big and so slow and so engrossing” that it will force the rest of civilization to take everything more deliberately: “If you want to inhabit a moon of Jupiter—that’s a reasonable dream now—one of the skills you must cultivate is patience. It’s not like a TV set or a better job—apparently cajolable from a quick politician. Your access to Jupiter has to be won—at its pace—from a difficult solar system.”

And the seemingly paradoxical notion of slowing down the pace of society is a big part of why Brand was so drawn to O’Neill’s vision of space colonies. Brand had lived through a particularly traumatic period in what the business writer Peter Drucker called “the age of discontinuity,” and he expressed strong reservations about the headlong rush of societal change:

The shocks of this age are the shocks of pace. Change accelerates around us so rapidly that we are strangers to our own pasts and even more to our futures. Gregory Bateson comments, “I think we could have handled the industrial revolution, given five hundred years.” In one hundred years we have assuredly not handled it…I feel serene when I can comfortably encompass two weeks ahead. That’s a pathological condition.

Brand’s misgivings are remarkably similar to what John W. Campbell was writing in Astounding in the late thirties: “The conditions [man] tries to adjust to are going to change, and change so darned fast that he never will actually adjust to a given set of conditions. He’ll have to adjust in a different way: he’ll adjust to an environment of change.” Both Brand and Campbell also believed, in the words of the former, that dealing with this challenge would somehow involve “the move of some of humanity into space.” It would force society as a whole to slow down, in a temporal equivalent of the spatial shift in perspective that environmentalists hoped would emerge from the first photos of the whole earth. Brand speaks of it as a project on the religious scale, and he closes: “Space exploration is grounded firmly on the abyss. Space is so impossible an environment for us soft, moist creatures that even with our vaulting abstractions we will have to move carefully, ponderously into that dazzling vacuum. The stars can’t be rushed. Whew, that’s a relief.”

Four decades later, it seems clear that the movement that Brand envisioned never quite materialized, although it also never really went away. Part of this has to do with the fact that many members of the core audience of The Whole Earth Catalog turned out to be surprisingly hostile to the idea. (Tomorrow, I’ll be taking a look at Space Colonies, a special issue of the magazine CoEvolution Quarterly that captures some of the controversy.) But the argument for space colonization as a means of applying the brakes to the relentless movement of civilization seems worth reviving, simply because it feels so counterintuitive. It certainly doesn’t seem like part of the conversation now. We’ve never gotten rid of the term “space race,” which is more likely to be applied these days to the perceived competition between private companies, as in a recent article in The New Yorker, in which Nicholas Schmidle speaks of Blue Origin, SpaceX, and Virgin Galactic as three startups “racing to build and test manned rockets.” When you privatize space, the language that you use to describe it inevitably changes, along with the philosophical challenges that it evokes. A recent book on the subject is titled The Space Barons: Elon Musk, Jeff Bezos, and the Quest to Colonize the Cosmos, which returns to the colonial terminology that early opponents of O’Neill’s ideas found so repellent. The new space race seems unlikely to generate the broader cultural shift that Brand envisioned, largely because we’ve outsourced it to charismatic billionaires who seem unlikely to take anything slowly. But perhaps even the space barons themselves can sense the problem. In the years since he wrote “The Sky Starts at Your Feet,” Brand has moved on to other causes to express the need for mankind to take a longer view. The most elegant and evocative is the Clock of the Long Now, which is designed to keep time for the next ten thousand years. After years of development, it finally seems to be coming together, with millions of dollars of funding from a billionaire who will house it on land that he owns in Texas. His name is Jeff Bezos.

The chosen ones

with one comment

In his recent New Yorker profile of Mark Zuckerberg, Evan Osnos quotes one of the Facebook founder’s close friends: “I think Mark has always seen himself as a man of history, someone who is destined to be great, and I mean that in the broadest sense of the term.” Zuckerberg has “a teleological frame of feeling almost chosen,” and in his case, it happened to be correct. Yet this tells us almost nothing about Zuckerberg himself, because I can safely say that most other undergraduates at Harvard feel the same way. A writer for The Simpsons once claimed that the show had so many presidential jokes—like the one about Grover Cleveland spanking Grandpa “on two non-consecutive occasions”—because most of the writers secretly once thought that they would be president themselves, and he had a point. It’s very hard to do anything interesting in life without the certainty that you’re somehow one of the chosen ones, even if your estimation of yourself turns out to be wildly off the mark. (When I was in my twenties, my favorite point of comparison was Napoleon, while Zuckerberg seems to be more fond of Augustus: “You have all these good and bad and complex figures. I think Augustus is one of the most fascinating. Basically, through a really harsh approach, he established two hundred years of world peace.”) This kind of conviction is necessary for success, although hardly sufficient. The first human beings to walk on Mars may have already been born. Deep down, they know it, and this knowledge will determine their decisions for the rest of their lives. Of course, thousands of others “know” it, too. And just a few of them will turn out to be right.

One of my persistent themes on this blog is how we tend to confuse talent with luck, or, more generally, to underestimate the role that chance plays in success or failure. I never tire of quoting the psychologist Daniel Kahneman, who in Thinking, Fast and Slow shares what he calls his favorite equation:

Success = Talent + Luck
Great Success = A little more talent + A lot of luck

The truth of this statement seems incontestable. Yet we’re all reluctant to acknowledge its power in our own lives, and this tendency only increases as the roles played by luck and privilege assume a greater importance. This week has been bracketed by news stories about two men who embody this attitude at its most extreme. On the one hand, you have Brett Kavanaugh, a Yale legacy student who seems unable to recognize that his drinking and his professional success weren’t mutually exclusive, but closer to the opposite. He occupied a cultural and social stratum that gave him the chance to screw up repeatedly without lasting consequences, and we’re about to learn how far that privilege truly extends. On the other hand, you have yesterday’s New York Times exposé of Donald Trump, who took hundreds of millions of dollars from his father’s real estate empire—often in the form of bailouts for his own failed investments—while constantly describing himself as a self-made billionaire. This is hardly surprising, but it’s still striking to see the extent to which Fred Trump played along with his son’s story. He understood the value of that myth.

This gets at an important point about privilege, no matter which form it takes. We have a way of visualizing these matters in spatial terms—”upper class,” “lower class,” “class pyramid,” “rising,” “falling,” or “stratum” in the sense that I used it above. But true privilege isn’t spatial, but temporal. It unfolds over time, by giving its beneficiaries more opportunities to fail and recover, when those living at the edge might not be able to come back from the slightest misstep. We like to say that a privileged person is someone who was born on third base and thinks he hit a triple, but it’s more like being granted unlimited turns at bat. Kavanaugh provides a vivid reminder, in case we needed one, that a man who fits a certain profile has the freedom to make all kinds of mistakes, the smallest of which would be fatal for someone who didn’t look like he did. And this doesn’t just apply to drunken misbehavior, criminal or otherwise, but even to the legitimate failures that are necessary for the vast majority of us to achieve real success. When you come from the right background, it’s easier to survive for long enough to benefit from the effects of luck, which influences the way that we talk about failure itself. Silicon Valley speaks of “failing faster,” which only makes sense when the price of failure is humiliation or the loss of investment capital, not falling permanently out of the middle class. And as I’ve noted before, Pixar’s creative philosophy, which Andrew Stanton described as a process in which “the films still suck for three out of the four years it takes to make them,” is only practicable for filmmakers who look and sound like their counterparts at the top, which grants them the necessary creative freedom to fail repeatedly—a luxury that women are rarely granted.

This may all come across as unbelievably depressing, but there’s a silver lining, and it took me years to figure it out. The odds of succeeding in any creative field—which includes nearly everything in which the standard career path isn’t clearly marked—are minuscule. Few who try will ever make it, even if they have “a teleological frame of feeling almost chosen.” This isn’t due to a lack of drive or talent, but of time and second chances. When you combine the absence of any straightforward instructions with the crucial role played by luck, you get a process in which repeated failure over a long period is almost inevitable. Those who drop out don’t suffer from weak nerves, but from the fact that they’ve used up all of their extra lives. Privilege allows you to stay in the game for long enough for the odds to turn in your favor, and if you’ve got it, you may as well use it. (An Ivy League education doesn’t guarantee success, but it drastically increases your ability to stick around in the middle class in the meantime.) In its absence, you can find strategies for minimizing risk in small ways while increasing it on the highest levels, which is just another word for becoming a bohemian. And the big takeaway here is that since the probability of success is already so low, you may as well do exactly what you want. It can be tempting to tailor your work to the market, reasoning that it will increase your chances ever so slightly, but in reality, the difference is infinitesimal. An objective observer would conclude that you’re not going to make it either way, and even if you do, it will take about the same amount of time to succeed by selling out as it would by staying true to yourself. You should still do everything that you can to make the odds more favorable, but if you’re probably going to fail anyway, you might as well do it on your own terms. And that’s the only choice that matters.

Written by nevalalee

October 3, 2018 at 8:59 am

A better place

with 2 comments

Note: Spoilers follow for the first and second seasons of The Good Place.

When I began watching The Good Place, I thought that I already knew most of its secrets. I had missed the entire first season, and I got interested in it mostly due to a single review by Emily Nussbaum of The New Yorker, which might be my favorite piece so far from one of our most interesting critics. Nussbaum has done more than anyone else in the last decade to elevate television criticism into an art in itself, and this article—with its mixture of the critical, personal, and political—displays all her strengths at their best. Writing of the sitcom’s first season finale, which aired the evening before Trump’s inauguration, Nussbaum says: “Many fans, including me, were looking forward to a bit of escapist counterprogramming, something frothy and full of silly puns, in line with the first nine episodes. Instead, what we got was the rare season finale that could legitimately be described as a game-changer, vaulting the show from a daffy screwball comedy to something darker, much stranger, and uncomfortably appropriate for our apocalyptic era.” Following that grabber of an opening, she continues with a concise summary of the show’s complicated premise:

The first episode is about a selfish American jerk, Eleanor (the elfin charmer Kristen Bell), who dies and goes to Heaven, owing to a bureaucratic error. There she is given a soul mate, Chidi (William Jackson Harper), a Senegal-raised moral philosopher. When Chidi discovers that Eleanor is an interloper, he makes an ethical leap, agreeing to help her become a better person…Overseeing it all was Michael, an adorably flustered angel-architect played by Ted Danson; like Leslie Knope, he was a small-town bureaucrat who adored humanity and was desperate to make his flawed community perfect.

There’s a lot more involved, of course, and we haven’t even mentioned most of the other key players. It’s an intriguing setup for a television show, and it might have been enough to get me to watch it on its own. Yet what really caught my attention was Nussbaum’s next paragraph, which includes the kind of glimpse into a critic’s writing life that you only see when emotions run high: “After watching nine episodes, I wrote a first draft of this column based on the notion that the show, with its air of flexible optimism, its undercurrent of uplift, was a nifty dialectical exploration of the nature of decency, a comedy that combined fart jokes with moral depth. Then I watched the finale. After the credits rolled, I had to have a drink.” She then gives away the whole game, which I’m obviously going to do here as well. You’ve been warned:

In the final episode, we learn that it was no bureaucratic mistake that sent Eleanor to Heaven. In fact, she’s not in Heaven at all. She’s in Hell—which is something that Eleanor realizes, in a flash of insight, as the characters bicker, having been forced as a group to choose two of them to be banished to the Bad Place. Michael is no angel, either. He’s a low-ranking devil, a corporate Hell architect out on his first big assignment, overseeing a prankish experimental torture cul-de-sac. The malicious chuckle that Danson unfurls when Eleanor figures it out is both terrifying and hilarious, like a clap of thunder on a sunny day. “Oh, God!” he growls, dropping the mask. “You ruin everything, you know that?”

That’s a legitimately great twist, and when I suggested to my wife—who didn’t know anything about it—that we check it out on Netflix, it was partially so that I could enjoy her surprise at that moment, like a fan of A Song of Ice and Fire eagerly watching an unsuspecting friend during the Red Wedding.

Yet I was the one who really got fooled. The Good Place became my favorite sitcom since Community, and for almost none of the usual reasons. It’s very funny, of course, but I find that the jokes land about half the time, and it settles for what Nussbaum describes as “silly puns” more often than it probably should. Many episodes are closer to freeform comedy—the kind in which the riffs have less to do with context than with whatever the best pitch happened to be in the writers room—than to the clockwork farce to which it ought to aspire. But its flaws don’t really matter. I haven’t been so involved with the characters on a series like this in years, which allows it to take risks and get away with formal experiments that would destroy a lesser show. After the big revelation in the first season finale, it repeatedly blew up its continuity, with Michael resetting the memories of the others and starting over whenever they figured out his plan, but somehow, it didn’t leave me feeling jerked around. This is partially thanks to how the show cleverly conflates narrative time with viewing time, which is one of the great unsung strengths of the medium. (When the second season finally gets on track, these “versions” of the characters have only known one another for a couple of weeks, but every moment is enriched by our memories of their earlier incarnations. It’s a good trick, but it’s not so different from the realization, for example, that all of the plot twists and relationships of the first two seasons of Twin Peaks unfolded over less than a month.) It also speaks to the talent of the cast, which consistently rises to every challenge. And it does a better job of telling a serialized story than any sitcom that I can remember. Even while I was catching up with it, I managed to parcel it out over time, but I can also imagine binging an entire season at one sitting. That’s mostly due to the fact that the writers are masters of structure, if not always at filling the spaces between act breaks, but it’s also because the stakes are literally infinite.

And the stakes apply to all of us. It’s hard to come away from The Good Place without revisiting some of your assumptions about ethics, the afterlife, and what it means to be a good person. (The inevitable release of The Good Place and Philosophy might actually be worth reading.) I’m more aware of how much I’ve internalized the concept of “moral desert,” or the notion that good behavior will be rewarded, which we should all know by now isn’t true. In its own unpretentious way, the series asks its viewers to contemplate the problem of how to live when there might not be a prize awaiting us at the end. It’s the oldest question imaginable, but it seems particularly urgent these days, and the show’s answers are more optimistic than we have any right to expect. Writing just a few weeks after the inauguration, Nussbaum seems to project some of her own despair onto creator Michael Schur:

While I don’t like to read the minds of showrunners—or, rather, I love to, but it’s presumptuous—I suspect that Schur is in a very bad mood these days. If [Parks and Recreation] was a liberal fantasia, The Good Place is a dystopian mindfork: it’s a comedy about the quest to be moral even when the truth gets bent, bullies thrive, and sadism triumphs…Now that his experiment has crashed, [the character of] Michael plans to erase the ensemble’s memories and reboot. The second season—presuming the show is renewed (my mouth to God’s ear)—will start the same scheme from scratch. Michael will make his afterlife Sims suffer, no matter how many rounds it takes.

Yet the second season hinges on an unlikely change of heart. Michael comes to care about his charges—he even tries to help them escape to the real Good Place—and his newfound affection doesn’t seem like another mislead. I’m not sure if I believe it, but I’m still grateful. It isn’t a coincidence that Michael shares his name with the show’s creator, and I’d like to think that Schur ended up with a kinder version of the series than he may have initially envisioned. Like Nussbaum, he tore up the first draft and started over. Life is hard enough as it is, and the miracle of The Good Place is that it takes the darkest view imaginable of human nature, and then it gently hints that we might actually be capable of becoming better.

Written by nevalalee

September 27, 2018 at 8:39 am

The sin of sitzfleisch

leave a comment »

Yesterday, I was reading the new profile of Mark Zuckerberg by Evan Osnos in The New Yorker when I came across one of my favorite words. It appears in a section about Zuckerberg’s wife, Priscilla Chan, who describes her husband’s reaction to the recent controversies that have swirled around Facebook:

When I asked Chan about how Zuckerberg had responded at home to the criticism of the past two years, she talked to me about Sitzfleisch, the German term for sitting and working for long periods of time. “He’d actually sit so long that he froze up his muscles and injured his hip,” she said.

Until now, the term sitzfleisch, which literally means “sitting flesh,” was perhaps most widely known in chess, in which it evokes the kind of stoic, patient endurance capable of winning games by making one plodding move after another, but you sometimes see it in other contexts as well. Just two weeks ago, Paul Joyce, a lecturer in German at Portsmouth University, was quoted in an article by the BBC: “It’s got a positive sense, [it] positively connotes a sense of endurance, reliability, not just flitting from one place to another, but it is also starting to be questioned as to whether it matches the experience of the modern world.” Which makes it all the more striking to hear it applied to Zuckerberg, whose life’s work has been the systematic construction of an online culture that makes such virtues seem obsolete.

The concept of sitzfleisch is popular among writers—Elizabeth Gilbert has a nice blog post on the subject—but it also has its detractors. A few months ago, I posted a quote from Twilight of the Idols in which Friedrich Nietzsche comes out strongly against the idea. Here’s the full passage, which appears in a section of short maxims and aphorisms:

On ne peut penser et écrire qu’assis (G. Flaubert). Now I’ve got you, you nihilist! Sitting still [sitzfleisch] is precisely the sin against the holy ghost. Only thoughts which come from walking have any value.

The line attributed to Flaubert, which can be translated as “One can think and write only when sitting down,” appears to come from a biographical sketch by Guy de Maupassant. When you read it in context, you can see why it irritated Nietzsche:

From his early infancy, the two distinctive traits of [Flaubert’s] nature were great ingenuousness and a dislike of physical action. All his life he remained ingenuous and sedentary. He could not see any one walking or moving about near him without becoming exasperated; and he would declare in his sharp voice, sonorous and always a little theatrical, that motion was not philosophical. “One can think and write only when seated,” he would say.

On some level, Nietzsche’s attack on sitzfleisch feels like a reaction against his own inescapable habits—he can hardly have written any of his books without the ability to sit in solitude for long periods of time. I’ve noted elsewhere that the creative life has to be conducted both while seated and while engaging in other activities, and that your course of action at any given moment can be guided by whether or not you happen to be sitting down. And it can be hard to strike the right balance. We have to spend time at a desk in order to write, but we often think better by walking, going outside, and pointedly not checking Facebook. In the recent book Nietzsche and Montaigne, the scholar Robert Miner writes:

Both Montaigne and Nietzsche strongly favor mobility over sedentariness. Montaigne is a “sworn enemy” of “assiduity (assiduité)” who goes “mostly on horseback, where my thoughts range most widely.” Nietzsche too finds that “assiduity (Sitzfleisch) is the sin against the Holy Spirit” but favors walking rather than riding. As Dahlkvist observes, Nietzsche may have been inspired by Beethoven’s habit of walking while composing, which he knew about from his reading of Henri Joly’s Psychologie des grand hommes.

That’s possible, but it also reflects the personal experience of any writer, who is often painfully aware of the contradiction of trying to say something about life while spending most of one’s time alone.

And Nietzsche’s choice of words is also revealing. In describing sitzfleisch as a sin against the Holy Ghost, he might have just been looking for a colorful phrase, or making a pun on a “sin of the flesh,” but I suspect that it went deeper. In Catholic dogma, a sin against the Holy Ghost is specifically one of “certain malice,” in which the sinner acts on purpose, repeatedly, and in full knowledge of his or her crime. Nietzsche, who was familiar with Thomas Aquinas, might have been thinking of what the Summa Theologica has to say on the subject:

Augustine, however…says that blasphemy or the sin against the Holy Ghost, is final impenitence when, namely, a man perseveres in mortal sin until death, and that it is not confined to utterance by word of mouth, but extends to words in thought and deed, not to one word only, but to many…Hence they say that when a man sins through weakness, it is a sin “against the Father”; that when he sins through ignorance, it is a sin “against the Son”; and that when he sins through certain malice, i.e. through the very choosing of evil…it is a sin “against the Holy Ghost.”

Sitzfleisch, in short, is the sin of those who should know better. It’s the special province of philosophers, who know exactly how badly they fall short of ordinary human standards, but who have no choice if they intend to publish “not one word only, but many.” Solitary work is unhealthy, even inhuman, but it can hardly be avoided if you want to write Twilight of the Idols. As Nietzsche notes elsewhere in the same book: “To live alone you must be an animal or a god—says Aristotle. He left out the third case: you must be both—a philosopher.”

The electric dream

with 4 comments

There’s no doubt who got me off originally and that was A.E. van Vogt…The basic thing is, how frightened are you of chaos? And how happy are you with order? Van Vogt influenced me so much because he made me appreciate a mysterious chaotic quality in the universe that is not to be feared.

—Philip K. Dick, in an interview with Vertex

I recently finished reading I Am Alive and You Are Dead, the French author Emmanuel Carrère’s novelistic biography of Philip K. Dick. In an article last year about Carrère’s work, James Wood of The New Yorker called it “fantastically engaging,” noting: “There are no references and very few named sources, yet the material appears to rely on the established record, and is clearly built from the same archival labor that a conventional biographer would perform.” It’s very readable, and it’s one of the few such biographies—along with James Tiptree, Jr. by Julie Phillips and a certain upcoming book—aimed at an intelligent audience outside the fan community. Dick’s life also feels relevant now in ways that we might not have anticipated two decades ago, when the book was first published in France. He’s never been as central to me as he has been for many other readers, mostly because of the accidents of my reading life, and I’ve only read a handful of his novels and stories. I’m frankly more drawn to his acquaintance and occasional correspondent Robert Anton Wilson, who ventured into some of the same dark places and returned with his sanity more or less intact. (One notable difference between the two is that Wilson was a more prolific experimenter with psychedelic drugs, which Dick, apart from one experience with LSD, appears to have avoided.) But no other writer, with one notable exception that I’ll mention below, has done a better job of forcing us to confront the possibility that our understanding of the world might be fatally flawed. And it’s quite possible that he serves as a better guide to the future than any of the more rational writers who populated the pages of Astounding.

What deserves to be remembered about Dick, though, is that he loved the science fiction of the golden age, and he’s part of an unbroken chain of influence that goes back to the earliest days of the pulps. In I Am Alive and You Are Dead, Carrère writes of Dick as a young boy: “He collected illustrated magazines with titles like Astounding and Amazing and Unknown, and these periodicals, in the guise of serious scientific discussion, introduced him to lost continents, haunted pyramids, ships that vanished mysteriously in the Sargasso Sea.” (Carrère, weirdly, puts a superfluous exclamation point at the end of the titles of all these magazines, which I’ve silently removed in these quotations.) Dick continued to collect pulps throughout his life, keeping the most valuable issues in a fireproof safe at his house in San Rafael, California, which was later blown open in a mysterious burglary. Throughout his career, Dick refers casually to classic stories with an easy familiarity that suggests a deep knowledge of the genre, as in a line from his Exegesis, in which he mentions “that C.L. Moore novelette in Astounding about the two alternative futures hinging on which of two girls the guy marries in the present.” But the most revealing connection lies in plain sight. In a section on Dick’s early efforts in science fiction, Carrère writes:

Stories about little green men and flying saucers…were what he was paid to write, and the most they offered in terms of literary recognition was comparison to someone like A.E. van Vogt, a writer with whom Phil had once been photographed at a science fiction convention. The photo appeared in a fanzine above the caption “The Old and the New.”

Carrère persistently dismisses van Vogt as a writer of “space opera,” which might be technically true, though hardly the whole story. Yet he was also the most convincing precursor that Dick ever had. The World of Null-A may be stylistically cruder than Dick at his best, but it also appeared in Astounding in 1945, and it remains so hallucinatory, weird, and undefinable that I still have trouble believing that it was read by twelve-year-olds. (As Dick once said of it in an interview: “All the parts of that book do not add up; all the ingredients did not make a coherency. Now some people are put off by that. They think it’s sloppy and wrong, but the thing that fascinated me so much was that this resembled reality more than anybody else’s writing inside or outside science fiction.”) Once you see the almost apostolic line of succession from van Vogt to Alfred Bester to Dick, the latter seems less like an anomaly within the genre than like an inextricable part of its fabric. Although he only sold one short story, “Impostor,” to John W. Campbell, Dick continued to submit to him for years, before concluding that it wasn’t the best use of his time. As Eric Leif Davin recounts in Partners in Wonder: “[Dick] said he’d rather write several first-draft stories for one cent a word than spend time revising a single story for Campbell, despite the higher pay.” And Dick recalled in his collection The Minority Report:

Horace Gold at Galaxy liked my writing whereas John W. Campbell, Jr. at Astounding considered my writing not only worthless but as he put it, “Nuts.” By and large I liked reading Galaxy because it had the broadest range of ideas, venturing into the soft sciences such as sociology and psychology, at a time when Campbell (as he once wrote me!) considered psionics a necessary premise for science fiction. Also, Campbell said, the psionic character in the story had to be in charge of what was going on.

As a result, the two men never worked closely together, although Dick had surprising affinities with the editor who believed wholeheartedly in psionics, precognition, and genetic memory, and whose magazine never ceased to play a central role in his inner life. In his biography, Carrère provides an embellished version of a recurring dream that Dick had at the age of twelve, “in which he found himself in a bookstore trying to locate an issue of Astounding that would complete his collection.” As Dick describes it in his autobiographical novel VALIS:

In the dream he again was a child, searching dusty used-book stores for rare old science fiction magazines, in particular Astoundings. In the dream he had looked through countless tattered issues, stacks upon stacks, for the priceless serial entitled “The Empire Never Ended.” If he could find it and read it he would know everything; that had been the burden of the dream.

Years later, the phrase “the empire never ended” became central to Dick’s late conviction that we were all living, without our knowledge, in the Rome of the Acts of the Apostles. But the detail that sticks with me the most is that the magazines in the dream were “in particular Astoundings.” The fan Peter Graham famously said that the real golden age of science fiction was twelve, and Dick reached that age at the end of 1940, at the peak of Campbell’s editorship. The timing was perfect for Astounding to rewire his brain forever. When Dick first had his recurring dream, he would have just finished reading a “priceless serial” that had appeared in the previous four issues of the magazine, and I’d like to think that he spent the rest of his life searching for its inconceivable conclusion. It was van Vogt’s Slan.

My ten creative books #10: A Guide for the Perplexed

with 4 comments

Note: I’m counting down ten books that have influenced the way that I think about the creative process, in order of the publication dates of their first editions. It’s a very personal list that reflects my own tastes and idiosyncrasies, and I’m always looking for new recommendations. You can find the earlier installments here.

As regular readers know, I’m a Werner Herzog fan, but not a completist—I’ve seen maybe five of his features and three or four of his documentaries, which leaves a lot of unexplored territory, and I’m not ashamed to admit that Woyzeck put me to sleep. Yet Herzog himself is endlessly fascinating. Daniel Zalewski’s account of the making of Rescue Dawn is one of my five favorite articles ever to appear in The New Yorker, and if you’re looking for an introduction to his mystique, there’s no better place to start. For a deeper dive, you can turn to A Guide for the Perplexed, an expanded version of a collection of the director’s interviews with Paul Cronin, which was originally published more than a decade ago. As I’ve said here before, I regret the fact that I didn’t pick up the first edition when I had the chance, and I feel that my life would have been subtly different if I had. Not only is it the first book I’d recommend to anyone considering a career in filmmaking, it’s almost the first book I’d recommend to anyone considering a career in anything at all. It’s huge, but every paragraph explodes with insight, and you can open it to any page and find yourself immediately transfixed. Here’s one passage picked at random:

Learn to live with your mistakes. Study the law and scrutinize contracts. Expand your knowledge and understanding of music and literature, old and modern. Keep your eyes open. That roll of unexposed celluloid you have in your hand might be the last in existence, so do something impressive with it. There is never an excuse not to finish a film. Carry bolt cutters everywhere.

Or take Herzog’s description of his relationship with his cinematographer: “Peter Zeitlinger is always trying to sneak ‘beautiful’ shots into our films, and I’m forever preventing it…Things are more problematic when there is a spectacular sunset on the horizon and he scrambles to set up the camera to film it. I immediately turn the tripod 180 degrees in the other direction.”

And this doesn’t even touch on Herzog’s stories, which are inexhaustible. He provides his own point of view on many famous anecdotes, like the time he was shot on camera while being interviewed by the BBC—the bullet was stopped by a catalog in his jacket pocket, and he asked to keep going—or how he discouraged Klaus Kinski from abandoning the production of Aguirre: The Wrath of God. (“I told him I had a rifle…and that he would only make it as far as the next bend in the river before he had eight bullets in his head. The ninth would be for me.”) We see Herzog impersonating a veterinarian at the airport to rescue the monkeys that he needed for Aguirre; forging an impressive document over the signature of the president of Peru to gain access to locations for Fitzcarraldo; stealing his first camera; and shooting oil fires in Kuwait under such unforgiving conditions that the microphone began to melt. Herzog is his own best character, and he admits that he can sometimes become “a clown,” but his example is enough to sustain and nourish the rest of us. In On Directing Film, David Mamet writes:

But listen to the difference between the way people talk about films by Werner Herzog and the way they talk about films by Frank Capra, for example. One of them may or may not understand something or other, but the other understands what it is to tell a story, and he wants to tell a story, which is the nature of dramatic art—to tell a story. That’s all it’s good for.

Herzog, believe it or not, would agree, and he recommends Casablanca and The Treasure of the Sierra Madre as examples of great storytelling. And the way in which Herzog's and Capra's reputations have diverged since Mamet wrote those words, over twenty years ago, is illuminating in itself. A Guide for the Perplexed may turn out to be as full of fabrications as Capra's own memoirs, but its inventions, like the staged moments in Herzog's "documentaries," are the kind that get at a deeper truth. As Herzog says of another great dreamer: "The difference between me and Don Quixote is, I deliver."

The living wage

leave a comment »

Over the last few years, we’ve observed an unexpected resurgence of interest in the idea of a universal basic income. The underlying notion is straightforward enough, as Nathan Heller summarizes it in a recent article in The New Yorker:

A universal basic income, or U.B.I., is a fixed income that every adult—rich or poor, working or idle—automatically receives from government. Unlike today’s means-tested or earned benefits, payments are usually the same size, and arrive without request…In the U.S., its supporters generally propose a figure somewhere around a thousand dollars a month: enough to live on—somewhere in America, at least—but not nearly enough to live on well.

This concept—which Heller characterizes as "a government check to boost good times or to guard against starvation in bad ones"—has been around for a long time. As one possible explanation for its current revival, Heller suggests that it amounts to "a futurist reply to the darker side of technological efficiency" as robots replace existing jobs, with prominent proponents including Elon Musk and Richard Branson. And while the present political climate in America may seem unfavorable toward such proposals, it may not stay that way forever. As Annie Lowrey, the author of the new book Give People Money, recently said to Slate: "Now that Donald Trump was elected…people are really ticked off. In the event that there's another recession, I think that the space for policymaking will expand even more radically, so maybe it is a time for just big ideas."

These ideas are certainly big, but they aren’t exactly new, and over the last century, they’ve attracted support from some surprising sources. One early advocate was the young Robert A. Heinlein, who became interested in one such scheme while working on the socialist writer Upton Sinclair’s campaign for the governorship of California in 1934. A decade earlier, a British engineer named C.H. Douglas had outlined a plan called Social Credit, which centered on the notion that the government should provide a universal dividend to increase the purchasing power of individuals. As the Heinlein scholar Robert James writes in his afterword to the novel For Us, the Living:

Heinlein’s version of Social Credit argues that banks constantly used the power of the fractional reserve to profit by manufacturing money out of thin air, by “fiat.” Banks were (and are) required by federal law to keep only a fraction of their total loans on reserve at any time; they could thus manipulate the money supply with impunity…If you took away that power from the banks by ending the fractional reserve system, and instead let the government do the exact same thing for the good of the people, you could permanently resolve the disparities between production and consumption. By simply giving people the amount of money necessary to spring over the gap between available production and the power to consume, you could end the boom and bust business cycle permanently, and free people to pursue their own interests.

And many still argue that a universal basic income could be paid for, at least in part, with fiat currency. As Lowrey writes in her book: "Dollars are not something that the United States government can run out of."
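The mechanism that James describes is, at bottom, the textbook money multiplier: if banks must hold only a fraction of each deposit in reserve, every loan eventually becomes someone else's new deposit, and the banking system as a whole creates a multiple of the original money. Here is a minimal sketch of that arithmetic in Python; the ten percent reserve ratio and the thousand-dollar starting deposit are illustrative assumptions of mine, not figures from Heinlein, Douglas, or James.

```python
# A rough sketch of the money-multiplier arithmetic behind the
# fractional-reserve critique quoted above. The reserve ratio and the
# initial deposit are assumed values chosen only for illustration.

def total_deposits(initial_deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    """Sum the deposits created as each loan is redeposited and re-lent."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lendable fraction becomes a new deposit
    return total

if __name__ == "__main__":
    r = 0.10      # assumed reserve ratio
    d = 1_000.0   # assumed initial deposit
    print(round(total_deposits(d, r)))  # approaches d / r, i.e. about 10,000
```

Under those assumptions, the total approaches the initial deposit divided by the reserve ratio, or roughly ten times the original sum, which is the "money out of thin air" that the Social Credit critique objected to.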

Heinlein addressed these issues at length in For Us, the Living, his first attempt at a novel, which, as I’ve noted elsewhere, miraculously transports a man from the present into the future mostly so that he can be subjected to interminable lectures on monetary theory. Here’s one mercifully short example, which sounds a lot like the version of basic income that you tend to hear today:

Each citizen receives a check for money, or what amounts to the same thing, a credit to each account each month, from the government. He gets this free. The money so received is enough to provide the necessities of life for an adult, or to provide everything that a child needs for its care and development. Everybody gets these checks—man, woman, and child. Nevertheless, practically everyone works pretty regularly and most people have incomes from three or four times to a dozen or more times the income they receive from the government.

Years later, Heinlein reused much of this material in his far superior novel Beyond This Horizon, which also features a man from our time who objects to the new state of affairs: “But the government simply gives away all this new money. That’s rank charity. It’s demoralizing. A man should work for what he gets. But forgetting that aspect for a moment, you can’t run a government that way. A government is just like a business. It can’t be all outgo and no income.” And after he remains unwilling to concede that a government and a business might serve different ends, another character politely suggests that he go see “a corrective semantician.”

At first, it might seem incongruous to hear these views from Heinlein, who later became a libertarian icon, but it isn’t as odd as it looks. For one thing, the basic concept has defenders from across the political spectrum, including the libertarian Charles Murray, who wants to replace the welfare state by giving ten thousand dollars a year directly to the people. And Heinlein’s fundamental priority—the preservation of individual freedom—remained consistent throughout his career, even if the specifics changed dramatically. The system that he proposed in For Us, the Living was meant to free people to do what they wanted with their lives:

Most professional people work regularly because they like to…Some work full time and some part time. Quite a number of people work for several eras and then quit. Some people don’t work at all—not for money at least. They have simple tastes and are content to live on their heritage, philosophers and mathematicians and poets and such. There aren’t many like that however. Most people work at least part of the time.

Twenty years later, Heinlein’s feelings had evolved in response to the Cold War, as he wrote to his brother Rex in 1960: “The central problem of today is no longer individual exploitation but national survival…and I don’t think we will solve it by increasing the minimum wage.” But such a basic income might also serve as a survival tactic in itself. As Heller writes in The New Yorker, depending on one’s point of view, it can either be “a clean, crisp way of replacing gnarled government bureaucracy…[or] a stay against harsh economic pressures now on the horizon.”

Inside the sweatbox

leave a comment »

Yesterday, I watched a remarkable documentary called The Sweatbox, which belongs on the short list of films—along with Hearts of Darkness and the special features for The Lord of the Rings—that I would recommend to anyone who ever thought that it might be fun to work in the movies. It was never officially released, but a copy occasionally surfaces on YouTube, and I strongly suggest watching the version available now before it disappears yet again. For the first thirty minutes or so, it plays like a standard featurette of the sort that you might have found on the second disc of a home video release from two decades ago, which is exactly what it was supposed to be. Its protagonist, improbably, is Sting, who was approached by Disney in the late nineties to compose six songs for a movie titled Kingdom of the Sun. (One of the two directors of the documentary is Sting’s wife, Trudie Styler, a producer whose other credits include Lock, Stock and Two Smoking Barrels and Moon.) The feature was conceived by animator Roger Allers, who was just coming off the enormous success of The Lion King, as a mixture of Peruvian mythology, drama, mysticism, and comedy, with a central plot lifted from The Prince and the Pauper. After two years of production, the work in progress was screened for the first time for studio executives. As always, the atmosphere was tense, but no more than usual, and it inspired the standard amount of black humor from the creative team. As one artist jokes nervously before the screening: “You don’t want them to come in and go, ‘Oh, you know what, we don’t like that idea of the one guy looking like the other guy. Let’s get rid of the basis of the movie.’ This would be a good time for them to tell us.”

Of course, that's exactly what happened. The top brass at Disney hated the movie, production was halted, and Allers left the project that was ultimately retooled into The Emperor's New Groove, which reused much of the design work and finished animation while tossing out entire characters—along with most of Sting's songs—and introducing new ones. It's a story that has fascinated me ever since I first heard about it, around the time of the movie's initial release, and I'm excited beyond words that The Sweatbox even exists. (The title of the documentary, which was later edited down to an innocuous special feature for the DVD, refers to the room at the studio in Burbank in which rough work is screened.) And while the events that it depicts are extraordinary, they represent only an extreme case of the customary process at Disney and Pixar, at least if you believe the way the studio likes to talk about itself. In a profile that ran a while back in The New Yorker, the director Andrew Stanton expressed it in terms that I've never forgotten:

We spent two years with Eve getting shot in her heart battery, and Wall-E giving her his battery, and it never worked. Finally—finally—we realized he should lose his memory instead, and thus his personality…We're in this weird, hermetically sealed freakazoid place where everybody's trying their best to do their best—and the films still suck for three out of the four years it takes to make them.

This statement appeared in print six months before the release of Stanton's live-action debut, John Carter, which implies that this method is far from infallible. And the drama behind The Emperor's New Groove was unprecedented even by the studio's relentless standards. As executive Thomas Schumacher says at one point: "We always say, Oh, this is normal. [But] we've never been through this before."

As it happens, I watched The Sweatbox shortly after reading an autobiographical essay by the artist Cassandra Smolcic about her experiences in the “weird, hermetically sealed freakazoid” environment of Pixar. It’s a long read, but riveting throughout, and it makes it clear that the issues at the studio went far beyond the actions of John Lasseter. And while I could focus on any number of details or anecdotes, I’d like to highlight one section, about the firing of director Brenda Chapman halfway through the production of Brave:

Curious about the downfall of such an accomplished, groundbreaking woman, I began taking the company pulse soon after Brenda’s firing had been announced. To the general population of the studio — many of whom had never worked on Brave because it was not yet in full-steam production — it seemed as though Brenda’s firing was considered justifiable. Rumor had it that she had been indecisive, unconfident and ineffective as a director. But for me and others who worked closely with the second-time director, there was a palpable sense of outrage, disbelief and mourning after Brenda was removed from the film. One artist, who’d been on the Brave story team for years, passionately told me how she didn’t find Brenda to be indecisive at all. Brenda knew exactly what film she was making and was very clear in communicating her vision, the story artist said, and the film she was making was powerful and compelling. “From where I was sitting, the only problem with Brenda and her version of Brave was that it was a story told about a mother and a daughter from a distinctly female lens,” she explained.

Smolcic adds: “During the summer of 2009, I personally worked on Brave while Brenda was still in charge. I likewise never felt that she was uncertain about the kind of film she was making, or how to go about making it.”

There are obvious parallels between what happened to Allers and to Chapman, which might seem to undercut the notion that the latter's firing had anything to do with the fact that she was a woman. But there are a few other points worth raising. One is that no one seems to have applied the words "indecisive, unconfident, and ineffective" to Allers, who voluntarily left the production after his request to push back the release date was denied. And if The Sweatbox is any indication, the situation of women and other historically underrepresented groups at Disney during this period was just as bad as it was at Pixar—I counted exactly one woman who speaks onscreen, for less than fifteen seconds, and all the other faces that we see are white and male. (After Sting expresses concern about the original ending of The Emperor's New Groove, in which the rain forest is cut down to build an amusement park, an avuncular Roy Disney confides to the camera: "We're gonna offend somebody sooner or later. I mean, it's impossible to do anything in the world these days without offending somebody." Which betrays a certain nostalgia for a time when no one, apparently, was offended by anything that the studio might do.) One of the major players in the documentary is Thomas Schumacher, the head of Disney Animation, who has since been accused of "explicit sexual language and harassment in the workplace," according to a report in the Wall Street Journal. In the footage that we see, Schumacher and fellow executive Peter Schneider don't come off particularly well, which may just be a consequence of the perspective from which the story is told. But it's equally clear that the mythical process that allows such movies to "suck" for three out of four years is only practicable for filmmakers who look and sound like their counterparts on the other side of the sweatbox, which grants them the necessary creative freedom to try and fail repeatedly—a luxury that women are rarely afforded. What happened to Allers on Kingdom of the Sun is still astounding. But it might be even more noteworthy that he survived for as long as he did.

The master of time

leave a comment »

I saw Claude Lanzmann's Shoah for the first time seven years ago at the Gene Siskel Film Center in Chicago. Those ten hours amounted to one of the most memorable moviegoing experiences of my life, and Lanzmann, who died yesterday, was among the most intriguing figures in film. "We see him in the corners of some of his shots, a tall, lanky man, informally dressed, chain-smoking," Roger Ebert wrote in his review, and it's in that role—the dogged investigator of the Holocaust, returning years afterward to the scene of the crime—that he'll inevitably be remembered. He willed Shoah into existence at a time when no comparable models for such a project existed, and the undertaking was so massive that it took over the rest of his career, much of which he spent organizing material that had been cut, a process that yielded several more huge documentaries. And the result goes beyond genre. Writing in The New Yorker, Richard Brody observes that Lanzmann's film is "a late flowering of his intellectual and cultural milieu—existentialism and the French New Wave," and he even compares it to Breathless. He also memorably describes the methods that Lanzmann used to interview former Nazis:

The story of the making of Shoah is as exciting as a spy novel…Lanzmann hid [the camera] in a bag with a tiny hole for the lens, and had one of his cameramen point it at an unsuspecting interview subject. He hid a small microphone behind his tie. A van was rigged with video and radio equipment that rendered the stealthy images and sounds on a television set. “What qualms should I have had about misleading Nazis, murderers?” Lanzmann recently told Der Spiegel. “Weren’t the Nazis themselves masters of deception?” He believed that his ruses served the higher good of revealing the truth—and perhaps accomplished symbolic acts of resistance after the fact. As he explained in 1985, “I’m killing them with the camera.”

The result speaks for itself, and it would be overwhelming even if one didn’t know the story of how it was made. (If the world were on fire and I could only save a few reels from the entire history of cinema, one of them would be Lanzmann’s devastating interview of the barber Abraham Bomba.) But it’s worth stressing the contrast between the film’s monumental quality and the subterfuge, tenacity, and cleverness that had to go into making it, which hint at Lanzmann’s secret affinities with someone like Werner Herzog. Brody writes:

The most audacious thing Lanzmann did to complete Shoah was, very simply, to take his time. His initial backers expected him to deliver a two-hour film in eighteen months; his response was to lie—to promise that it would be done as specified, and then to continue working as he saw fit. Lanzmann borrowed money (including from [Simone de] Beauvoir) to keep shooting, and then spent five years obsessively editing his three hundred and fifty hours of footage. He writes that he became the “master of time,” which he considered to be not only an aspect of creative control but also one of aesthetic morality. He sensed that there was just “one right path” to follow, and he set a rule for himself: “I refused to carry on until I had found it, which could take hours or days, on one occasion I am not likely to forget it took three weeks.”

Shoah is like no other movie ever made, but it had to be made just like every other movie, except even more so—which is a fact that all documentarians and aspiring muckrakers should remember. After one interview, Brody writes, “Lanzmann and his assistant were unmasked, attacked, and bloodied by the subject’s son and three young toughs.” Lanzmann spent a month in the hospital and went back to work.

When it finally came out in 1985, the film caused a sensation, but its reach might have been even greater had it appeared three decades later, if only because the way in which we watch documentaries has changed. Lanzmann rightly conceived it as a theatrical release, but today, it would be more likely to play on television or online. Many of us don't think twice about watching a nonfiction series that lasts for nine hours—The Vietnam War was nearly double that length—and Shoah would have become a cultural event. Yet there's also something to be said for the experience of seeing it in a darkened auditorium over the course of a single day. As Ebert put it:

[Lanzmann] uses a…poetic, mosaic approach, moving according to rhythms only he understands among the only three kinds of faces we see in this film: survivors, murderers and bystanders. As their testimony is intercut with the scenes of train tracks, steam engines, abandoned buildings and empty fields, we are left with enough time to think our own thoughts, to meditate, to wonder…After nine hours of Shoah, the Holocaust is no longer a subject, a chapter of history, a phenomenon. It is an environment. It is around us.

That said, I’d encourage viewers to experience it in any form that they can, and there’s no denying that a single marathon session makes unusual demands. At the screening that I attended in Chicago, at least two audience members, after a valiant struggle, had fallen asleep by the end of the movie, which got out after midnight, and as the lights went up, the man in front of me said, “That last segment was too long.” He was probably just tired.

In fact, the final section—on the Warsaw Ghetto Uprising—is essential, and I often think of its central subject, the resistance fighter Simcha Rotem. In May 1943, Rotem attempted a rescue operation to save any survivors who might still be in the ghetto, making his way underground through the sewers, but when he reached the surface, he found no one:

I had to go on through the ghetto. I suddenly heard a woman calling from the ruins. It was darkest night, no lights, you saw nothing. All the houses were in ruins, and I heard only one voice. I thought some evil spell had been cast on me, a woman’s voice talking from the rubble. I circled the ruins. I didn’t look at my watch, but I must have spent half an hour exploring, trying to find the woman whose voice guided me, but unfortunately I didn’t find her.

Rotem, who is still alive today, moved from one bunker to another, shouting his password, and Lanzmann gives him the last words in a film that might seem to resist any ending:

There was still smoke, and that awful smell of charred flesh of people who had surely been burned alive. I continued on my way, going to other bunkers in search of fighting units, but it was the same everywhere…I went from bunker to bunker, and after walking for hours in the ghetto, I went back toward the sewers…I was alone all the time. Except for that woman’s voice, and a man I met as I came out of the sewers, I was alone throughout my tour of the ghetto. I didn’t meet a living soul. At one point I recall feeling a kind of peace, of serenity. I said to myself: “I’m the last Jew. I’ll wait for morning, and for the Germans.”

Written by nevalalee

July 6, 2018 at 8:41 am
