Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘New York Times’

The end of flexibility


A few days ago, I picked up my old paperback copy of Steps to an Ecology of Mind, which collects the major papers of the anthropologist and cyberneticist Gregory Bateson. I’ve been browsing through this dense little volume since I was in my teens, but I’ve never managed to work through it all from beginning to end, and I turned to it recently out of a vague instinct that it was somehow what I needed. (Among other things, I’m hoping to put together a collection of my short stories, and I’m starting to see that many of Bateson’s ideas are relevant to the themes that I’ve explored as a science fiction writer.) I owe my introduction to his work, as with so many other authors, to Stewart Brand of The Whole Earth Catalog, who advised in one edition:

[Bateson] wandered thornily in and out of various disciplines—biology, ethnology, linguistics, epistemology, psychotherapy—and left each of them altered with his passage. Steps to an Ecology of Mind chronicles that journey…In recommending the book I’ve learned to suggest that it be read backwards. Read the broad analyses of mind and ecology at the end of the book and then work back to see where the premises come from.

This always seemed reasonable to me, so when I returned to it last week, I flipped immediately to the final paper, “Ecology and Flexibility in Urban Civilization,” which was first presented in 1970. I must have read it at some point—I’ve quoted from it several times on this blog before—but as I looked over it again, I found that it suddenly seemed remarkably urgent. As I had suspected, it was exactly what I needed to read right now. And its message is far from reassuring.

Bateson’s central point, which seems hard to deny, revolves around the concept of flexibility, or “uncommitted potentiality for change,” which he identifies as a fundamental quality of any healthy civilization. In order to survive, a society has to be able to evolve in response to changing conditions, to the point of rethinking even its most basic values and assumptions. Bateson proposes that any kind of planning for the future include a budget for flexibility itself, which is what enables the system to change in response to pressures that can’t be anticipated in advance. He uses the analogy of an acrobat who moves his arms between different positions of temporary instability in order to remain on the wire, and he notes that a viable civilization organizes itself in ways that allow it to draw on such reserves of flexibility when needed. (One of his prescriptions, incidentally, serves as a powerful argument for diversity as a positive good in its own right: “There shall be diversity in the civilization, not only to accommodate the genetic and experiential diversity of persons, but also to provide the flexibility and ‘preadaptation’ necessary for unpredictable change.”) The trouble is that a system tends to eat up its own flexibility whenever a single variable becomes inflexible, or “uptight,” compared to the rest:

Because the variables are interlinked, to be uptight in respect to one variable commonly means that other variables cannot be changed without pushing the uptight variable. The loss of flexibility spreads throughout the system. In extreme cases, the system will only accept those changes which change the tolerance limits for the uptight variable. For example, an overpopulated society looks for those changes (increased food, new roads, more houses, etc.) which will make the pathological and pathogenic conditions of overpopulation more comfortable. But these ad hoc changes are precisely those which in longer time can lead to more fundamental ecological pathology.

When I consider these lines now, it’s hard for me not to feel deeply unsettled. Writing in the early seventies, Bateson saw overpopulation as the most dangerous source of stress in the global system, and these days, we’re more likely to speak of global warming, resource depletion, and income inequality. Change a few phrases here and there, however, and the situation seems largely the same: “The pathologies of our time may broadly be said to be the accumulated results of this process—the eating up of flexibility in response to stresses of one sort or another…and the refusal to bear with those byproducts of stress…which are the age-old correctives.” Bateson observes, crucially, that the inflexible variables don’t need to be fundamental in themselves—they just need to resist change long enough to become a habit. Once we find it impossible to imagine life without fossil fuels, for example, we become willing to condone all kinds of other disruptions to keep that one hard-programmed variable in place. A civilization naturally tends to expand into any available pocket of flexibility, blowing through the budget that it should have been holding in reserve. The result is a society structured along lines that are manifestly rigid, irrational, indefensible, and seemingly unchangeable. As Bateson puts it grimly:

Civilizations have risen and fallen. A new technology for the exploitation of nature or a new technique for the exploitation of other men permits the rise of a civilization. But each civilization, as it reaches the limits of what can be exploited in that particular way, must eventually fall. The new invention gives elbow room or flexibility, but the using up of that flexibility is death.

And it’s difficult for me to read this today without thinking of all the aspects of our present predicament—political, environmental, social, and economic. Since Bateson sounded his warning half a century ago, we’ve consumed our entire budget of flexibility, largely in response to a single hard-programmed variable that undermined all the other factors that it was meant to sustain. At its best, the free market can be the best imaginable mechanism for ensuring flexibility, by allocating resources more efficiently than any system of central planning ever could. (As one prominent politician recently said to The Atlantic: “I love competition. I want to see every start-up business, everybody who’s got a good idea, have a chance to get in the market and try…Really what excites me about markets is competition. I want to make sure we’ve got a set of rules that lets everybody who’s got a good, competitive idea get in the game.” It was Elizabeth Warren.) When capital is concentrated beyond reason, however, and solely for its own sake, it becomes a weapon that can be used to freeze other cultural variables into place, no matter how much pain it causes. As the anonymous opinion writer indicated in the New York Times last week, it will tolerate a president who demeans the very idea of democracy itself, as long as it gets “effective deregulation, historic tax reform, a more robust military and more,” because it no longer sees any other alternative. And this is where it gets us. For most of my life, I was ready to defend capitalism as the best system available, as long as its worst excesses were kept in check by measures that Bateson dismissively describes as “legally slapping the wrists of encroaching authority.” I know now that these norms were far more fragile than I wanted to acknowledge, and it may be too late to recover. Bateson writes: “Either man is too clever, in which case we are doomed, or he was not clever enough to limit his greed to courses which would not destroy the ongoing total system. I prefer the second hypothesis.” And I do, too. But I no longer really believe it.

The paper of record


One of my favorite conventions in suspense fiction is the trope known as Authentication by Newspaper. It’s the moment in a movie, novel, or television show—and sometimes even in reality—when the kidnapper sends a picture of the victim holding a copy of a recent paper, with the date and headline clearly visible, as a form of proof of life. (You can also use it with piles of illicit cash, to prove that you’re ready to send payment.) The idea frequently pops up in such movies as Midnight Run and Mission: Impossible 2, and it also inspired a classic headline from The Onion: “Report: Majority Of Newspapers Now Purchased By Kidnappers To Prove Date.” It all depends on the fact that a newspaper is a datable object that is widely available and impossible to fake in advance, which means that it can be used to definitively establish the earliest possible day in which an event could have taken place. And you can also use the paper to verify a past date in subtler ways. A few weeks ago, Motherboard had a fascinating article on a time-stamping service called Surety, which provides the equivalent of a dated seal for digital documents. To make it impossible to change the date on one of these files, every week, for more than twenty years, Surety has generated a public hash value from its internal client database and published it in the classified ad section of the New York Times. As the company notes: “This makes it impossible for anyone—including Surety—to backdate timestamps or validate electronic records that were not exact copies of the original.”
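The underlying trick is worth sketching, if only to see how little machinery it requires. The snippet below is a rough illustration of hash-based timestamping in Python, not Surety’s actual system, which links each weekly value to the one before it in a chain; the record contents and the weekly batch are invented for the example. The point is that publishing only a digest in a dated, widely archived venue commits you to the records without revealing them.

```python
# A minimal sketch of hash-based timestamping (illustrative only, not
# Surety's actual implementation, which chains successive weekly values).
import hashlib

def weekly_digest(records: list[bytes]) -> str:
    """Fold a batch of records into a single SHA-256 digest.

    Publishing this value in a dated, widely archived venue (say, a
    classified ad) shows the records existed in this form by that date,
    without revealing what they contain.
    """
    h = hashlib.sha256()
    for record in records:
        h.update(hashlib.sha256(record).digest())  # hash of each record
    return h.hexdigest()

# Hypothetical client records for one week.
records = [b"contract, signed 2018-08-30", b"lab notebook, page 412"]

published = weekly_digest(records)  # this string goes in the newspaper
print(published)

# Later, anyone can verify: recompute from the claimed originals and
# compare against the printed value. A single changed byte fails the check.
assert weekly_digest(records) == published
```

Anything that quietly alters a record after the fact produces a different digest and fails verification, which is the same logic that makes the kidnapper’s newspaper so hard to fake in advance.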

I was reminded of all this yesterday, after the Times posted an anonymous opinion piece titled “I Am Part of the Resistance Inside the Trump Administration.” The essay, which the paper credits to “a senior official,” describes what amounts to a shadow government within the White House devoted to saving the president—and the rest of the country—from his worst impulses. And while the author may prefer to remain nameless, he certainly doesn’t suffer from a lack of humility:

Many of the senior officials in [Trump’s] own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them…It may be cold comfort in this chaotic era, but Americans should know that there are adults in the room. We fully recognize what is happening. And we are trying to do what’s right even when Donald Trump won’t.

The result, he claims, is “a two-track presidency,” with a group of principled advisors doing their best to counteract Trump’s admiration for autocrats and contempt for international relations: “This isn’t the work of the so-called deep state. It’s the work of the steady state.” He even reveals that there was early discussion among cabinet members of using the Twenty-Fifth Amendment to remove Trump from office, although it was scuttled by concern about precipitating a crisis somehow worse than the one in which we’ve found ourselves.

Not surprisingly, the piece has generated a firestorm of speculation about the author’s identity, both online and in the White House itself, which I won’t bother covering here. What interests me are the writer’s reasons for publishing it in the first place. Over the short term, it can only destabilize an already volatile situation, and everyone involved will suffer for it. This implies that the author has a long game in mind, and it had better be pretty compelling. On Twitter, Nate Silver proposed one popular theory: “It seems like the person’s goal is to get outed and secure a very generous advance on a book deal.” He may be right—although if that’s the case, the plan has quickly gone sideways. Reaction on both sides has been far more critical than positive, with Erik Wemple of the Washington Post perhaps putting it best:

Like most anonymous quotes and tracts, this one is a PR stunt. Mr. Senior Administration Official gets to use the distributive power of the New York Times to recast an entire class of federal appointees. No longer are they enablers of a foolish and capricious president. They are now the country’s most precious and valued patriots. In an appearance on Wednesday afternoon, the president pronounced it all a “gutless” exercise. No argument here.

Or as the political blogger Charles P. Pierce says even more savagely in his response on Esquire: “Just shut up and quit.”

But Wemple’s offhand reference to “the distributive power” of the Times makes me think that the real motive is staring us right in the face. It’s a form of Authentication by Newspaper. Let’s say that you’re a senior official in the Trump administration who knows that time is running out. You’re afraid to openly defy the president, but you also want to benefit—or at least to survive—after the ship goes down. In the aftermath, everyone will be scrambling to position themselves for some kind of future career, even though the events of the last few years have left most of them irrevocably tainted. By the time it falls apart, it will be too late to claim that you were gravely concerned. But the solution is a stroke of genius. You plant an anonymous piece in the Times, like the founders of Surety publishing its hash value in the classified ads, except that your platform is vastly more prominent. And you place it there precisely so that you can point to it in the future. After Trump is no longer a threat, you can reveal yourself, with full corroboration from the paper of record, to show that you had the best interests of the country in mind all along. You were one of the good ones. The datestamp is right there. That’s your endgame, no matter how much pain it causes in the meantime. It’s brilliant. But it may not work. As nearly everyone has realized by now, the fact that a “steady state” of conservatives is working to minimize the damage of a Trump presidency to achieve “effective deregulation, historic tax reform, a more robust military and more” is a scandal in itself. This isn’t proof of life. It’s the opposite.

Written by nevalalee

September 6, 2018 at 8:59 am

Critical thinking


When you’re a technology reporter, as my wife was for many years, you quickly find that your subjects have certain expectations about the coverage that you’re supposed to be providing. As Benjamin Wallace wrote a while back in New York magazine:

“A smart young person in the Valley thinks being a reporter is basically being a PR person,” says one tech journalist. “Like, We have news to share, we’d like to come and tell you about it.” Reporters who write favorably about companies receive invitations to things; critics don’t. “They’re very thin-skinned,” says another reporter. “On Wall Street, if you call them a douchebag, they’ve already heard seventeen worse things in the last hour. Here, if you criticize a company, you’re criticizing the spirit of innovation.”

Mike Isaac of the New York Times recently made a similar observation in an interview with Recode: “One of the perceptions [of tech entrepreneurs] is A) Well, the press is slanted against us in some way [and] B) Why aren’t they appreciating how awesome we are? And like all these other things…I think a number of companies, including and especially Uber, get really upset when you don’t recognize the gravitas of their genius and the scope of how much they have changed.” Along the same lines, you also sometimes hear that reporters should be “supporting” local startups—which essentially means any company not based in Silicon Valley or New York—or businesses run by members of traditionally underrepresented groups.

As a result, critical coverage of any kind can be seen as a betrayal. But it isn’t a reporter’s job to “support” anything, whether it’s a city, the interests of particular stakeholders, or the concept of innovation itself—and this applies to much more than just financial journalism. In a perceptive piece for Vox, Alissa Wilkinson notes that similar pressures apply to movie critics. She begins with the example of Ocean’s 8, which Cate Blanchett, one of the film’s stars, complained had been reviewed through a “prism of misunderstanding” by film critics, who are mostly white and male. And Wilkinson responds with what I think is a very important point:

They’re not wrong about the makeup of the pool of critics. And this discussion about the demographic makeup of film critics is laudable and necessary. But the way it’s being framed has less helpful implications: that the people whose opinions really count are those whom the movie is “for.” Not only does that ignore how most movies actually make their money, but it says a lot about Hollywood’s attitude toward criticism, best revealed in Blanchett’s statement. She compared studios’ “support” of a film—which means, essentially, a big marketing budget—with critics’ roles in a film’s success, which she says are a “really big part of the equation.” In that view, critics are mainly useful in how they “support” movies the industry thinks they should like because of the demographic group and audience segment into which they fall.

This has obvious affinities to the attitude that we often see among tech startups, perhaps because they operate under much the same conditions as Hollywood. They’re both risky, volatile fields that depend largely on perception, which is shaped by coverage from a relatively small pool of influencers. It’s true of books as well. And it’s easy for all of them to fall into the trap of assuming that critics who aren’t being supportive somehow aren’t doing their jobs.

But that isn’t true, either. And it’s important to distinguish between the feelings of creators, who can hardly be expected to be objective, and those of outside players with an interest in an enterprise’s success or failure, which can be emotional as much as financial. There are certain movies or startups that many of us want to succeed because of what they say about an entire industry or culture. Black Panther was one, and it earned a reception that exceeded the hopes of even the most fervent fan. A Wrinkle in Time was another, and it didn’t, although I liked that movie a lot. But it isn’t a critic’s responsibility to support a work of art for such reasons. As Wilkinson writes:

Diversifying that pool [of critics] won’t automatically lead to the results the industry might like. Critics who belong to the same demographic group shouldn’t feel as if they need to move in lockstep with a movie simply because someone like them is represented in it, or because the film’s marketing is aimed at them. Women critics shouldn’t feel as if they need to ‘support’ a film telling a woman’s story, any more than men who want to appear to be feminists should. Black and Latinx and Asian critics shouldn’t be expected to love movies about black and Latinx and Asian people as a matter of course.

Wilkinson concludes: “The best reason to diversify criticism is so that when Hollywood puts out movies for women, or movies for people of color, it doesn’t get lazy.” I agree—and I’d add that a more diverse pool of critics would also discourage Hollywood from being lazy when it makes movies for anyone.

Diversity, in criticism as in anything else, is good for the groups directly affected, but it’s equally good for everybody. Writing of Min Jin Lee’s novel Pachinko, the author Eve L. Ewing recently said on Twitter: “Hire Asian-American writers/Korean-American writers/Korean folks with different diasporic experiences to write about Pachinko, be on panels about it, own reviews of it, host online roundtables…And then hire them to write about other books too!” That last sentence is the key. I want to know what Korean-American writers have to say about Pachinko, but I’d be just as interested in their thoughts on, say, Jonathan Franzen’s Purity. And the first step is acknowledging what critics are actually doing, which isn’t supporting particular works of art, advancing a cause, or providing recommendations. It’s writing reviews. When most critics write anything, they’re thinking primarily about the response it will get from readers and how it fits into their career as a whole. You may not like it, but it’s pointless to ignore it, or to argue that critics should be held to a standard that differs from that of anyone else trying to produce decent work. (I suppose that one requirement might be a basic respect or affection for the medium that one is criticizing, but that isn’t true of every critic, either.) Turning to the question of diversity, you find that expanding the range of critical voices is worthwhile in itself, just as it is for any other art form, and regardless of its impact on other works. When a piece of criticism or journalism is judged for its effects beyond its own boundaries, we’re edging closer to propaganda. Making this distinction is harder than it looks, as we’ve recently seen with Elon Musk, who, like Trump, seems to think that negative coverage must be the result of deliberate bias or dishonesty. Even on a more modest level, a call for “support” may seem harmless, but it can easily turn into a belief that you’re either with us or against us. And that would be a critical mistake.

The purity test


Earlier this week, The New York Times Magazine published a profile by Taffy Brodesser-Akner of the novelist Jonathan Franzen. It’s full of fascinating moments, including a remarkable one that seems to have happened entirely by accident—the reporter was in the room when Franzen received a pair of phone calls, including one from Daniel Craig, to inform him that production had halted on the television adaptation of his novel Purity. Brodesser-Akner writes: “Franzen sat down and blinked a few times.” That sounds about right to me. And the paragraph that follows gets at something crucial about the writing life, in which the necessity of solitary work clashes with the pressure to put its fruits at the mercy of the market:

He should have known. He should have known that the bigger the production—the more people you involve, the more hands the thing goes through—the more likely that it will never see the light of day resembling the thing you set out to make in the first place. That’s the real problem with adaptation, even once you decide you’re all in. It just involves too many people. When he writes a book, he makes sure it’s intact from his original vision of it. He sends it to his editor, and he either makes the changes that are suggested or he doesn’t. The thing that we then see on shelves is exactly the thing he set out to make. That might be the only way to do this. Yes, writing a novel—you alone in a room with your own thoughts—might be the only way to get a maximal kind of satisfaction from your creative efforts. All the other ways can break your heart.

To be fair, Franzen’s status is an unusual one, and even successful novelists aren’t always in the position of taking for granted the publication of “exactly the thing he set out to make.” (In practice, it’s close to all or nothing. In my experience, the novel that you see on store shelves mostly reflects what the writer wanted, while the ones in which the vision clashes with those of other stakeholders in the process generally don’t get published at all.) And I don’t think I’m alone when I say that some of the most interesting details that Brodesser-Akner provides are financial. A certain decorum still surrounds the reporting of sales figures in the literary world, so there’s a certain frisson in seeing them laid out like this:

And, well, sales of his novels have decreased since The Corrections was published in 2001. That book, about a Midwestern family enduring personal crises, has sold 1.6 million copies to date. Freedom, which was called a “masterpiece” in the first paragraph of its New York Times review, has sold 1.15 million since it was published in 2010. And 2015’s Purity, his novel about a young woman’s search for her father and the story of that father and the people he knew, has sold only 255,476.

For most writers, selling a quarter of a million copies of any book would exceed their wildest dreams. Having written one of the greatest outliers of the last twenty years, Franzen is simply reverting to a very exalted mean. But there’s still a lot to unpack here.

For one thing, while Purity was a commercial disappointment, it doesn’t seem to have been an unambiguous disaster. According to Publishers Weekly, its first printing—which is where you can see a publisher calibrating its expectations—came to around 350,000 copies, which wasn’t even the largest print run for that month. (That honor went to David Lagercrantz’s The Girl in the Spider’s Web, which had half a million copies, while a new novel by the likes of John Grisham can run to over a million.) I don’t know what Franzen was paid in advance, but the loss must have fallen well short of a book like Tom Wolfe’s Back to Blood, for which he received $7 million and sold 62,000 copies, meaning that his publisher paid over a hundred dollars for every copy that someone actually bought. And any financial hit would have been modest compared to the prestige of keeping a major novelist on one’s list, which is unquantifiable, but no less real. If there’s one thing that I’ve learned about publishing over the last decade, it’s that it’s a lot like the movie industry, in which apparently inexplicable commercial and marketing decisions are easier to understand when you consider their true audience. In many cases, when they buy or pass on a book, editors aren’t making decisions for readers, but for other editors, and they’re very conscious of what everyone in their imprint thinks. A readership is an abstraction, except when quantified in sales, but editors have their everyday judgment calls reflected back on them by the people they see every day. Giving up a writer like Franzen might make financial sense, but it would be devastating to Farrar, Straus and Giroux, to say nothing of the relationship that can grow between an editor and a prized author over time.

You find much the same dynamic in Hollywood, in which some decisions are utterly inexplicable until you see them as a manifestation of office politics. In theory, a film is made for moviegoers, but the reactions of the producer down the hall are far more concrete. The difference between publishing and the movies is that the latter publish their box office returns, often in real time, while book sales remain opaque even at the highest level. And it’s interesting to wonder how both industries might differ if their approaches were more similar. After years of work, the success of a movie can be determined by the Saturday morning after its release, while a book usually has a little more time. (The exception is when a highly anticipated title doesn’t make it onto the New York Times bestseller list, or falls off it with alarming speed. The list doesn’t disclose any sales figures, which means that success is relative, not absolute—and which may be a small part of the reason why writers seldom wish one another well.) In the absence of hard sales, writers establish the pecking order with awards, reviews, and the other signifiers that have allowed Franzen to assume what Brodesser-Akner calls the mantle of “the White Male Great American Literary Novelist.” But the real takeaway is how narrow a slice of the world this reflects. Even if we place the most generous interpretation imaginable onto Franzen’s numbers, it’s likely that well under one percent of the American population has bought or read any of his books. You’ll find roughly the same number on any given weeknight playing HQ Trivia. If we acknowledged this more widely, it might free writers to return to their proper cultural position, in which the difference between a bestseller and a disappointment fades rightly into irrelevance. Who knows? They might even be happier.

Written by nevalalee

June 28, 2018 at 7:49 am

The Big One


In a heartfelt appreciation of the novelist Philip Roth, who died earlier this week, the New York Times critic Dwight Garner describes him as “the last front-rank survivor of a generation of fecund and authoritative and, yes, white and male novelists…[that] included John Updike, Norman Mailer and Saul Bellow.” These four names seem fated to be linked together for as long as any of them is still read and remembered, and they’ve played varying roles in my own life. I was drawn first to Mailer, who for much of my adolescence was my ideal of what a writer should be, less because of his actual fiction than thanks to my repeated readings of the juiciest parts of Peter Manso’s oral biography. (If you squint hard and think generously, you can even see Mailer’s influence in the way I’ve tried to move between fiction and nonfiction, although in both cases it was more a question of survival.) Updike, my favorite, was a writer I discovered after college. I agree with Garner that he probably had the most “sheer talent” of them all, and he represents my current model, much more than Mailer, of an author who could apparently do anything. Bellow has circled in and out of my awareness over the years, and it’s only recently that I’ve started to figure out what he means to me, in part because of his ambiguous status as a subject of biography. And Roth was the one I knew least. I’d read Portnoy’s Complaint and one or two of the Zuckerman novels, but I always felt guilty over having never gotten around to such late masterpieces as American Pastoral—although the one that I should probably check out first these days is The Plot Against America.

Yet I’ve been thinking about Roth for about as long as I’ve wanted to be a writer, largely because he came as close as anyone ever could to having the perfect career, apart from the lack of the Nobel Prize. He won the National Book Award for his debut at the age of twenty-six; he had a huge bestseller at an age when he was properly equipped to enjoy it; and he closed out his oeuvre with a run of major novels that critics seemed to agree were among the best that he, or anyone, had ever written. (As Garner nicely puts it: “He turned on the afterburners.”) But he never seemed satisfied by his achievement, which you can take as an artist’s proper stance toward his work, a reflection of the fleeting nature of such rewards, a commentary on the inherent bitterness of the writer’s life, or all of the above. Toward the end of his career, Roth actively advised young writers not to become novelists, and in his retirement announcement, which he delivered almost casually to a French magazine, he quoted Joe Louis: “I did the best I could with what I had.” A month later, in an interview with Charles McGrath of the New York Times, he expanded on his reasoning:

I know I’m not going to write as well as I used to. I no longer have the stamina to endure the frustration. Writing is frustration—it’s daily frustration, not to mention humiliation. It’s just like baseball: you fail two-thirds of the time…I can’t face any more days when I write five pages and throw them away. I can’t do that anymore…I knew I wasn’t going to get another good idea, or if I did, I’d have to slave over it.

And on his computer, he posted a note that gave him strength when he looked at it each day: “The struggle with writing is over.”

Roth’s readers, of course, rarely expressed the same disillusionment, and he lives most vividly in my mind as a reference point against which other authors could measure themselves. In an interview with The Telegraph, John Updike made one of the most quietly revealing statements that I’ve ever heard from a writer, when asked if he felt that he and Roth were in competition:

Yes, I can’t help but feel it somewhat. Especially since Philip really has the upper hand in the rivalry as far as I can tell. I think in a list of admirable novelists there was a time when I might have been near the top, just tucked under Bellow. But since Bellow died I think Philip has…he’s certainly written more novels than I have, and seems more dedicated in a way to the act of writing as a means of really reshaping the world to your liking. But he’s been very good to have around as far as goading me to become a better writer.

I think about that “list of admirable novelists” all the time, and it wasn’t just a joke. In an excellent profile in The New Yorker, Claudia Roth Pierpont memorably sketched in all the ways in which other writers warily circled Roth. When asked if the two of them were friends, Updike said, “Guardedly,” and Bellow seems to have initially held Roth at arm’s length, until his wife convinced him to give the younger writer a chance. Pierpont concludes of the relationship between Roth and Updike: “They were mutual admirers, wary competitors who were thrilled to have each other in the world to up their game: Picasso and Matisse.”

And they also remind me of another circle of writers whom I know somewhat better. If Bellow, Mailer, Updike, and Roth were the Big Four of the literary world, they naturally call to mind the Big Three of science fiction—Heinlein, Asimov, and Clarke. In each case, the group’s members were perfectly aware of how exceptional they were, and they carefully guarded their position. (Once, in a conference call with the other two authors, Asimov jokingly suggested that one of them should die to make room for their successors. Heinlein responded: “Fuck the other writers!”) Clarke and Asimov seem to have been genuinely “thrilled to have each other in the world,” but their relationship with the third point of the triangle was more fraught. Toward the end, Asimov started to “avoid” the combative Heinlein, who had a confrontation with Clarke over the Strategic Defense Initiative that effectively ended their friendship. In public, they remained cordial, but you can get a hint of their true feelings in a remarkable passage from the memoir I. Asimov:

[Clarke] and I are now widely known as the Big Two of science fiction. Until early 1988, as I’ve said, people spoke of the Big Three, but then Arthur fashioned a little human figurine of wax and with a long pin— At least, he has told me this. Perhaps he’s trying to warn me. I have made it quite plain to him, however, that if he were to find himself the Big One, he would be very lonely. At the thought of that, he was affected to the point of tears, so I think I’m safe.

As it turned out, Clarke, like Roth, outlived all the rest, and perhaps they felt lonely in the end. Longevity can amount to a kind of victory in itself. But it must be hard to be the Big One.

From Montgomery to Bilbao


On August 16, 2016, the Equal Justice Initiative, a legal rights organization, unveiled its plans for the National Memorial for Peace and Justice, which would be constructed in Montgomery, Alabama. Today, less than two years later, it opens to the public, and the timing could hardly seem more appropriate, in ways that even those who conceived of it might never have imagined. As Campbell Robertson writes for the New York Times:

At the center is a grim cloister, a walkway with eight hundred weathered steel columns, all hanging from a roof. Etched on each column is the name of an American county and the people who were lynched there, most listed by name, many simply as “unknown.” The columns meet you first at eye level, like the headstones that lynching victims were rarely given. But as you walk, the floor steadily descends; by the end, the columns are all dangling above, leaving you in the position of the callous spectators in old photographs of public lynchings.

And the design represents a breakthrough in more ways than one. As the critic Philip Kennicott points out in the Washington Post: “Even more remarkable, this memorial…was built on a budget of only $15 million, in an age when major national memorials tend to cost $100 million and up.”

Of course, if the memorial had been more costly, it might not exist at all, and certainly not with the level of independence and the clear point of view that it expresses. Yet if there’s one striking thing about the coverage of the project, it’s the absence of the name of any one architect or designer. Neither of these two words even appears in the Times article, and in the Post, we only read that the memorial was “designed by [Equal Justice Initiative founder Bryan] Stevenson and his colleagues at EJI in collaboration with the Boston-based MASS Design Group.” When you go to the latter’s official website, twelve people are credited as members of the project design team. This is markedly different from the way in which we tend to talk about monuments, museums, and other architectural works that are meant to invite our attention. In many cases, the architect’s identity is a selling point in itself, as it invariably is with Frank Gehry, whose involvement in a project like the Guggenheim Museum Bilbao is consciously intended to rejuvenate an entire city. In Montgomery, by contrast, the designer is essentially anonymous, or part of a collaboration, which seems like an aesthetic choice as conscious as the design of the space itself. The individual personality of the architect departs, leaving the names and events to testify on their own behalf. Which is exactly as it should be.

And it’s hard not to compare this to the response to the design of the Vietnam Veterans Memorial in 1981. The otherwise excellent documentary by Ken Burns and Lynn Novick alludes to the firestorm that it caused, but it declines to explore how much of the opposition was personal in nature. As James Reston, Jr. writes in the definitive study A Rift in the Earth:

After Maya Lin’s design was chosen and announced, the public reaction was intense. Letters from outraged veterans poured into the Memorial Fund office. One claimed that Lin’s design had “the warmth and charm of an Abyssinian dagger.” “Nihilistic aesthetes” had chosen it…Predictably, the names of incendiary antiwar icons, Jane Fonda and Abbie Hoffman, were invoked as cheering for a design that made a mockery of the Vietnam dead…As for the winner with Chinese ancestry, [donor H. Ross] Perot began referring to her as “egg roll.”

If anything, the subject matter of the National Memorial for Peace and Justice is even more fraught, and the decision to place the designers in the background seems partially intended to focus the conversation on the museum itself, and not on those who made it.

Yet there’s a deeper lesson here about architecture and its creators. At first, you might think that a building with a singular message would need to arise from—or be identified with—an equally strong personality, but if anything, the trend in recent years has gone the other way. As Reinier de Graaf notes in Four Walls and a Roof, one of the more curious developments over the last few decades is the way in which celebrity architects, like Frank Gehry, have given up much of their own autonomy for the sake of unusual forms that no human hand or brain could properly design:

In partially delegating the production of form to the computer, the antibox has seemingly boosted the production of extravagant shapes beyond any apparent limits. What started as a deliberate meditation on the notion of form in the early antiboxes has turned into a game of chance. Authorship has become relative: with creation now delegated to algorithms, the antibox’s main delight is the surprise it causes to the designers.

Its opposite number is the National Memorial for Peace and Justice, which was built with simple materials and techniques that rely for their impact entirely on the insight, empathy, and ingenuity of the designer, who then quietly fades away. The architect can afford to disappear, because the work speaks for those who are unable to speak for themselves. And that might be the most powerful message of all.

Who Needs the Kwik-E-Mart?


Who needs the Kwik-E-Mart?
Now here’s the tricky part…

“Homer and Apu”

On October 8, 1995, The Simpsons aired the episode “Bart Sells His Soul,” which still hasn’t stopped rattling around in my brain. (A few days ago, my daughter asked: “Daddy, what’s the soul?” I may have responded with some variation on Lisa’s words: “Whether or not the soul is physically real, it’s the symbol of everything fine inside us.” On a more typical morning, though, I’m likely to mutter to myself: “Remember Alf? He’s back—in pog form!”) It’s one of the show’s finest installments, but it came close to being about something else entirely. On the commentary track for the episode, the producer Bill Oakley recalls:

There’s a few long-lived ideas that never made it. One of which is David Cohen’s “Homer the Narcoleptic,” which we’ve mentioned on other tracks. The other one was [Greg Daniels’s] one about racism in Springfield. Do you remember this? Something about Homer and Dr. Hibbert? Well, you pitched it several times and I think we were just…It was some exploration of the concept of race in Springfield, and we just said, you know, we don’t think this is the forum. The Simpsons can’t be the right forum to deal with racism.

Daniels—who went on to create Parks and Recreation and the American version of The Office—went with the pitch for “Bart Sells His Soul” instead, and the other premise evidently disappeared forever, including from his own memory. When Oakley brings it up, Daniels only asks: “What was it?”

Two decades later, The Simpsons has yet to deal with race in any satisfying way, even when the issue seems unavoidable. Last year, the comedian Hari Kondabolu released the documentary The Problem With Apu, which explores the complicated legacy of one of the show’s most prominent supporting characters. On Sunday, the show finally saw fit to respond to these concerns directly, and the results weren’t what anyone—apart perhaps from longtime showrunner Al Jean—might have wanted. As Sopan Deb of the New York Times describes it:

The episode, titled “No Good Read Goes Unpunished,” featured a scene with Marge Simpson sitting in bed with her daughter Lisa, reading a book called “The Princess in the Garden,” and attempting to make it inoffensive for 2018. At one point, Lisa turns to directly address the TV audience and says, “Something that started decades ago and was applauded and inoffensive is now politically incorrect. What can you do?” The shot then pans to a framed picture of Apu at the bedside with the line, “Don’t have a cow!” inscribed on it. Marge responds: “Some things will be dealt with at a later date.” Followed by Lisa saying, “If at all.”

Kondabolu responded on Twitter: “This is sad.” And it was. As Linda Holmes of NPR aptly notes: “Apu is not appearing in a fifty-year-old book by a now-dead author. Apu is a going concern. Someone draws him, over and over again.” And the fact that the show decided to put these words into the mouth of Lisa Simpson, whose importance to viewers everywhere was recently underlined, makes it doubly disappointing.

But there’s one obvious change that The Simpsons could make, and while it wouldn’t be perfect, it would be a step in the right direction. If the role of Apu were recast with an actor of South Asian descent, it might not be enough in itself, but I honestly can’t see a downside. Hank Azaria would still be allowed to voice dozens of characters. Even if Apu sounded slightly different than before, this wouldn’t be unprecedented—Homer’s voice changed dramatically after the first season, and Julie Kavner’s work as Marge is noticeably more gravelly than it used to be. Most viewers who are still watching probably wouldn’t even notice, and the purists who might object undoubtedly left a long time ago. It would allow the show to feel newsworthy again, and not just on account of another gimmick. And even if we take this argument to its logical conclusion and ask that Carl, Officer Lou, Akira, Bumblebee Man, and all the rest be voiced by actors of the appropriate background, well, why not? (The show’s other most prominent minority character, Dr. Hibbert, seems to be on his way out for other reasons, and he evidently hasn’t appeared in almost two years.) For a series that has systematically undermined its own legacy in every conceivable way out of little more than boredom, it seems shortsighted to cling to the idea that Azaria is the only possible Apu. And even if it leaves many issues unresolved on the writing level, it also seems like a necessary precondition for change. At this late date, there isn’t much left to lose.

Of course, if The Simpsons were serious about this kind of effort, we wouldn’t be talking about its most recent episode at all. And the discussion is rightly complicated by the fact that Apu—like everything else from the show’s golden age—was swept up in the greatness of those five or six incomparable seasons. Before that unsuccessful pitch on race in Springfield, Greg Daniels was credited for “Homer and Apu,” which deserves to be ranked among the show’s twenty best episodes, and the week after “Bart Sells His Soul,” we got “Lisa the Vegetarian,” which gave Apu perhaps his finest moment, as he ushered Lisa to the rooftop garden to meet Paul and Linda McCartney. But the fact that Apu was a compelling character argues not against further change, but in its favor. And what saddens me the most about the show’s response is that it undermines what The Simpsons, at its best, was supposed to be. It was the cartoon that dared to be richer and more complex than any other series on the air; it had the smartest writers in the world and a network that would leave them alone; it was just plain right about everything; and it gave us a metaphorical language for every conceivable situation. The Simpsons wasn’t just a sitcom, but a vocabulary, and it taught me how to think—or it shaped the way that I do think so deeply that there’s no real distinction to be made. As a work of art, it has quietly fallen short in ways both small and large for over fifteen years, but I was able to overlook it because I was no longer paying attention. It had done what it had to do, and I would be forever grateful. But this week, when the show was given the chance to rise again to everything that was fine inside of it, it faltered. Which only tells me that it lost its soul a long time ago.
