Posts Tagged ‘Washington Post’
Wounded Knee and the Achilles heel
On February 27, 1973, two hundred Native American activists occupied the town of Wounded Knee in South Dakota. They were protesting against the unpopular tribal president of the Oglala Lakota Sioux, along with the federal government’s failure to honor its treaties, and the ensuing standoff—which resulted in two deaths, a serious injury, and a disappearance—lasted for over seventy days. It also galvanized many of those who watched it unfold, including the author Paul Chaat Smith, who writes in his excellent book Everything You Know About Indians is Wrong:
Lots occurred over the next two and a half months, including a curious incident in which some of the hungry, blockaded Indians attempted to slaughter a cow. Reporters and photographers gathered to watch. Nothing happened. None of the Indians—some urban activists, some from Sioux reservations—actually knew how to butcher cattle. Fortunately, a few of the journalists did know, and they took over, ensuring dinner for the starving rebels. That was a much discussed event during and after Wounded Knee. The most common reading of this was that basically we were fakes. Indians clueless about butchering livestock were not really Indians.
Smith dryly notes that the protesters “lost points” with observers after this episode, which overshadowed many of the more significant aspects of the occupation, and he concludes: “I myself know nothing about butchering cattle, and would hope that doesn’t invalidate my remarks about the global news media and human rights.”
I got to thinking about this passage in the aftermath of Elizabeth Warren’s very bad week. More specifically, I was reminded of it by a column by the Washington Post opinion writer Dana Milbank, who focuses on Warren’s submissions to the cookbook Pow Wow Chow: A Collection of Recipes from Families of the Five Civilized Tribes, which was edited by her cousin three decades ago. One of the recipes that Warren contributed was “Crab with Tomato Mayonnaise Dressing,” which leads Milbank to crack: “A traditional Cherokee dish with mayonnaise, a nineteenth-century condiment imported by settlers? A crab dish from landlocked Oklahoma? This can mean only one thing: canned crab. Warren is unfit to lead.” He’s speaking with tongue partially in cheek—a point that probably won’t be caught by thousands of people who are just browsing the headlines—but when I read these words, I thought immediately of these lines from Smith’s book:
It presents the unavoidable question: Are Indian people allowed to change? Are we allowed to invent completely new ways of being Indian that have no connection to previous ways we have lived? Authenticity for Indians is a brutal measuring device that says we are only Indian as long as we are authentic. Part of the measurement is about percentage of Indian blood. The more, the better. Fluency in one’s Indian language is always a high card. Spiritual practices, living in one’s ancestral homeland, attending powwows, all are necessary to ace the authenticity test. Yet many of us believe taking the authenticity tests is like drinking the colonizer’s Kool-Aid—a practice designed to strengthen our commitment to our own internally warped minds. In this way, we become our own prison guards.
And while there may be other issues with Warren’s recipe, it’s revealing that we often act as if the Cherokee Nation somehow ceased to evolve—or cook for itself—after the introduction of mayonnaise.
This may seem like a tiny point, but it’s also an early warning of a monstrous cultural reckoning lurking just around the corner, at a time when we might have thought that we had exhausted every possible way to feel miserable and divided. If Warren runs for president, which I hope she does, we’re going to be plunged into what Smith aptly describes as a “snake pit” that terrifies most public figures. As Smith writes in a paragraph that I never tire of quoting:
Generally speaking, smart white people realize early on, probably even as children, that the whole Indian thing is an exhausting, dangerous, and complicated snake pit of lies. And…the really smart ones somehow intuit that these lies are mysteriously and profoundly linked to the basic construction of the reality of daily life, now and into the foreseeable future. And without it ever quite being a conscious thought, these intelligent white people come to understand that there is no percentage, none, in considering the Indian question, and so the acceptable result is to, at least subconsciously, acknowledge that everything they are likely to learn about Indians in school, from books and movies and television programs, from dialogue with Indians, from Indian art and stories, from museum exhibits about Indians, is probably going to be crap, so they should be avoided.
This leads him to an unforgettable conclusion: “Generally speaking, white people who are interested in Indians are not very bright.” But that’s only because most of the others are prudent enough to stay well away—and even Warren, who is undeniably smart, doesn’t seem to have realized that this was a fight that she couldn’t possibly win.
One white person who seems unquestionably interested in Indians, in his own way, is Donald Trump. True to form, he may not be very bright, but he also displays what Newt Gingrich calls a “sixth sense,” in this case for finding a formidable opponent’s Achilles heel and hammering at it relentlessly. Elizabeth Warren is one of the most interesting people to consider a presidential run in a long time, but Trump may have already hamstrung her candidacy by zeroing in on what might look like a trivial vulnerability. And the really important point here is that if Warren’s claims about her Native American heritage turn out to be her downfall, it’s because the rest of us have never come to terms with our guilt. The whole subject is so unsettling that we’ve collectively just agreed not to talk about it, and Warren made the unforgivable mistake, a long time ago, of folding it into her biography. If she’s being punished for it now, it’s because it precipitates something that was invisibly there all along, and this may only be the beginning. Along the way, we’re going to run up against a lot of unexamined assumptions, like Milbank’s amusement at that canned crab. (As Smith reminds us: “Indians are okay, as long as they meet non-Indian expectations about Indian religious and political beliefs. And what it really comes down to is that Indians are okay as long as we don’t change too much. Yes, we can fly planes and listen to hip-hop, but we must do these things in moderation and always in a true Indian way.” And mayonnaise is definitely out.) Depending on your point of view, this issue is either irrelevant or the most important problem imaginable, and like so much else these days, it may take a moronic quip from Trump—call it the Access Hollywood principle—to catalyze a debate that more reasonable minds have postponed. In his discussion of Wounded Knee, Smith concludes: “Yes, the news media always want the most dramatic story. But I would argue there is an overlay with Indian stories that makes it especially difficult.” And we might be about to find out how difficult it really is.
The difference engine
Earlier this month, within the space of less than a day, two significant events occurred in the life of Donna Strickland, an associate professor at the University of Waterloo. She won the Nobel Prize in Physics, and she finally got her own Wikipedia page. As the biologist and Wikipedia activist Dawn Bazely writes in an excellent opinion piece for the Washington Post:
The long delay was not for lack of trying. Last May, an editor had rejected a submitted entry on Strickland, saying the subject did not meet Wikipedia’s notability requirement. Strickland’s biography went up shortly after her award was announced. If you click on the “history” tab to view the page’s edits, you can replay the process of a woman scientist finally gaining widespread recognition, in real time.
And it isn’t an isolated problem, as Bazely points out: “According to the Wikimedia Foundation, as of 2016, only 17 percent of the reference project’s biographies were about women.” When Bazely asked some of her students to create articles on women in ecology or the sciences, she found that their efforts frequently ran headlong into Wikipedia’s editing culture: “Many of their contributions got reversed almost immediately, in what is known as a ‘drive-by deletion’…I made an entry for Kathy Martin, current president of the American Ornithological Society and a global authority on arctic and alpine grouse. Almost immediately after her page went live, a flag appeared over the top page: ‘Is this person notable enough?’”
Strickland’s case is an unusually glaring example, but it reflects a widespread issue that extends far beyond Wikipedia itself. In a blog post about the incident, Ed Erhart, a senior editorial associate at the Wikimedia Foundation, notes that the original article on Strickland was rejected by an editor who stated that it lacked “published, reliable, secondary sources that are independent of the subject.” But he also raises a good point about the guidelines used to establish academic notability: “Academics may be writing many of the sources volunteer Wikipedia editors use to verify the information on Wikipedia, but they are only infrequently the subject of those same sources. And when it does occur, they usually feature men from developed nations—not women or other under-represented groups.” Bazely makes a similar observation:
We live in a world where women’s accomplishments are routinely discounted and dismissed. This occurs at every point in the academic pipeline…Across disciplines, men cite their own research more often than women do. Men give twice as many academic talks as women—engagements which give scholars a chance to publicize their work, find collaborators and build their resumes for potential promotions and job offers. Female academics tend to get less credit than males for their work on a team. Outside of academia, news outlets quote more male voices than female ones—another key venue for proving “notability” among Wikipedia editors. These structural biases have a ripple effect on our crowdsourced encyclopedia.
And this leads to an undeniable feedback effect, in which the existing sources used to establish notability are used to create Wikipedia articles, which in turn serve as evidence of notability in the future.
Bazely argues that articles on male subjects don’t seem to be held to the same high standards as those for women, which reflects the implicit biases of its editors, the vast majority of whom are men. She’s right, but I also think that there’s a subtle historical element at play. Back during the wild west days of Wikipedia, when the community was still defining itself, the demographics of its most prolific editors were probably even less diverse than they are now. During those formative years, thousands of pages were generated under a looser set of standards, and much of that material has been grandfathered into the version that exists today. I should know, because I was a part of it. While I may not have been a member of the very first generation of Wikipedia editors—one of my friends still takes pride in the fact that he created the page for “knife”—I was there early enough to originate a number of articles that I thought were necessary. I created pages for such people as Darin Morgan and Julee Cruise, and when I realized that there wasn’t an entry for “mix tape,” I spent the better part of two days at work putting one together. By the standards of the time, I was diligent and conscientious, but very little of what I did would pass muster today. My citations were erratic, I included my own subjective commentary and evaluations along with verifiable facts, and I indulged in original research, which the site rightly discourages. Multiply this by a thousand, and you get a sense of the extent to which the foundations of Wikipedia were laid by exactly the kind of editor in his early twenties for whom writing a cultural history of the mix tape took priority over countless other deserving subjects. (It isn’t an accident that I had started thinking about mix tapes again because of Nick Hornby’s High Fidelity, which provides a scathing portrait of a certain personality type, not unlike my own, that I took for years at face value.)
And I don’t even think that I was wrong. Wikipedia is naturally skewed in favor of the enthusiasms of its users, and articles that are fun to research, write, and discuss will inevitably get more attention. But the appeal of a subject to a minority of active editors isn’t synonymous with notability, and it takes a conscious effort to correct the result, especially when it comes to the older strata of contributions. While much of what I wrote fifteen years ago has been removed or revised by other hands, a lot of it still persists, because it’s easier to monitor new edits than to systematically check pages that have been around for years. And it leaves behind a residue of the same kinds of unconscious assumptions that I’ve identified elsewhere in other forms of canonization. Wikipedia is part of our cultural background now, invisible and omnipresent, and we tend to take it for granted. (Like Google, it can be hard to research it online because its name has become a synonym for information itself. Googling “Google,” or keywords associated with it, is a real headache, and looking for information about Wikipedia—as opposed to information presented in a Wikipedia article—presents many of the same challenges.) And nudging such a huge enterprise back on course, even by a few degrees, doesn’t happen by accident. One way is through the “edit-a-thons” that often occur on Ada Lovelace Day, which is named after the mathematician whose posthumous career incidentally illustrates how historical reputations can be shaped by whoever happens to be telling the story. We think of Lovelace, who worked with Charles Babbage on the analytical engine, as a feminist hero, but as recently as the early sixties, one writer could cite her as an example of genetic mediocrity: “Lord Byron’s surviving daughter, Ada, what did she produce in maturity? A system for betting on horse races that was a failure, and she died at thirty-six, shattered and deranged.” The writer was the popular novelist Irving Wallace, who is now deservedly forgotten. And the book was a bestseller about the Nobel Prize.
The Machine of Lagado
Yesterday, my wife wrote to me in a text message: “Psychohistory could not predict that Elon [Musk] would gin up a fraudulent stock buyback price based on a pot joke and then get punished by the SEC.” This might lead you to wonder about our texting habits, but more to the point, she was right. Psychohistory—the fictional science of forecasting the future developed by Isaac Asimov and John W. Campbell in the Foundation series—is based on the assumption that the world will change in the future more or less as it has in the past. Like all systems of prediction, it’s unable to foresee black swans, like the Mule or Donald Trump, that make nonsense of our previous assumptions, and it’s useless for predicting events on a small scale. Asimov liked to compare it to the kinetic theory of gases, “where the individual molecules in the gas remain as unpredictable as ever, but the average person is completely predictable.” This means that you need a sufficiently large number of people, such as the population of the galaxy, for it to work, and it also means that it grows correspondingly less useful as it becomes more specific. On the individual level, human behavior is as unforeseeable as the motion of particular molecules, and the shape of any particular life is impossible to predict, even if we like to believe otherwise. The same is true of events. Just as a monkey or a dartboard might do an equally good job of picking stocks as a qualified investment advisor, the news these days often seems to have been generated by a bot, like the Subreddit Simulator, that automatically cranks out random combinations of keywords and trending terms. (My favorite recent example is an actual headline from the Washington Post: “Border Patrol agent admits to starting wildfire during gender-reveal party.”)
And the satirical notion that combining ideas at random might lead to useful insights or predictions is a very old one. In Gulliver’s Travels, Jonathan Swift describes an encounter with a fictional machine—located in the academy of Lagado, the capital city of the island of Balnibarbi—by which “the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.” The narrator continues:
[The professor] then led me to the frame, about the sides, whereof all his pupils stood in ranks. It was twenty feet square, placed in the middle of the room. The superficies was composed of several bits of wood, about the bigness of a die, but some larger than others. They were all linked together by slender wires. These bits of wood were covered, on every square, with paper pasted on them; and on these papers were written all the words of their language, in their several moods, tenses, and declensions; but without any order…The pupils, at his command, took each of them hold of an iron handle, whereof there were forty fixed round the edges of the frame; and giving them a sudden turn, the whole disposition of the words was entirely changed. He then commanded six-and-thirty of the lads, to read the several lines softly, as they appeared upon the frame; and where they found three or four words together that might make part of a sentence, they dictated to the four remaining boys, who were scribes.
And Gulliver concludes: “Six hours a day the young students were employed in this labour; and the professor showed me several volumes in large folio, already collected, of broken sentences, which he intended to piece together, and out of those rich materials, to give the world a complete body of all arts and sciences.”
Two and a half centuries later, an updated version of this machine figured in Umberto Eco’s novel Foucault’s Pendulum, which is where I first encountered it. In the early eighties, the book’s three protagonists, who work as editors for a publishing company in Milan, are playing with their new desktop computer, which they’ve nicknamed Abulafia, after the medieval cabalist. One speaks proudly of Abulafia’s usefulness in generating random combinations: “All that’s needed is the data and the desire. Take, for example, poetry. The program asks you how many lines you want in the poem, and you decide: ten, twenty, a hundred. Then the program randomizes the line numbers. In other words, a new arrangement each time. With ten lines you can make thousands and thousands of random poems.” This gives the narrator an idea:
What if, instead, you fed it a few dozen notions taken from the works of [occult writers]—for example, the Templars fled to Scotland, or the Corpus Hermeticum arrived in Florence in 1460—and threw in a few connective phrases like “It’s obvious that” and “This proves that”? We might end up with something revelatory. Then we fill in the gaps, call the repetitions prophecies, and—voilà—a hitherto unpublished chapter of the history of magic, at the very least!
Taking random sentences from unpublished manuscripts, they enter such lines as “Who was married at the feast of Cana?” and “Minnie Mouse is Mickey’s fiancée.” When strung together, the result, in one of Eco’s sly jokes, is a conspiracy theory that exactly duplicates the thesis of Holy Blood, Holy Grail, which later provided much of the inspiration for The Da Vinci Code. “Nobody would take that seriously,” one of the editors says. The narrator replies: “On the contrary, it would sell a few hundred thousand copies.”
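As an aside, the game that Eco describes is simple enough to mimic in a few lines of Python. The sketch below is purely illustrative and assumes nothing beyond the standard library; the notions and connective phrases are lifted from the passage above, and the rest is my own invention rather than a reconstruction of Abulafia:

```python
import random

# Notions and connective phrases taken from the passage above; everything
# else in this sketch is illustrative only, not Eco's actual program.
notions = [
    "the Templars fled to Scotland",
    "the Corpus Hermeticum arrived in Florence in 1460",
    "Minnie Mouse is Mickey's fiancée",
    "someone was married at the feast of Cana",
]
connectives = ["It's obvious that", "This proves that"]

def abulafia(lines: int = 4) -> str:
    """String randomly chosen notions together with connective phrases."""
    chosen = random.sample(notions, k=min(lines, len(notions)))
    revelation = [chosen[0][0].upper() + chosen[0][1:] + "."]
    for notion in chosen[1:]:
        revelation.append(f"{random.choice(connectives)} {notion}.")
    return " ".join(revelation)

print(abulafia())  # a new "hitherto unpublished chapter" on every run
```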
When I first read this scene as a teenager, I thought it was one of the great things in the world, and part of me still does. I immediately began to look for similar connections between random ideas, which led me to some of my best story ideas, and I still incorporate aspects of randomness into just about everything that I do. Yet there’s also a pathological element to this form of play that I haven’t always acknowledged. What makes it dangerous, as Eco understood, is the inclusion of such seemingly innocent expressions as “it’s obvious that” and “this proves that,” which instantly transforms a scenario into an argument. (On the back cover of the paperback edition of Foucault’s Pendulum, the promotional copy describes Abulafia as “an incredible computer capable of inventing connections between all their entries,” which is both a great example of hyping a difficult book and a reflection of how credulous we can be when it comes to such practices in real life.) We may not be able to rule out any particular combination of events, but not every explanatory system is equally valid, even if all it takes is a modicum of ingenuity to turn it into something convincing. I used to see the creation of conspiracy theories as a diverting game, or as a commentary on how we interpret the world around us, and I devoted an entire novel to exorcising my fascination with this idea. More recently, I’ve realized that this attitude was founded on the assumption that it was still possible to come to some kind of cultural consensus about the truth. In the era of InfoWars, Pizzagate, and QAnon, it no longer seems harmless. Not all patterns are real, and many of the horrors of the last century were perpetrated by conspiracy theorists who arbitrarily seized on one arrangement of the facts—and then acted on it accordingly. Reality itself can seem randomly generated, but our thoughts and actions don’t need to be.
The paper of record
One of my favorite conventions in suspense fiction is the trope known as Authentication by Newspaper. It’s the moment in a movie, novel, or television show—and sometimes even in reality—when the kidnapper sends a picture of the victim holding a copy of a recent paper, with the date and headline clearly visible, as a form of proof of life. (You can also use it with piles of illicit cash, to prove that you’re ready to send payment.) The idea frequently pops up in such movies as Midnight Run and Mission: Impossible 2, and it also inspired a classic headline from The Onion: “Report: Majority Of Newspapers Now Purchased By Kidnappers To Prove Date.” It all depends on the fact that a newspaper is a datable object that is widely available and impossible to fake in advance, which means that it can be used to definitively establish the earliest possible day on which an event could have taken place. And you can also use the paper to verify a past date in subtler ways. A few weeks ago, Motherboard had a fascinating article on a time-stamping service called Surety, which provides the equivalent of a dated seal for digital documents. To make it impossible to change the date on one of these files, every week, for more than twenty years, Surety has generated a public hash value from its internal client database and published it in the classified ad section of the New York Times. As the company notes: “This makes it impossible for anyone—including Surety—to backdate timestamps or validate electronic records that were not exact copies of the original.”
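The underlying trick is easy to see in miniature. Here is a minimal sketch in Python, using only the standard library; the record format is hypothetical, and it illustrates the general hash-commitment idea rather than Surety's actual system:

```python
import hashlib

def digest(records: bytes) -> str:
    """Return the SHA-256 digest of a blob of records as a hex string."""
    return hashlib.sha256(records).hexdigest()

# The service hashes its internal records and publishes the digest somewhere
# public and datable -- in Surety's case, the classified ads of the Times.
# The record format here is made up purely for illustration.
week_of_records = b"client 1: document hash abc...\nclient 2: document hash def...\n"
published = digest(week_of_records)
print("Digest to publish in the classified ads:", published)

# Years later, anyone can recompute the digest from the archived records.
# If it matches the string printed in that day's paper, the records cannot
# have been altered or backdated after publication -- not even by the service.
assert digest(week_of_records) == published
```

Because the digest appears in a physical, dated, widely archived medium, quietly changing the records afterward would require changing every surviving copy of that day's paper.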
I was reminded of all this yesterday, after the Times posted an anonymous opinion piece titled “I Am Part of the Resistance Inside the Trump Administration.” The essay, which the paper credits to “a senior official,” describes what amounts to a shadow government within the White House devoted to saving the president—and the rest of the country—from his worst impulses. And while the author may prefer to remain nameless, he certainly doesn’t suffer from a lack of humility:
Many of the senior officials in [Trump’s] own administration are working diligently from within to frustrate parts of his agenda and his worst inclinations. I would know. I am one of them…It may be cold comfort in this chaotic era, but Americans should know that there are adults in the room. We fully recognize what is happening. And we are trying to do what’s right even when Donald Trump won’t.
The result, he claims, is “a two-track presidency,” with a group of principled advisors doing their best to counteract Trump’s admiration for autocrats and contempt for international relations: “This isn’t the work of the so-called deep state. It’s the work of the steady state.” He even reveals that there was early discussion among cabinet members of using the Twenty-Fifth Amendment to remove Trump from office, although it was scuttled by concerns about precipitating a crisis somehow worse than the one in which we’ve found ourselves.
Not surprisingly, the piece has generated a firestorm of speculation about the author’s identity, both online and in the White House itself, which I won’t bother covering here. What interests me are the writer’s reasons for publishing it in the first place. Over the short term, it can only destabilize an already volatile situation, and everyone involved will suffer for it. This implies that the author has a long game in mind, and it had better be pretty compelling. On Twitter, Nate Silver proposed one popular theory: “It seems like the person’s goal is to get outed and secure a very generous advance on a book deal.” He may be right—although if that’s the case, the plan has quickly gone sideways. Reaction on both sides has been far more critical than positive, with Erik Wemple of the Washington Post perhaps putting it best:
Like most anonymous quotes and tracts, this one is a PR stunt. Mr. Senior Administration Official gets to use the distributive power of the New York Times to recast an entire class of federal appointees. No longer are they enablers of a foolish and capricious president. They are now the country’s most precious and valued patriots. In an appearance on Wednesday afternoon, the president pronounced it all a “gutless” exercise. No argument here.
Or as the political blogger Charles P. Pierce says even more savagely in his response on Esquire: “Just shut up and quit.”
But Wemple’s offhand reference to “the distributive power” of the Times makes me think that the real motive is staring us right in the face. It’s a form of Authentication by Newspaper. Let’s say that you’re a senior official in the Trump administration who knows that time is running out. You’re afraid to openly defy the president, but you also want to benefit—or at least to survive—after the ship goes down. In the aftermath, everyone will be scrambling to position themselves for some kind of future career, even though the events of the last few years have left most of them irrevocably tainted. By the time it falls apart, it will be too late to claim that you were gravely concerned. But the solution is a stroke of genius. You plant an anonymous piece in the Times, like the founders of Surety publishing its hash value in the classified ads, except that your platform is vastly more prominent. And you place it there precisely so that you can point to it in the future. After Trump is no longer a threat, you can reveal yourself, with full corroboration from the paper of record, to show that you had the best interests of the country in mind all along. You were one of the good ones. The datestamp is right there. That’s your endgame, no matter how much pain it causes in the meantime. It’s brilliant. But it may not work. As nearly everyone has realized by now, the fact that a “steady state” of conservatives is working to minimize the damage of a Trump presidency to achieve “effective deregulation, historic tax reform, a more robust military and more” is a scandal in itself. This isn’t proof of life. It’s the opposite.
From Montgomery to Bilbao
On August 16, 2016, the Equal Justice Initiative, a legal rights organization, unveiled its plans for the National Memorial for Peace and Justice, which would be constructed in Montgomery, Alabama. Today, less than two years later, it opens to the public, and the timing could hardly seem more appropriate, in ways that even those who conceived of it might never have imagined. As Campbell Robertson writes for the New York Times:
At the center is a grim cloister, a walkway with eight hundred weathered steel columns, all hanging from a roof. Etched on each column is the name of an American county and the people who were lynched there, most listed by name, many simply as “unknown.” The columns meet you first at eye level, like the headstones that lynching victims were rarely given. But as you walk, the floor steadily descends; by the end, the columns are all dangling above, leaving you in the position of the callous spectators in old photographs of public lynchings.
And the design represents a breakthrough in more ways than one. As the critic Philip Kennicott points out in the Washington Post: “Even more remarkable, this memorial…was built on a budget of only $15 million, in an age when major national memorials tend to cost $100 million and up.”
Of course, if the memorial had been more costly, it might not exist at all, and certainly not with the level of independence and the clear point of view that it expresses. Yet if there’s one striking thing about the coverage of the project, it’s the absence of the name of any one architect or designer. Neither of these two words even appears in the Times article, and in the Post, we only read that the memorial was “designed by [Equal Justice Initiative founder Bryan] Stevenson and his colleagues at EJI in collaboration with the Boston-based MASS Design Group.” When you go to the latter’s official website, twelve people are credited as members of the project design team. This is markedly different from the way in which we tend to talk about monuments, museums, and other architectural works that are meant to invite our attention. In many cases, the architect’s identity is a selling point in itself, as it invariably is with Frank Gehry, whose involvement in a project like the Guggenheim Museum Bilbao is consciously intended to rejuvenate an entire city. In Montgomery, by contrast, the designer is essentially anonymous, or part of a collaboration, which seems like an aesthetic choice as conscious as the design of the space itself. The individual personality of the architect departs, leaving the names and events to testify on their own behalf. Which is exactly as it should be.
And it’s hard not to compare this to the response to the design of the Vietnam Veterans Memorial in 1981. The otherwise excellent documentary by Ken Burns and Lynn Novick alludes to the firestorm that it caused, but it declines to explore how much of the opposition was personal in nature. As James Reston, Jr. writes in the definitive study A Rift in the Earth:
After Maya Lin’s design was chosen and announced, the public reaction was intense. Letters from outraged veterans poured into the Memorial Fund office. One claimed that Lin’s design had “the warmth and charm of an Abyssinian dagger.” “Nihilistic aesthetes” had chosen it…Predictably, the names of incendiary antiwar icons, Jane Fonda and Abbie Hoffman, were invoked as cheering for a design that made a mockery of the Vietnam dead…As for the winner with Chinese ancestry, [donor H. Ross] Perot began referring to her as “egg roll.”
If anything, the subject matter of the National Memorial for Peace and Justice is even more fraught, and the decision to place the designers in the background seems partially intended to focus the conversation on the museum itself, and not on those who made it.
Yet there’s a deeper lesson here about architecture and its creators. At first, you might think that a building with a singular message would need to arise from—or be identified with—an equally strong personality, but if anything, the trend in recent years has gone the other way. As Reinier de Graaf notes in Four Walls and a Roof, one of the more curious developments over the last few decades is the way in which celebrity architects, like Frank Gehry, have given up much of their own autonomy for the sake of unusual forms that no human hand or brain could properly design:
In partially delegating the production of form to the computer, the antibox has seemingly boosted the production of extravagant shapes beyond any apparent limits. What started as a deliberate meditation on the notion of form in the early antiboxes has turned into a game of chance. Authorship has become relative: with creation now delegated to algorithms, the antibox’s main delight is the surprise it causes to the designers.
Its opposite number is the National Memorial for Peace and Justice, which was built with simple materials and techniques that rely for their impact entirely on the insight, empathy, and ingenuity of the designer, who then quietly fades away. The architect can afford to disappear, because the work speaks for those who are unable to speak for themselves. And that might be the most powerful message of all.
Checks and balances
About a third of the way through my upcoming book, while discussing the May 1941 issue of Astounding Science Fiction, I include the sentence: “The issue also featured Heinlein’s ‘Universe,’ which was based on Campbell’s premise about a lost generation starship.” My copy editor amended this to “a lost-generation starship,” to which I replied: “This isn’t a ‘lost-generation’ starship, but a generation starship that happens to be lost.” And the exchange gave me a pretty good idea for a story that I’ll probably never write. (I don’t really have a plot for it yet, but it would be about Hemingway and Fitzgerald on a trip to Alpha Centauri, and it would be called The Double Sun Also Rises.) But it also reminded me of one of the benefits of a copy edit, which is its unparalleled combination of intense scrutiny and total detachment. I sent drafts of the manuscript to some of the world’s greatest nitpickers, who saved me from horrendous mistakes, and the result wouldn’t be nearly as good without their advice. But there’s also something to be said for engaging the services of a diligent reader who doesn’t have any connection to the subject. I deliberately sought out feedback from a few people who weren’t science fiction fans, just to make sure that it remained accessible to a wider audience. And the ultimate example is the copy editor, who is retained to provide an impartial consideration of every semicolon without any preconceived notions outside the text. It’s what Heinlein might have had in mind when he invented the Fair Witness, who said when asked about the color of a nearby house: “It’s white on this side.”
But copy editors are human beings, not machines, and they occasionally get their moment in the spotlight. Recently, their primary platform has been The New Yorker, which has been quietly highlighting the work of its copy editors and fact checkers over the last few years. We can trace this tendency back to Between You & Me, a memoir by Mary Norris that drew overdue attention to the craft of copy editing. In “Holy Writ,” a delightful excerpt in the magazine, Norris writes of the supposed objectivity and rigor of her profession: “The popular image of the copy editor is of someone who favors rigid consistency. I don’t usually think of myself that way. But, when pressed, I do find I have strong views about commas.” And she says of their famous detachment:
There is a fancy word for “going beyond your province”: “ultracrepidate.” So much of copy editing is about not going beyond your province. Anti-ultracrepidationism. Writers might think we’re applying rules and sticking it to their prose in order to make it fit some standard, but just as often we’re backing off, making exceptions, or at least trying to find a balance between doing too much and doing too little. A lot of the decisions you have to make as a copy editor are subjective. For instance, an issue that comes up all the time, whether to use “that” or “which,” depends on what the writer means. It’s interpretive, not mechanical—though the answer often boils down to an implicit understanding of commas.
In order to be truly objective, in other words, you have to be a little subjective. Which is equally true of writing as a whole.
You could say much the same of the fact checker, who resembles the copy editor’s equally obsessive cousin. As a rule, books aren’t fact-checked, which is a point that we only seem to remember when the system breaks down. (Astounding was given a legal read, but I was mostly on my own when it came to everything else, and I’m grateful that some of the most potentially contentious material—about L. Ron Hubbard’s writing career—drew on an earlier article that was brilliantly checked by Matthew Giles of Longreads.) As John McPhee recently wrote of the profession:
Any error is everlasting. As Sara [Lippincott] told the journalism students, once an error gets into print it “will live on and on in libraries carefully catalogued, scrupulously indexed…silicon-chipped, deceiving researcher after researcher down through the ages, all of whom will make new errors on the strength of the original errors, and so on and on into an exponential explosion of errata.” With drawn sword, the fact-checker stands at the near end of this bridge. It is, in part, why the job exists and why, in Sara’s words, a publication will believe in “turning a pack of professional skeptics loose on its own galley proofs.”
McPhee continues: “Book publishers prefer to regard fact-checking as the responsibility of authors, which, contractually, comes down to a simple matter of who doesn’t pay for what. If material that has appeared in a fact-checked magazine reappears in a book, the author is not the only beneficiary of the checker’s work. The book publisher has won a free ticket to factual respectability.” And its absence from the publishing process feels like an odd evolutionary vestige of the book industry that ought to be fixed.
As a result of such tributes, the copy editors and fact checkers of The New Yorker have become cultural icons in themselves, and when an error does make it through, it can be mildly shocking. (Last month, the original version of a review by Adam Gopnik casually stated that Andrew Lloyd Webber was the composer of Chess, and although I knew perfectly well that this was wrong, I had to look it up to make sure that I hadn’t strayed over into a parallel universe.) And their emergence at this particular moment may not be an accident. The first installment of “Holy Writ” appeared on February 23, 2015, just a few months before Donald Trump announced that he was running for president, plunging us all into a world in which good grammar and factual accuracy can seem less like matters of common decency than obstacles to be obliterated. Even though the timing was a coincidence, it’s tempting to read our growing appreciation for these unsung heroes as a statement about the importance of the truth itself. As Alyssa Rosenberg writes in the Washington Post:
It’s not surprising that one of the persistent jokes from the Trump era is the suggestion that we’re living in a bad piece of fiction…Pretending we’re all minor characters in a work of fiction can be a way of distancing ourselves from the seeming horror of our time or emphasizing our own feelings of powerlessness, and pointing to “the writers” often helps us deny any responsibility we may have for Trump, whether as voters or as journalists who covered the election. But whatever else we’re doing when we joke about Trump and the swirl of chaos around him as fiction, we’re expressing a wish that this moment will resolve in a narratively and morally comprehensible fashion.
Perhaps we’re also hoping that reality itself will have a fact checker after all, and that the result will make a difference. We don’t know if it will yet. But I’m hopeful that we’ll survive the exponential explosion of errata.
The war of ideas
Over the last few days, I’ve been thinking a lot about a pair of tweets. One is from Susan Hennessey, an editor for the national security blog Lawfare, who wrote: “Much of my education has been about grasping nuance, shades of gray. Resisting the urge to oversimplify the complexity of human motivation. This year has taught me that, actually, a lot of what really matters comes down to good people and bad people. And these are bad people.” This is a remarkable statement, and in some ways a heartbreaking one, but I can’t disagree with it, and it reflects a growing trend among journalists and other commentators to simply call what we’re seeing by its name. In response to the lies about the students of Marjory Stoneman Douglas High School—including the accusation that some of them are actors—Margaret Sullivan of the Washington Post wrote:
When people act like cretins, should they be ignored? Does talking about their misdeeds merely give them oxygen? Maybe so. But the sliming—there is no other word for it—of the survivors of last week’s Florida high school massacre is beyond the pale…Legitimate disagreement over policy issues is one thing. Lies, conspiracy theories and insults are quite another.
And Paul Krugman went even further: “America in 2018 is not a place where we can disagree without being disagreeable, where there are good people and good ideas on both sides, or whatever other bipartisan homily you want to recite. We are, instead, living in a kakistocracy, a nation ruled by the worst, and we need to face up to that unpleasant reality.”
The other tweet that has been weighing on my mind was from Rob Goldman, a vice president of advertising for Facebook. It was just one of a series of thoughts—which is an important detail in itself—that he tweeted out on the day that Robert Mueller indicted thirteen Russian nationals for their roles in interfering in the presidential election. After proclaiming that he was “very excited” to see the indictments, Goldman said that he wanted to clear up a few points. He had seen “all of the Russian ads” that appeared on Facebook, and he stated: “I can say very definitively that swaying the election was not the main goal.” But his most memorable words, at least for me, were: “The majority of the Russian ad spend happened after the election. We shared that fact, but very few outlets have covered it because it doesn’t align with the main media narrative of Tump [sic] and the election.” This is an astounding statement, in part because it seems to defend Facebook by saying that it kept running these ads for longer than most people assume. But it’s also inexplicable. It may well be, as some observers have contended, that Goldman had a “nuanced” point to make, but he chose to express it on a forum that is uniquely vulnerable to being taken out of context, and to unthinkingly use language that was liable to be misinterpreted. As Josh Marshall wrote:
[Goldman] even apes what amounts to quasi-Trumpian rhetoric in saying the media distorts the story because the facts “don’t align with the main media narrative of Trump and the election.” This is silly. Elections are a big deal. It’s hardly surprising that people would focus on the election, even though it’s continued since. What is this about exactly? Is Goldman some kind of hardcore Trumper?
I don’t think he is. But it also doesn’t matter, at least not when his thoughts were retweeted approvingly by the president himself.
This all leads me to a point that the events of the last week have only clarified. We’re living in a world in which the lines between right and wrong seem more starkly drawn than ever, with anger and distrust rising to an unbearable degree on both sides. From where I stand, it’s very hard for me to see how we recover from this. When you can accurately say that the United States has become a kakistocracy, you can’t just go back to the way things used to be. Whatever the outcome of the next election, the political landscape has been altered in ways that would have been unthinkable even two years ago, and I can’t see it changing during my lifetime. But even though the stakes seem clear, the answer isn’t less nuance, but more. If there’s one big takeaway from the last eighteen months, it’s that the line between seemingly moderate Republicans and Donald Trump was so evanescent that it took only the gentlest of breaths to blow it away. It suggests that we were closer to the precipice than we ever suspected, and unpacking that situation—and its implications for the future—requires more nuance than most forms of social media can provide. Rob Goldman, who should have known better, didn’t grasp this. And while I hope that the students at Marjory Stoneman Douglas do better, I also worry about how effective they can really be. Charlie Warzel of Buzzfeed recently argued that the pro-Trump media has met its match in the Parkland students: “It chose a political enemy effectively born onto the internet and innately capable of waging an information war.” I want to believe this. But it may also be that these aren’t the weapons that we need. The information war is real, but the only way to win it may be to move it into another battlefield entirely.
Which brings us, in a curious way, back to Robert Mueller, who seems to have assumed the same role for many progressives that Nate Silver once occupied—the one man who was somehow going to tell us that everything was going to be fine. But their differences are also telling. Silver generated reams of commentary, but his reputation ultimately came down to his ability to provide a single number, updated in real time, that would indicate how worried we had to be. That trust is clearly gone, and his fall from grace is less about his own mistakes than it is an overdue reckoning for the promises of data journalism in general. Mueller, by contrast, does everything in private, avoids the spotlight, and emerges every few months with a mountain of new material that we didn’t even know existed. It’s nuanced, qualitative, and not easy to summarize. As the coverage endlessly reminds us, we don’t know what else the investigation will find, but that’s part of the point. At a time in which controversies seem to erupt overnight, dominate the conversation for a day, and then yield to the next morning’s outrage, Mueller embodies the almost anachronistic notion that the way to make something stick is to work on it diligently, far from the public eye, and release each piece only when you’re ready. (In the words of a proverbial saying attributed to everyone from Buckminster Fuller to Michael Schrage: “Never show fools unfinished work.” And we’re all fools these days.) I picture him fondly as the head of a monastery in the Dark Ages, laboriously preserving information for the future, or even as the shadowy overseer of Asimov’s Foundation. Mueller’s low profile allows him to mean whatever we want him to mean, of course, and for all I know, he may not be the embodiment of all the virtues that Ralph Waldo Emerson identified as punctuality, personal attention, courage, and thoroughness. I just know that he’s the only one left who might be. Mueller can’t save us by himself. But his example might just show us the way.
The Hedgehog, the Fox, and the Fatted Ram, Part 1
Over the long weekend, both the New York Times and the Washington Post published lead articles on the diminishing public profile of Jared Kushner. The timing may have been a coincidence, but the pieces had striking similarities. Both made the argument that Kushner’s portfolio, once so vast, has been dramatically reduced by the arrival on the scene of White House chief of staff John F. Kelly; both ran under a headline that included some version of the word “shrinking”; and both led off with memorable quotes from their subject. In the Times, it was Kushner’s response when asked by Reince Priebus what his Office of American Innovation would really do: “What do you care?” (The newspaper of record, proper as ever, added: “He emphasized his point with an expletive.”) Meanwhile, the Post, which actually scored an interview, came away with something even stranger. Here’s what Kushner said of himself:
During the campaign, I was more like a fox than a hedgehog. I was more of a generalist having to learn about and master a lot of skills quickly. When I got to D.C., I came with an understanding that the problems here are so complex—and if they were easy problems, they would have been fixed before—and so I became more like the hedgehog, where it was more taking issues you care deeply about, going deep and devoting the time, energy and resources to trying to drive change.
The Post merely noted that this is Kushner’s “version of the fable of the fox, who knows many things, and the hedgehog, who knows one important thing,” but as the Washington Examiner pointed out, the real source is Isaiah Berlin’s classic book The Hedgehog and the Fox, which draws its famous contrast between foxes and hedgehogs as a prelude to a consideration of Leo Tolstoy’s theory of history.
Berlin’s book, which is one of my favorites, is so unlike what I’d expect Jared Kushner to be reading that I can’t resist trying to figure out what this reference to it means. If I were conspiratorially minded, I’d observe that if Kushner had wanted to put together a reading list to quickly bring himself up to speed on the history and culture of Russia—I can’t imagine why—then The Hedgehog and the Fox, which can be absorbed in a couple of hours, would be near the top. But the truth, unfortunately, is probably more prosaic. If there’s a single book from the last decade that Kushner, who was briefly touted as the prodigy behind Trump’s data operation, can be assumed to have read, or at least skimmed, it’s Nate Silver’s The Signal and the Noise. And Silver talks at length about the supposed contrast between foxes and hedgehogs, courtesy of a professor of psychology and political science named Philip E. Tetlock, who conducted a study of predictions by experts in various fields:
Tetlock was able to classify his experts along a spectrum between what he called hedgehogs and foxes. The reference to hedgehogs and foxes comes from the title of an Isaiah Berlin essay on the Russian novelist Leo Tolstoy—The Hedgehog and the Fox…Foxes, Tetlock found, are considerably better at forecasting than hedgehogs. They had come closer to the mark on the Soviet Union, for instance. Rather than seeing the USSR in highly ideological terms—as an intrinsically “evil empire,” or as a relatively successful (and perhaps even admirable) example of a Marxist economic system—they instead saw it for what it was: an increasingly dysfunctional nation that was in danger of coming apart at the seams. Whereas the hedgehogs’ forecasts were barely any better than random chance, the foxes’ demonstrated predictive skill.
As intriguing as we might find this reference to Russia, which Kushner presumably read, it also means that in all likelihood, he never even opened Berlin’s book. (Silver annoyingly writes: “Unless you are a fan of Tolstoy—or of flowery prose—you’ll have no particular reason to read Berlin’s essay.”) But it doesn’t really matter where he encountered these classifications. As much as I love the whole notion of the hedgehog and the fox, it has one big problem—as soon as you read it, you’re immediately tempted to apply it to yourself, as Kushner does, when in fact its explanatory power applies only to geniuses. Like John Keats’s celebrated concept of negative capability, which is often used to excuse sloppy, inconsistent thinking, Berlin’s essay encourages us to think of ourselves as foxes or hedgehogs, when we’re really just dilettantes or suffering from tunnel vision. And this categorization has its limits even when applied to unquestionably exceptional personalities. Here’s how Berlin lays it out on the very first page of his book:
There exists a great chasm between those, on one side, who relate everything to a single central vision, one system less or more coherent or articulate, in terms of which they understand, think and feel—a single, universal, organizing principle in terms of which alone all that they are and say has significance—and, on the other side, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way, for some psychological or physiological cause, related by no moral or aesthetic principle; these last lead lives, perform acts, and entertain ideas that are centrifugal rather than centripetal, their thought is scattered or diffused, moving on many levels…without, consciously or unconsciously, seeking to fit [experiences and objects] into, or exclude them from, any one unchanging, all-embracing, sometimes self-contradictory and incomplete, at times fanatical, unitary inner vision.
The contrast that Berlin draws here could hardly seem more stark, but it falls apart as soon as we apply it to, say, Kushner’s father-in-law. On the one hand, Trump has succeeded beyond his wildest dreams by harping monotonously on a handful of reliable themes, notably white nationalism, xenophobia, and resentment of liberal elites. Nothing could seem more like the hedgehog. On the other hand, from one tweet to the next, he’s nothing if not “centrifugal rather than centripetal,” driven by his impulses, embracing contradictory positions, undermining his own surrogates, and resisting all attempts to pin him down to a conventional ideology. It’s all very foxlike. The most generous reading would be to argue that Trump, as Berlin contends of Tolstoy, is “by nature a fox, but [believes] in being a hedgehog,” a comparison that seems ridiculous even as I type it. It’s far more plausible that Trump lacks the intellectual rigor, or even the basic desire, to assemble anything like a coherent politics out of his instinctive drives for power and revenge. Like most of us, he’s a mediocre thinker, and his confusions, which reflect those of his base, have gone a long way toward enabling his rise. Trump bears much the same relationship to his fans that Emerson saw in the man who obsessed Tolstoy so deeply:
Among the eminent persons of the nineteenth century, Bonaparte is far the best known and the most powerful; and owes his predominance to the fidelity with which he expresses the tone of thought and belief, the aims of the masses…If Napoleon is France, if Napoleon is Europe, it is because the people whom he sways are little Napoleons.
Faced with a Trump, little or big, Berlin’s categories lose all meaning—not out of any conceptual weakness, but because it wasn’t what they were designed to do. But that doesn’t mean that Berlin doesn’t deserve our attention. In fact, The Hedgehog and the Fox has more to say about our current predicament than any other book I know, and if Kushner ever bothered to read it, it might give him reason to worry. I’ll have more to say about this tomorrow.
Bringing up the bodies
For the last few weeks, my wife and I have been slowly working our way through Ken Burns and Lynn Novick’s devastating documentary series Vietnam. The other night, we finished the episode “Resolve,” which includes an extraordinary sequence—you can find it here around the twenty-five minute mark—about the war’s use of questionable metrics. As narrator Peter Coyote intones: “Since there was no front in Vietnam, as there had been in the first and second World Wars, since no ground was ever permanently won or lost, the American military command in Vietnam—MACV—fell back more and more on a single grisly measure of supposed success: counting corpses. Body count.” The historian and retired Army officer James Willbanks observes:
The problem with the war, as it often is, are the metrics. It is a situation where if you can’t count what’s important, you make what you can count important. So, in this particular case, what you could count was dead enemy bodies.
And as the horrifying images of stacked bodies fill the screen, we hear the quiet, reasonable voice of Robert Gard, a retired lieutenant general and former chairman of the board of the Center for Arms Control and Non-Proliferation: “If body count is the measure of success, then there’s the tendency to count every body as an enemy soldier. There’s a tendency to want to pile up dead bodies and perhaps to use less discriminate firepower than you otherwise might in order to achieve the result that you’re charged with trying to obtain.”
These days, we casually use the phrase “body count” to describe violence in movies and video games, and I was startled to realize how recent the term really is—the earliest reported instance is from 1962, and the oldest results that I can find in a Google Books search are from the early seventies. (Its first use as a book’s title, as far as I can determine, is for the memoir of William Calley, the officer convicted of murder for his involvement in the My Lai massacre.) Military metaphors have a way of seeping into everyday use, in part because of their vividness and, perhaps, because we all like to think of ourselves as fighting in one war or another, but after watching Vietnam, I think that “body count” ought to be forcibly restored to its original connotations. It doesn’t take a lot of introspection to see that it was a statistic that was only possible in a war in which the enemy could be easily dehumanized, and that it encouraged a lack of distinction between combatants and civilians. Like most faulty metrics, it created a toxic set of incentives from the highest levels of command to the soldiers on the ground. As the full extent of the war’s miscalculations grew more clear, these facts became hard to ignore, and the term itself came to encapsulate the mistakes and deceptions of the conflict as a whole. Writing in Playboy in 1982, Philip Caputo called it “one of the most hideous, morally corrupting ideas ever conceived by the military mind.” Yet most of its emotional charge has since been lost. Words matter, and as the phrase’s significance is obscured, the metric itself starts to creep back. And the temptation to fall back on it increases in response to a confluence of specific factors, as a country engages in military action in which the goals are unclear and victory is poorly defined.
As a result, it’s no surprise that we’re seeing a return to body count. As far back as 2005, Bradley Graham of the Washington Post reported: “The revival of body counts, a practice discredited during the Vietnam War, has apparently come without formal guidance from the Pentagon’s leadership.” More recently, Reed Richardson wrote on FAIR:
In the past few years, official body count estimates have made a notable comeback, as U.S. military and administration officials have tried to talk up the U.S. coalition’s war against ISIS in Syria and Iraq…For example, last August, the U.S. commander of the Syrian-Iraq war garnered a flurry of favorable coverage of the war when he announced that the coalition had killed 45,000 ISIS militants in the past two years. By December, the official ISIS body count number, according to an anonymous “senior U.S. official,” had risen to 50,000 and led headlines on cable news. Reading through that media coverage, though, one finds little skepticism about the figures or historical context about how these killed in action numbers line up with the official estimates of ISIS’s overall size, which have stayed stubbornly consistent year after year. In fact, the official estimated size of ISIS in 2015 and 2016 averaged 25,000 fighters, which means the U.S. coalition had supposedly wiped out the equivalent of its entire force over both years without making a dent in its overall size.
Richardson sums up: “As our not-too-distant past has clearly shown, enemy body counts are a handy, hard-to-resist tool that administrations of both parties often use for war propaganda to promote the idea we are ‘winning’ and to stave off dissent about why we’re fighting in the first place.”
It’s worth pointing out, as Richardson does, that such language isn’t confined to any one party, and it was equally prevalent during the Obama administration. But we should be even more wary of it now. (Richardson writes: “In February, Gen. Tony Thomas, the commander of US Special Operations Command, told a public symposium that 60,000 ISIS fighters had been killed. Thomas added this disingenuous qualifier to his evidence-free number: ‘I’m not that into morbid body count, but that matters.’”) Trump has spent his entire career inflating his numbers, from his net worth to the size of his inauguration crowds, and because he lacks a clear grasp of policy, he’s more inclined to gauge his own success—and his enemies’ failures—in terms that lend themselves to the most mindless ways of keeping score, like television ratings. He’s also fundamentally disposed to claim that everything that he does is the biggest and the best, in the face of all evidence to the contrary. This extends to areas that can’t be easily quantified, like international relations, so that every negotiation becomes a zero-sum game in which, as Joe Nocera put it a few years ago: “In every deal, he has to win and you have to lose.” It encourages Trump and his surrogates to see everything as a war, even if it leads them to inflict just as much damage on themselves, and the incentives that he imposes on those around him, in which no admission of error is possible, drag down even the best of his subordinates. And we’ve seen this pattern before. As the journalist Joe Galloway says in Vietnam: “You don’t get details with a body count. You get numbers. And the numbers are lies, most of ’em. If body count is your success mark, then you’re pushing otherwise honorable men, warriors, to become liars.”
The art of obfuscation
In the book Obfuscation: A User’s Guide for Privacy and Protest, a long excerpt of which recently appeared on Nautilus, the academics Finn Brunton and Helen Nissenbaum investigate the ways in which short-term sources of distraction can be used to conceal or obscure the truth. One of their most striking examples is drawn from The Art of Political Murder by Francisco Goldman, which recounts the aftermath of the brutal assassination of the Guatemalan bishop Juan José Gerardi Conedera. Brunton and Nissenbaum write:
As Goldman documented the long and dangerous process of bringing at least a few of those responsible within the Guatemalan military to justice for this murder, he observed that those threatened by the investigation didn’t merely plant evidence to conceal their role. Framing someone else would be an obvious tactic, and the planted evidence would be assumed to be false. Rather, they produced too much conflicting evidence, too many witnesses and testimonials, too many possible stories. The goal was not to construct an airtight lie, but rather to multiply the possible hypotheses so prolifically that observers would despair of ever arriving at the truth. The circumstances of the bishop’s murder produced what Goldman terms an “endlessly exploitable situation,” full of leads that led nowhere and mountains of seized evidence, each factual element calling the others into question. “So much could be made and so much would be made to seem to connect,” Goldman writes, his italics emphasizing the power of the ambiguity.
What interests me the most about this account is how the players took the existing features of this “endlessly exploitable situation,” which was already too complicated for any one person to easily understand, and simply turned up the volume. They didn’t need to create distractions out of nothing—they just had to leverage and intensify what was naturally there. It’s a clever strategy, because it only needs to last long enough to run out the clock until the possibility of any real investigation has diminished. Brunton and Nissenbaum draw a useful analogy to the concept of “chaff” in radar countermeasures:
During World War II, a radar operator tracks an airplane over Hamburg, guiding searchlights and anti-aircraft guns in relation to a phosphor dot whose position is updated with each sweep of the antenna. Abruptly, dots that seem to represent airplanes begin to multiply, quickly swamping the display. The actual plane is in there somewhere, impossible to locate owing to the presence of “false echoes.” The plane has released chaff—strips of black paper backed with aluminum foil and cut to half the target radar’s wavelength. Thrown out by the pound and then floating down through the air, they fill the radar screen with signals. The chaff has exactly met the conditions of data the radar is configured to look for, and has given it more “planes,” scattered all across the sky, than it can handle…That the chaff worked only briefly as it fluttered to the ground and was not a permanent solution wasn’t relevant under the circumstances. It only had to work well enough and long enough for the plane to get past the range of the radar.
The authors conclude: “Many forms of obfuscation work best as time-buying ‘throw-away’ moves. They can get you only a few minutes, but sometimes a few minutes is all the time you need.”
The book Obfuscation appeared almost exactly a year ago, but its argument takes on an additional resonance now, when the level of noise in our politics has risen to a degree that makes the culture wars of the past seem positively quaint. It can largely, but not entirely, be attributed to just one man, and there’s an ongoing debate over whether Trump’s use of the rhetorical equivalent of chaff is instinctive, like a squid squirting ink at its enemies, or a deliberate strategy. I tend to see it as the former, but that doesn’t mean that his impulsiveness can’t produce the same result—and perhaps even more effectively—as a considered program of disinformation and distraction. What really scares me is the prospect of such tricks becoming channeled and institutionalized in the hands of more capable surrogates, as soon as an “endlessly exploitable situation” comes to pass. In The Art of Political Murder, Goldman sums up “the seemingly irresistible logic behind so much of the suspicion, speculation, and tendentiousness” that enveloped the bishop’s death: “Something like this can seem to have a connection to a crime like that.” All you need is an event that produces a flood of data that can be assembled in any number of ways by selectively emphasizing certain connections while deemphasizing others. The great example here is the Kennedy assassination, which generated an unbelievable amount of raw ore for obsessive personalities to sift, like a bin of tesserae that could be put together into any mosaic imaginable. Compiling huge masses of documentation and testimony and placing it before the public is generally something that we only see in a governmental investigation, which has the time and resources to accumulate the information that will inevitably be used to undermine its own conclusions.
At the moment, there’s one obvious scenario in which this precise situation could arise. I’ve often found myself thinking of Robert Mueller in much the way that Quinta Jurecic of the Washington Post characterizes him in an opinion piece starkly titled “Robert Mueller Can’t Save Us”: “In the American imagination, Mueller is more than Trump’s adversary or the man who happens to be investigating him. He’s the president’s mythic opposite—the anti-Trump…Mueller is an avatar of our hope that justice and meaning will reassert themselves against Trumpian insincerity.” And I frequently console myself with the image of the Mueller investigation as a kind of bucket in which every passing outrage, briefly flaring up in the media only to be obscured by its successors, is filed away for later reckoning. As Jurecic points out, these hopes are misplaced:
There’s no way of knowing how long [Mueller’s] investigation will take and what it will turn up. It could be years before the probe is completed. It could be that Mueller’s team finds no evidence of criminal misconduct on the part of the president himself. And because the special counsel has no obligation to report his conclusions to the public—indeed, the special-counsel regulations do not give him the power to do so without the approval of Deputy Attorney General Rod J. Rosenstein—we may never know what he uncovers.
She’s right, but she also misses what I think is the most frightening possibility of all, which is that the Russia investigation will provide the exact combination of factors—“too many witnesses and testimonials, too many possible stories”—to create the situation that Goldman described in Guatemala. It’s hard to imagine a better breeding ground for conspiracy theories, alternate narratives, and false connections, and the likely purveyors are already practicing on a smaller scale. The Mueller investigation is necessary and important. But it will also provide the artists of obfuscation with the materials to paint their masterpiece.
The X factor
On Wednesday, the Washington Post published an article on the absence of women on the writing staff of The X-Files. Its author, Sonia Rao, pointed out that all of the writers for the upcoming eleventh season—including creator Chris Carter, Darin Morgan, Glen Morgan, James Wong, and three newcomers who had worked on the series as assistants—are men, adding: “It’s an industry tradition for television writers to rise through the ranks in this manner, so Carter’s choices were to be expected. But in 2017, it’s worth asking: How is there a major network drama that’s so dominated by male voices?” It’s a good question. The network didn’t comment, but Gillian Anderson responded on Twitter: “I too look forward to the day when the numbers are different.” In the same tweet, she noted that out of over two hundred episodes, only two were directed by women, one of whom was Anderson herself. (The other was Michelle MacLaren, who has since gone on to great things in partnership with Vince Gilligan.) Not surprisingly, there was also a distinct lack of female writers on the show’s original run, with just a few episodes written by women, including Anderson, Sara B. Cooper, and Kim Newton, the latter of whom, along with Darin Morgan, was responsible for one of my favorite installments, “Quagmire.” And you could argue that their continued scarcity is due to a kind of category selection, in which we tend to hire people who look like those who have filled similar roles in the past. It’s largely unconscious, but no less harmful, and I say this as a fan of a show that means more to me than just about any other television series in history.
I’ve often said elsewhere that Dana Scully might be my favorite fictional character in any medium, but I’m also operating from a skewed sample set. If you’re a lifelong fan of a show like The X-Files, you tend to repeatedly revisit your favorite episodes, but you probably never rewatch the ones that were mediocre or worse, which leads to an inevitable distortion. My picture of Scully is constructed out of four great Darin Morgan episodes, a tiny slice of the mytharc, and a dozen standout casefiles like “Pusher” and even “Triangle.” I’ve watched each of these episodes countless times, so that’s the version of the series that I remember—but it isn’t necessarily the show that actually exists. A viewer who randomly tunes into a rerun on syndication is much more likely to see Scully on an average week than in “War of the Coprophages,” and in many episodes, unfortunately, she’s little more than a foil for her partner or a convenient victim to be rescued. (Darin Morgan, who understood Scully better than anyone, seems to have gravitated toward her in part out of his barely hidden contempt for Mulder.) Despite these flaws, Scully still came to mean the world to thousands of viewers, including young women whom she inspired to go into medicine and the sciences. Gillian Anderson herself is deeply conscious of this, and this seems to have contributed to her refreshing candor here, as well as on such related issues as the fact that she was initially offered half of David Duchovny’s salary to return. Anderson understands exactly how much she means to us, and she’s conducted herself accordingly.
The fact that the vast majority of the show’s episodes were written by men also seems to have fed into one of its least appealing qualities, which was how Scully’s body—and particularly her reproductive system—was repeatedly used as a plot point. Part of this was accidental: Anderson’s pregnancy had to be written into the second season, and the writers ended up with an abduction arc with a medical subtext that became hopelessly messy later on. It may not have been planned that way, any more than anything else on this show ever was, but it had the additional misfortune of being tethered to a conspiracy storyline for which it was expected to provide narrative clarity. After the third season, nobody could keep track of the players and their motivations, so Scully’s cancer and fertility issues were pressed into service as a kind of emotional index to the rest. These were pragmatic choices, but they were also oddly callous, especially as their dramatic returns continued to diminish. And in its use of a female character’s suffering to motivate a male protagonist, it was unfortunately ahead of the curve. When you imagine flipping the narrative so that Mulder, not Scully, was the one whose body was under discussion, you see how unthinkable this would have been. It’s exactly the kind of unexamined notion that comes out of a roomful of writers who are all operating with the same assumptions. It isn’t just a matter of taste or respect, but of storytelling, and in retrospect, the show’s steady decline seems inseparable from the monotony of its creative voices.
And this might be the most damning argument of all. Even before the return of Twin Peaks reminded us of how good this sort of revival could be, the tenth season of The X-Files was a crushing disappointment. It had exactly one good episode, written, not coincidentally, by Darin Morgan, and featuring Scully at her sharpest and most joyous. Its one attempt at a new female character, despite the best efforts of Lauren Ambrose, was a frustrating misfire. Almost from the start, it was clear that Chris Carter didn’t have a secret plan for saving the show, and that he’d already used up all his ideas over the course of nine increasingly tenuous seasons. It’s tempting to say that the show had burned through all of its possible plotlines, but that’s ridiculous. This was a series that had all of science fiction, fantasy, and horror at its disposal, combined with the conspiracy thriller and the procedural, and it should have been inexhaustible. It wasn’t the show that got tired, but its writers. Opening up the staff to a more diverse set of talents would have gone a long way toward addressing this. (The history of science fiction is as good an illustration as any of the fact that diversity is good for everyone, not simply its obvious beneficiaries. Editors and showrunners who don’t promote it end up paying a creative price in the long run.) For a show about extreme possibilities, it settled for formula distressingly often, and it would have benefited from adding a wider range of perspectives—particularly from writers with backgrounds that have historically offered insight into such matters as dealing with oppressive, impersonal institutions, which is what the show was allegedly about. It isn’t too late. But we might have to wait for the twelfth season.
The Berenstain Barrier
If you’ve spent any time online in the last few years, there’s a decent chance that you’ve come across some version of what I like to call the Berenstain Bears enigma. It’s based on the fact that a sizable number of readers who recall this book series from childhood remember the name of its titular family as “Berenstein,” when in reality, as a glance at any of the covers will reveal, it’s “Berenstain.” As far as mass instances of misremembering are concerned, this isn’t particularly surprising, and certainly less bewildering than the Mandela effect, or the similar confusion surrounding a nonexistent movie named Shazam. But enough people have been perplexed by it to inspire speculation that these false memories may be the result of an errant time traveler, à la Asimov’s The End of Eternity, or an event in which some of us crossed over from an alternate universe in which the “Berenstein” spelling was correct. (If the theory had emerged a few decades earlier, Robert Anton Wilson might have devoted a page or two to it in Cosmic Trigger.) Even if we explain it as an understandable, if widespread, mistake, it stands as a reminder of how an assumption absorbed in childhood remains far more powerful than a falsehood learned later on. If we discover that we’ve been mispronouncing, say, “Steve Buscemi” for all this time, we aren’t likely to take it as evidence that we’ve ended up in another dimension, but the further back you go, the more ingrained such impressions become. It’s hard to unlearn something that we’ve believed since we were children—which indicates how difficult it can be to discard the more insidious beliefs that some of us are taught from the cradle.
But if the Berenstain Bears enigma has proven to be unusually persistent, I suspect that it’s because many of us really are remembering different versions of this franchise, even if we believe that we aren’t. (You could almost take it as a version of Hilary Putnam’s Twin Earth thought experiment, which asks if the word “water” means the same thing to us and to the inhabitants of an otherwise identical planet covered with a similar but different liquid.) As I’ve recently discovered while reading the books aloud to my daughter, the characters originally created by Stan and Jan Berenstain have gone through at least six distinct incarnations, and your understanding of what this series “is” largely depends on when you initially encountered it. The earliest books, like The Bike Lesson or The Bears’ Vacation, were funny rhymed stories in the Beginner Book style in which Papa Bear injures himself in various ways while trying to teach Small Bear a lesson. They were followed by moody, impressionistic works like Bears in the Night and The Spooky Old Tree, in which the younger bears venture out alone into the dark and return safely home after a succession of evocative set pieces. Then came big educational books like The Bears’ Almanac and The Bears’ Nature Guide, my own favorites growing up, which dispensed scientific facts in an inviting, oversized format. There was a brief detour through stories like The Berenstain Bears and the Missing Dinosaur Bone, which returned to the Beginner Book format but lacked the casually violent gags of the earlier installments. Next came perhaps the most famous period, with dozens of books like Trouble With Money and Too Much TV, all written, for the first time, in prose, and ending with a tidy, if secular, moral. Finally, and jarringly, there was an abrupt swerve into Christianity, with titles like God Loves You and The Berenstain Bears Go to Sunday School.
To some extent, you can chalk this up to the noise—and sometimes the degeneration—that afflicts any series that lasts for half a century. Incremental changes can lead to radical shifts in style and tone, and they only become obvious over time. (Peanuts is the classic example, but you can even see it in the likes of Dennis the Menace and The Family Circus, both of which were startlingly funny and beautifully drawn in their early years.) Fashions in publishing can drive an author’s choices, which accounts for the ups and downs of many a long career. And the bears only found Jesus after Mike Berenstain took over the franchise following the deaths of his parents. Yet many critics don’t bother making these distinctions, and the ones who hate the Berenstain Bears books seem to associate them entirely with the Trouble With Money period. In 2005, for instance, Paul Farhi of the Washington Post wrote:
The larger questions about the popularity of the Berenstain Bears are more troubling: Is this what we really want from children’s books in the first place, a world filled with scares and neuroses and problems to be toughed out and solved? And if it is, aren’t the Berenstain Bears simply teaching to the test, providing a lesson to be spit back, rather than one lived and understood and embraced? Where is the warmth, the spirit of discovery and imagination in Bear Country? Stan Berenstain taught a million lessons to children, but subtlety and plain old joy weren’t among them.
Similarly, after Jan Berenstain died, Hanna Rosin of Slate said: “As any right-thinking mother will agree, good riddance. Among my set of mothers the series is known mostly as the one that makes us dread the bedtime routine the most.”
Which only tells me that neither Farhi nor Rosin ever saw The Spooky Old Tree, which is a minor masterpiece—quirky, atmospheric, gorgeously rendered, and utterly without any lesson. It’s a book that I look forward to reading with my daughter. And while it may seem strange to dwell so much on these bears, it gets at a larger point about the pitfalls in judging any body of work by looking at a random sampling. I think that Peanuts is one of the great artistic achievements of the twentieth century, but it would be hard to convince anyone who was only familiar with its last two decades. You can see the same thing happening with The Simpsons, a series with six perfect seasons that threaten to be overwhelmed by the mediocre decades that are crowding the rest out of syndication. And the transformations of the Berenstain Bears are nothing compared to those of Robert A. Heinlein, whose career somehow encompassed Beyond This Horizon, Have Spacesuit—Will Travel, Starship Troopers, Stranger in a Strange Land, and I Will Fear No Evil. Yet there are also risks in drawing conclusions from the entirety of an artist’s output. In his biography of Anthony Burgess, Roger Lewis notes that he has read through all of Burgess’s work, and he asks parenthetically: “And how many have done that—except me?” He’s got a point. Trying to internalize everything, especially over a short period of time, can provide as false a picture as any subset of the whole, and it can result in a pattern that not even the author or the most devoted fan would recognize. Whether or not we’re from different universes, my idea of Bear Country isn’t the same as yours. That’s true of any artist’s work, and it hints at the problem at the root of all criticism: What do we talk about when we talk about the Berenstain Bears?
The seductiveness of sources
If you’re intrigued by public literary implosions, you’ve probably heard about the storm swirling around the legendary journalist Gay Talese, who has disavowed his new book The Voyeur’s Motel after discrepancies in its version of events were raised by the Washington Post. The book, which was given an eye-catching excerpt back in April in The New Yorker, revolves around the seedy figure of Gerald Foos, a motel owner in Aurora, Colorado, who spied on his guests for more than three decades using a voyeur’s platform installed in the ceiling. Foos kept a detailed journal of his observations that served as Talese’s primary source, along with interviews and personal meetings with his subject. (At one point, Talese joined him in the attic, where the two men observed a sexual encounter between a pair of guests.) Yet according to the Post, much of the material in Foos’s diary is contradicted by public records. Among other issues, Foos sold the motel in 1980 and didn’t regain ownership of it for another eight years, a period in which he claims to have continued his voyeuristic activities. When Talese was told about this, his response was unequivocal: “I’m not going to promote this book. How dare I promote it when its credibility is down the toilet?” And no matter how you slice it, it’s an embarrassing episode for a journalist who has deservedly been acclaimed for most of his career as one of our leading writers of nonfiction.
Even in the earliest days of their relationship, Talese had reason to doubt his subject’s reliability. Foos had written Talese a letter on January 7, 1980, offering to contribute “important information” to Talese’s work on the sex lives of contemporary Americans. Talese wondered from the beginning if Foos might be “a simple fabulist,” and when they met, Foos seemed unusually eager to talk about his experiences: Talese says that he’d never met anyone who had unburdened himself so quickly, which certainly feels like a warning sign. After Talese began to read the journal itself, he became aware of possible inconsistencies: the first entries, for instance, are dated three years before Foos actually bought the motel. As time went on, the diary grew more novelistic in tone, with Foos referring to himself in the third person as “the voyeur,” and some of the incidents that it recounts seem an awful lot like his fantasies of what he wished he could have been. Talese was unable to corroborate Foos’s startling claim that he had witnessed a murder committed by one of the guests, which had suspicious parallels to an unrelated killing that occurred elsewhere around the same time, and he admits: “If I had not seen the attic viewing platform with my own eyes, I would have found it hard to believe Foos’s account.” But he trusted him enough to build much of his book around excerpts from the journal, for which Foos was paid by the publisher, although Talese hedges: “I cannot vouch for every detail that he recounts in his manuscript.”
So why was he taken in for so long? That’s a question that only Talese can answer, but I have a hunch that it had something to do with the seductiveness of Foos’s journal, even more than with the man himself. Talese writes:
My interest in him was not dependent on having access to his attic. I was hoping to get his permission to read the hundreds of pages that he claimed to have written during the past fifteen years, with the result that he would one day allow me to write about him…I hoped that Foos’s manuscript, if I obtained permission to read it, would serve as a kind of sequel to [the Victorian erotic memoir] My Secret Life.
At their initial meeting, Foos pulled out a cardboard box containing the manuscript, which was four inches thick, written on ruled sheets from legal pads with “excellent” penmanship. Foos agreed to send the diary in installments, copying a few pages at a time at his local library to avoid attracting attention. Later, he sent a full typescript of three hundred pages that included material that Talese hadn’t seen before. And it seems likely that Talese was inclined to believe his source, despite all his doubts, because the material seemed so sensational—and it was all right there, in his hands, in a form that could easily be shaped into a compelling book. A manuscript can’t refuse to talk.
I understand this, because I know from experience how the mere existence of a valuable primary source can influence how a nonfiction book is constructed, or even the choice of a subject itself. It’s so hard to obtain enough of this kind of information that when a trove of it falls into your lap, you naturally want to make use of it. I don’t think I would have pitched my book Astounding if I hadn’t known that tens of thousands of pages of documentary material were available for my four subjects: I knew that it was a book that I could write if I just invested enough time and care into the project, which isn’t always the case. But it’s also necessary to balance that natural eagerness with an awareness of the limitations of the evidence. Even if outright fabrication isn’t involved, memories are imprecise, and even straightforward facts need to be verified. (Numerous sources, for example, say that John W. Campbell graduated with a bachelor’s degree from Duke University in 1932, but Campbell himself once said that it was 1933, and he’s listed as a member of the class of 1935. So which is it? According to official records at Duke, it’s none of the above: Campbell attended classes from September 1932 through May 1934, a seemingly small detail, but one that plays a key role in reconstructing the chronology of that period in his life.) All you can do is check your sources, note when you’re relying on subjective accounts, and do everything in your power to get it right. Talese didn’t do it here—which reminds us that even the best writers can find themselves seduced and abandoned.