Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘Buzzfeed’

The twilight of the skeptics


A few years ago, I was working on an idea for a story—still unrealized—that required a sidelong look at the problem of free will. As part of my research, I picked up a copy of the slim book of the same name by the prominent skeptic Sam Harris. At the time, I don’t think I’d even heard of Harris, and I was expecting little more than a readable overview. What I remember about it the most, though, is how it began. After a short opening paragraph about the importance of his subject, Harris writes:

In the early morning of July 23, 2007, Steven Hayes and Joshua Komisarjevsky, two career criminals, arrived at the home of Dr. William and Jennifer Petit in Cheshire, a quiet town in central Connecticut. They found Dr. Petit asleep on a sofa in the sunroom. According to his taped confession, Komisarjevsky stood over the sleeping man for some minutes, hesitating, before striking him in the head with a baseball bat. He claimed that his victim’s screams then triggered something within him, and he bludgeoned Petit with all his strength until he fell silent.

Harris goes on to provide a graphically detailed account, which I’m not going to retype here, of the sexual assault and murder of Petit’s wife and two daughters. Two full pages are devoted to it, in a book that is less than a hundred pages long, and only at the end does Harris come to the point: “As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: there is no extra part of me that could decide to see the world differently or resist the impulse to victimize other people.”

I see what Harris is trying to say here, and I don’t think that he’s even wrong. Yet his choice of example—a horrifying crime that was less than five years old when he wrote Free Will, which the surviving victim, William Petit, might well have read—bothered me a lot. It struck me as a lapse of judgment, or at least of good taste, and it remains the one thing that I really remember about the book. And I’m reminded of it now only because of an excellent article in Wired, “Sam Harris and the Myth of Perfectly Rational Thought,” that neatly lays out many of my old misgivings. The author, Robert Wright, documents multiple examples of his subject falling short of his professed standards, but he focuses on an exchange with the journalist Ezra Klein, whom Harris accused of engaging in “a really indissoluble kind of tribalism, which I keep calling identity politics.” When Klein pointed out that this might be a form of tribal thinking in itself, Harris replied: “I know I’m not thinking tribally.” Wright continues:

Reflecting on his debate with Klein, Harris said that his own followers care “massively about following the logic of a conversation” and probe his arguments for signs of weakness, whereas Klein’s followers have more primitive concerns: “Are you making political points that are massaging the outraged parts of our brains? Do you have your hands on our amygdala and are you pushing the right buttons?”

Just a few years earlier, however, Harris didn’t have any qualms about pushing the reader’s buttons by devoting the first two pages of Free Will to an account of a recent, real-life home invasion that involved unspeakable acts of sexual violence against women—when literally any other example of human behavior, good or bad, would have served his purposes equally well.

Harris denies the existence of free will entirely, so perhaps he would argue that he didn’t have a choice when he wrote those words. More likely, he would say that the use of this particular example was entirely deliberate, because he was trying to make a point by citing the most extreme case of deviant behavior that he could imagine. Yet it’s the placement, as much as the content, that gives me pause. Harris puts it right up front, at the place where most books try for a narrative or argumentative hook, which suggests two possible motivations. One is that he saw it as a great “grabber” opening, and he opportunistically used it for no other reason than to seize the reader’s attention, only to never mention it again. This would be bad enough, particularly for a man who claims to disdain anything so undignified as an appeal to the amygdala, and it strikes me as slightly unscrupulous, in that it literally indicates a lack of scruples. (I’ll have more to say about this word later.) Yet there’s an even more troubling possibility that didn’t occur to me at the time. Harris’s exploitation of these murders, and the unceremonious way in which he moves on, is a signal to the reader. This is the kind of book that you’re getting, it tells us, and if you can’t handle it, you should close it now and walk away. In itself, this amounts to false advertising—the rest of Free Will isn’t much like this at all, even if Harris is implicitly playing to the sort of person who hopes that it might be. More to the point, the callousness of the example probably repelled many readers who didn’t appreciate the rhetorical deployment, without warning, of a recent rape and multiple murder. I was one of them. But I also suspect that many women who picked up the book were just as repulsed. And Harris doesn’t seem to have been overly concerned about this possibility.

Yet maybe he should have been. Wright’s article in Wired includes a discussion of the allegations against the physicist and science writer Lawrence Krauss, who has exhibited a pattern of sexual misconduct convincingly documented by an article in Buzzfeed. Krauss is a prominent member of the skeptical community, as well as friendly toward Harris, who stated after the piece appeared: “Buzzfeed is on the continuum of journalistic integrity and unscrupulousness somewhere toward the unscrupulous side.” Whether or not the site is any less scrupulous than a writer who would use the sexual assault and murder of three women as the opening hook—and nothing else—in his little philosophy book is possibly beside the point. More relevant is the fact that, as Wright puts it, Harris’s characterization of the story’s source “isn’t true in any relevant sense.” Buzzfeed does real journalism, and the article about Krauss is as thoroughly reported and sourced as the most reputable investigations into any number of other public figures. With his blanket dismissal, Harris doesn’t sound much like a man who cares “massively” about logic or rationality. (Neither did Krauss, for that matter, when he said last year in the face of all evidence: “Science itself overcomes misogyny and prejudice and bias. It’s built in.”) But he has good reason to be uneasy. The article in Buzzfeed isn’t just about Krauss, but about the culture of behavior within the skeptical community itself:

What’s particularly infuriating, said Lydia Allan, the former cohost of the Dogma Debate podcast, is when male skeptics ask how they could draw more women into their circles. “I don’t know, maybe not put your hands all over us? That might work,” she said sarcastically. “How about you believe us when we tell you that shit happens to us?”

Having just read the first two pages of Free Will again, I can think of another way, too. But that’s probably just my amygdala talking.

Written by nevalalee

May 21, 2018 at 9:38 am

The war of ideas


Over the last few days, I’ve been thinking a lot about a pair of tweets. One is from Susan Hennessey, an editor for the national security blog Lawfare, who wrote: “Much of my education has been about grasping nuance, shades of gray. Resisting the urge to oversimplify the complexity of human motivation. This year has taught me that, actually, a lot of what really matters comes down to good people and bad people. And these are bad people.” This is a remarkable statement, and in some ways a heartbreaking one, but I can’t disagree with it, and it reflects a growing trend among journalists and other commentators to simply call what we’re seeing by its name. In response to the lies about the students of Marjory Stoneman Douglas High School—including the accusation that some of them are actors—Margaret Sullivan of the Washington Post wrote:

When people act like cretins, should they be ignored? Does talking about their misdeeds merely give them oxygen? Maybe so. But the sliming—there is no other word for it—of the survivors of last week’s Florida high school massacre is beyond the pale…Legitimate disagreement over policy issues is one thing. Lies, conspiracy theories and insults are quite another.

And Paul Krugman went even further: “America in 2018 is not a place where we can disagree without being disagreeable, where there are good people and good ideas on both sides, or whatever other bipartisan homily you want to recite. We are, instead, living in a kakistocracy, a nation ruled by the worst, and we need to face up to that unpleasant reality.”

The other tweet that has been weighing on my mind was from Rob Goldman, a vice president of advertising for Facebook. It was just one of a series of thoughts—which is an important detail in itself—that he tweeted out on the day that Robert Mueller indicted thirteen Russian nationals for their roles in interfering in the presidential election. After proclaiming that he was “very excited” to see the indictments, Goldman said that he wanted to clear up a few points. He had seen “all of the Russian ads” that appeared on Facebook, and he stated: “I can say very definitively that swaying the election was not the main goal.” But his most memorable words, at least for me, were: “The majority of the Russian ad spend happened after the election. We shared that fact, but very few outlets have covered it because it doesn’t align with the main media narrative of Tump [sic] and the election.” This is an astounding statement, in part because it seems to defend Facebook by saying that it kept running these ads for longer than most people assume. But it’s also inexplicable. It may well be, as some observers have contended, that Goldman had a “nuanced” point to make, but he chose to express it on a forum that is uniquely vulnerable to being taken out of context, and to unthinkingly use language that was liable to be misinterpreted. As Josh Marshall wrote:

[Goldman] even apes what amounts to quasi-Trumpian rhetoric in saying the media distorts the story because the facts “don’t align with the main media narrative of Trump and the election.” This is silly. Elections are a big deal. It’s hardly surprising that people would focus on the election, even though it’s continued since. What is this about exactly? Is Goldman some kind of hardcore Trumper?

I don’t think he is. But it also doesn’t matter, at least not when his thoughts were retweeted approvingly by the president himself.

This all leads me to a point that the events of the last week have only clarified. We’re living in a world in which the lines between right and wrong seem more starkly drawn than ever, with anger and distrust rising to an unbearable degree on both sides. From where I stand, it’s very hard for me to see how we recover from this. When you can accurately say that the United States has become a kakistocracy, you can’t just go back to the way things used to be. Whatever the outcome of the next election, the political landscape has been altered in ways that would have been unthinkable even two years ago, and I can’t see it changing during my lifetime. But even though the stakes seem clear, the answer isn’t less nuance, but more. If there’s one big takeaway from the last eighteen months, it’s that the line between seemingly moderate Republicans and Donald Trump was so evanescent that it took only the gentlest of breaths to blow it away. It suggests that we were closer to the precipice than we ever suspected, and unpacking that situation—and its implications for the future—requires more nuance than most forms of social media can provide. Rob Goldman, who should have known better, didn’t grasp this. And while I hope that the students at Marjory Stoneman Douglas do better, I also worry about how effective they can really be. Charlie Warzel of Buzzfeed recently argued that the pro-Trump media has met its match in the Parkland students: “It chose a political enemy effectively born onto the internet and innately capable of waging an information war.” I want to believe this. But it may also be that these aren’t the weapons that we need. The information war is real, but the only way to win it may be to move it into another battlefield entirely.

Which brings us, in a curious way, back to Robert Mueller, who seems to have assumed the same role for many progressives that Nate Silver once occupied—the one man who was somehow going to tell us that everything was going to be fine. But their differences are also telling. Silver generated reams of commentary, but his reputation ultimately came down to his ability to provide a single number, updated in real time, that would indicate how worried we had to be. That trust is clearly gone, and his fall from grace is less about his own mistakes than it is an overdue reckoning for the promises of data journalism in general. Mueller, by contrast, does everything in private, avoids the spotlight, and emerges every few months with a mountain of new material that we didn’t even know existed. It’s nuanced, qualitative, and not easy to summarize. As the coverage endlessly reminds us, we don’t know what else the investigation will find, but that’s part of the point. At a time in which controversies seem to erupt overnight, dominate the conversation for a day, and then yield to the next morning’s outrage, Mueller embodies the almost anachronistic notion that the way to make something stick is to work on it diligently, far from the public eye, and release each piece only when you’re ready. (In the words of a proverbial saying attributed to everyone from Buckminster Fuller to Michael Schrage: “Never show fools unfinished work.” And we’re all fools these days.) I picture him fondly as the head of a monastery in the Dark Ages, laboriously preserving information for the future, or even as the shadowy overseer of Asimov’s Foundation. Mueller’s low profile allows him to mean whatever we want him to, of course, and for all I know, he may not be the embodiment of all the virtues that Ralph Waldo Emerson identified as punctuality, personal attention, courage, and thoroughness. I just know that he’s the only one left who might be. Mueller can’t save us by himself. But his example might just show us the way.

The Uber Achievers


In 1997, the computer scientist Niklaus Wirth, best known as the creator of Pascal, gave a fascinating interview to the magazine Software Development, which I’ve quoted here before. When asked if it would be better to design programming languages with “human issues” in mind, Wirth replied:

Software development is technical activity conducted by human beings. It is no secret that human beings suffer from imperfection, limited reliability, and impatience—among other things. Add to it that they have become demanding, which leads to the request for rapid, high performance in return for the requested high salaries. Work under constant time pressure, however, results in unsatisfactory, faulty products.

When I read this quotation now, I think of Uber. As a recent story by Caroline O’Donovan and Priya Anand of Buzzfeed makes clear, the company that seems to have alienated just about everyone in the world didn’t draw the line at its own staff: “Working seven days a week, sometimes until 1 or 2 a.m., was considered normal, said one employee. Another recalled her manager telling her that spending seventy to eighty hours a week in the office was simply ‘how Uber works.’ Someone else recalled working eighty to one hundred hours a week.” One engineer, who is now in therapy, recalled: “It’s pretty clear that giving that much of yourself to any one thing is not healthy. There were days where I’d wake up, shower, go to work, work until midnight or so, get a free ride home, sleep six hours, and go back to work. And I’d do that for a whole week.”

“I feel so broken and dead,” one employee concluded. But while Uber’s internal culture was undoubtedly bad for morale, it might seem hard at first to make the case that the result was an “unsatisfactory, faulty” product. As a source quoted in the article notes, stress at the company led to occasional errors: “If you’ve been woken up at 3 a.m. for the last five days, and you’re only sleeping three to four hours a day, and you make a mistake, how much at fault are you, really?” Yet the Uber app itself is undeniably elegant and reliable, and the service that it provided is astonishingly useful—if it weren’t, we probably wouldn’t even be talking about it now. When we look at what else Wirth says, though, the picture becomes more complicated. All italics in the following are mine:

Generally, the hope is that corrections will not only be easy, because software is immaterial, but that the customers will be willing to share the cost. We know of much better ways to design software than is common practice, but they are rarely followed. I know of a particular, very large software producer that explicitly assumes that design takes twenty percent of developers’ time, and debugging takes eighty percent. Although internal advocates of an eighty percent design time versus twenty percent debugging time have not only proven that their ratio is realistic, but also that it would improve the company’s tarnished image. Why, then, is the twenty-percent design time approach preferred? Because with twenty-percent design time your product is on the market earlier than that of a competitor consuming eighty-percent design time. And surveys show that the customer at large considers a shaky but early product as more attractive than a later product, even if it is stable and mature.

This description applies perfectly to Uber, as long as we remember that its “product” isn’t bounded by its app alone, but extends to its impact on drivers, employees, competitors, and the larger community in which it exists—or what an economist would call its externalities. Taken as a closed system, the Uber experience is perfect, but only because it pushes its problems outside the bounds of the ride itself. When you look at the long list of individuals and groups that its policies have harmed, you discern the outlines of its true product, which can be described as the system of interactions between the Uber app and the world. You could say this of most kinds of software, but it’s particularly stark for a service that is tied to the problem of physically moving its customers from one point to another on the earth’s surface. By that standard, “shaky but early” describes Uber beautifully. It certainly isn’t “stable and mature.” The company expanded to monstrous proportions before basic logistical, political, and legal matters had been resolved, and it acted as if it could simply bull its way through any obstacles. (Its core values, let’s not forget, included “stepping on toes” and “principled confrontation.”) Up to a point, it worked, but something had to give, and economic logic dictated that the stress fall on the human factor, which was presumably resilient enough to absorb punishment from the design and technology sides. One of the most striking quotes in the Buzzfeed article comes from Uber’s chief human resources officer: “Many employees are very tired from working very, very hard as the company grew. Resources were tight and the growth was such that we could never hire sufficiently, quickly enough, in order to keep up with the growth.” To assert that “resources were tight” at the most valuable startup on the planet seems like a contradiction in terms, and it would be more accurate to say that Uber decided to channel massive amounts of capital in certain directions while neglecting those that it cynically thought could take it.

But it was also right, until it wasn’t. Human beings are extraordinarily resilient, as long as you can convince them to push themselves past the limits of their ability, or at least to do work at rates that you can afford. In the end, they burn out, but there are ways to postpone that moment or render it irrelevant. When it came to its drivers, Uber benefited from a huge pool of potential contractors, which made turnover a statistical, rather than an individual, problem. With its corporate staff and engineers, there was always the power of money, in the form of equity in the company, to persuade people to stay long past the point where they would have otherwise quit. The firm gambled that it would lure in plenty of qualified hires willing to trade away their twenties for the possibility of future wealth, and it did. (As the Buzzfeed article reveals, Uber seems to have approached compensation for its contractors and employees in basically the same way: “Uber acknowledges that it pays less than some of its top competitors for talent…The Information reported that Uber uses an algorithm to estimate the lowest possible compensation employees will take in order to keep labor costs down.”) When the whole system finally failed, it collapsed spectacularly, and it might help to think of Uber’s implosion, which unfolded over less than six months, as a software crash, with bugs that were ignored or patched cascading in a chain reaction that brings down the entire program. And the underlying factor wasn’t just a poisonous corporate culture or the personality of its founder, but the sensibility that Wirth identified two decades ago, as a company rushed to get a flawed idea to market on the assumption that consumers—or society as a whole—would bear the costs of correcting it. As Wirth asks: “Who is to blame for this state of affairs? The programmer turned hacker; the manager under time pressure; the business man compelled to extol profit wherever possible; or the customer believing in promised miracles?”

Written by nevalalee

July 20, 2017 at 8:29 am

Is this post an example of Betteridge’s Law?


Article in The New York Times

Yesterday, I was browsing The A.V. Club when I came across the following clunky headline: “Could Guardians of the Galaxy be worthy of the coveted Firefly comparison?” I only skimmed the article itself, which asks, in case you were wondering, if the Guardians of the Galaxy animated series could be “the next Firefly”—a matter on which I don’t have much of an opinion one way or the other. But my attention was caught by one of the reader comments in response, which invoked Betteridge’s Law of Headlines: “Any headline that ends in a question mark can be answered by the word ‘no.’” Needless to say, this is a very useful rule. In its current form, it was set forth by the technology writer Ian Betteridge in response to the TechCrunch headline “Did Just Hand Over User Listening Data to the RIAA?” Betteridge wrote:

This story is a great demonstration of my maxim that any headline which ends in a question mark can be answered by the word “no.” The reason why journalists use that style of headline is that they know the story is probably bullshit, and don’t actually have the sources and facts to back it up, but still want to run it. Which, of course, is why it’s so common in the Daily Mail.

Betteridge may have given the rule its most familiar name, but it’s actually much older. It pops up here and there in collections of Murphy’s Law and its variants, and among academics, it’s best known as Hinchliffe’s Rule, attributed—perhaps apocryphally—to the physicist Ian Hinchliffe, which states: “If the title of a scholarly article is a yes or no question, the answer is ‘no.'” (This recently led the Harvard University computer scientist Stuart M. Shieber to publish a scholarly article titled “Is This Article Consistent with Hinchliffe’s Rule?” The answer is no, but only if the answer is yes.) In his book My Trade, the British newspaper editor Andrew Marr makes the same point more forcefully:

If the headline asks a question, try answering “no.” Is This the True Face of Britain’s Young? (Sensible reader: No.) Have We Found the Cure for AIDS? (No; or you wouldn’t have put the question mark in.) Does This Map Provide the Key for Peace? (Probably not.) A headline with a question mark at the end means, in the vast majority of cases, that the story is tendentious or over-sold. It is often a scare story, or an attempt to elevate some run-of-the-mill piece of reporting into a national controversy and, preferably, a national panic. To a busy journalist hunting for real information a question mark means “don’t bother reading this bit.”

Article from The Daily Mail

What I find most interesting about Betteridge’s version of the rule is his last line: “Which, of course, is why it’s so common in the Daily Mail.” This implies that the rule can be used not just to identify unreliable articles, but to characterize publications as a whole. As I write this, for instance, three headlines on the New York Times home page run afoul of it: “Is Valeant Pharmaceuticals the Next Enron?” “Has Diversity Lost Its Meaning?” “Are Flip Phones Having a Retro Chic Moment?” (There are a few more that technically sport question marks but don’t quite fit the rubric, such as “Should You Be Watching Supergirl?”) The Daily Mail site, by contrast, has five times as many, and most of them fall neatly into the Betteridge category, including my favorite: “Does This Clip Show the Corpse of a Feared Chupacabra Vampire?” Buzzfeed, interestingly, doesn’t go for that headline format at all, and it only uses question marks to signify its famous personality quizzes: “Are You More Like Adele or Beyoncé?” This implies that a headline phrased in the form of a question might not be especially good at attracting eyeballs: Buzzfeed, which has refined clickbait into an art form, would surely use it more often if it worked. Most likely, as both Betteridge and Marr imply, it’s a way out for journalists who want to publish a story, but aren’t ready to stand behind it entirely. If anyone objects, they can always say that they were just raising the issue for further discussion.

But most readers, I suspect, can intuitively sense the difference. Headlines like this have always reminded me of “The End?” at the close of Manos: The Hands of Fate, to which Crow T. Robot replies: “Umm…Yes? No! I want to change my answer!” It might be instructive to conduct a study of whether or not they’ve increased in frequency over the last decade, as news cycles have grown ever more compressed and the need to generate think pieces on demand forces writers to crank out stories with a minimum of preparation. It’s hard to blame the reporters themselves, who are operating under conditions that actively discourage the kind of extended research process that would allow the question mark to be removed or the article to be dropped altogether. (And it’s worth noting that editors, not reporters, are the ones who write the headlines.) This isn’t to say that there can’t be good stories that sport headlines in the form of a question: like the Bechdel Test for movies, it’s less about criticizing individual works than making us more aware of the landscape. And given the choice, the question mark—which at least provides a visible red flag—is preferable to the exclamation point, literal or otherwise, that characterizes so much current content, from cable news on down. In that light, the question mark almost feels like a form of courtesy. And we have to learn to live with it, at least until good journalism, like the flip phone, experiences a retro chic moment of its own.

The tabloid touch


In Touch Weekly

“Oh, I’m a hero. I was shot twice in the Tribune.”
“I read where you were shot five times in the tabloids.”
“It’s not true. He didn’t come anywhere near my tabloids!”

The Thin Man

A couple of weeks ago, Buzzfeed writer Anne Helen Petersen published a long article on In Touch Weekly and the evolution of the modern tabloid. It’s a fun piece, full of juicy insights, and it’s worth reading in its entirety. Yet what caught my eye the most were details like these:

In Touch piqued that fascination by manufacturing elaborate, multipart, melodramatic narratives—the stuff of soap operas…Several former employees remember [editor Richard] Spencer laying out a four-act cover drama for what would happen between Brad Pitt and Angelina Jolie at the beginning of each month—a pregnancy, for example, followed by a breakup scare, a reconciliation, and then marriage rumors.

The beats of the drama may have been fictionalized, but it was easy to find sources—including rival publicists, other celebrities, former friends, estranged family—to support the claims…It’s not that In Touch made things up, it’s that the publicist and family members and celebrities themselves did…

“For [editor David] Perel, each story is a chapter in a novel,” one recent staffer reported. “He decides on the narrative, then has his reporters work sources to match the narrative.”

And this is just a more blatant version of what every writer, nonfiction and otherwise, does on a regular basis, although not always so brazenly. When you’re writing a story, even if you’re a reputable journalist, you often find yourself selecting facts to bolster the thesis implied in the headline or first paragraph. In some cases, you go looking for a quote from an outside source to support a conclusion you’ve already reached, and fortunately for reporters, the world is full of people willing to supply quotable material on demand—which is why the same expert sources repeatedly crop up in business or pop culture reportage. It isn’t a question of bias, but of structuring a decent news story: even the most apparently objective articles provide a narrative that helps us fit facts into a pattern that we can use or enjoy. Not every story we tell about the world is equally accurate, of course, and thoughtful readers and viewers have long since learned to recognize false balance. What puts a tabloid like In Touch into a different category is how cheerfully it severs the link, already tenuous, between reality and the “sourced” stories it produces. And the punchline is that this approach can shade imperceptibly into real reporting, as we’ve seen recently with the magazine’s coverage of the Duggars.

In Touch Weekly

What makes tabloids so fascinating is that they display a funhouse version of a process that we’ve learned to accept unthinkingly from more legitimate forms of nonfiction. An article in yesterday’s New York Times interviewed a range of documentary filmmakers about the ways they shape their material, from turning on a television set to provide a source of lighting in a scene to gently coaching interview subjects to arrive at a deeper emotional truth. And choices about selection of footage, arrangement, juxtaposition, and chronology are central to the documentary form. Occasionally, as with The Jinx, these liberties are obvious enough to raise questions about accuracy. But in every case, filmmakers walk a fine line between fidelity to the facts and the structural judgment calls that every story requires. In theory, the only kind of documentary evidence that resists that kind of manipulation is a raw, unedited chunk of footage, but in practice—as we see, for instance, in the varying responses to the Eric Garner video—even an apparently unambiguous record can be colored by context, where the excerpt starts and ends, and the viewer’s own preconceptions. We’re all constantly editing reality to conform with the mental pictures we form of it; what sets apart a documentary, or journalism, is that this editing has been outsourced to someone else.

And we’ve implicitly agreed to a measure of editorial intervention as the price for having information delivered to us in a form we can absorb. I’ve spoken at length elsewhere about how relentlessly podcasts and radio journalism are shaped to retain the listener’s attention: “Anecdote then reflection, over and over,” as Ira Glass says, which means that we’re not just being given a story, but constantly being told what to think about it. Otherwise, the result would be boring or unintelligible, as we often see in podcasts that don’t consider their structure so insistently. Asking journalists and other writers to refrain from sculpting the material betrays a misunderstanding of how we all think and learn. Everything is subject to a point of view, even the unmediated experience of our own lives: the best we can do is be aware of it, skeptical when necessary, and selective about whom we trust. In Touch makes for a great case study because the bones are exposed for all to see, but even a tabloid headline can influence us in subtle ways, as Petersen notes:

A typical…cover promised to answer a question the reader didn’t even know he or she had: “What Went Wrong,” “Why It’s Over,” or “Why They Split.”

All storytelling poses and resolves such unconscious questions, which only makes it harder to distinguish from what we might be thinking on the inside. And when a story moves us, intrigues us, or makes us feel, we can truly say that it got us right in the tabloids.

Written by nevalalee

June 29, 2015 at 9:42 am

“Then the crowd rushed forward…”



Note: This post is the twenty-second installment in my author’s commentary for Eternal Empire, covering Chapter 23. You can read the previous installments here.

Last week, Buzzfeed ran a fun feature in which a few dozen television writers talked about the favorite thing they’d ever written. There’s a lot of good stuff here—I particularly liked Rob Thomas’s account of the original opening of Veronica Mars, which ended up on the cutting room floor—but the story that really stuck with me came courtesy of Damon Lindelof. At this point, Lindelof isn’t anyone’s favorite writer, but few would deny that the finale of the third season of Lost marked a high point in his career, with its closing revelation that what looked like a flashback was actually a scene from the future. It’s a fantastic mislead that viewers still talk about to this day, and the best part is what Lindelof acknowledges as his inspiration:

The final scene of “Through the Looking Glass”—the third season finale of Lost—was stolen from the movie Saw 2.

If you have not seen Saw 2, all you need to know is that Donnie Wahlberg is in it and that the twist at the end involves tricking the audience into thinking they’re watching something unfold in present time, when in fact, it is unfolding in the past. Also, Donnie Wahlberg is in it. Did I say that already?

I love this for two reasons. First, although I’ve never gotten around to seeing Saw 2, I’ve been impressed by its closing twist ever since it was first described to me: I think it would be discussed in the same breath as other great surprise endings if it didn’t reside in such a disreputable genre. (It’s also worth noting that it was originally written by Darren Lynn Bousman as an unrelated spec script, later retooled to serve as a Saw sequel. Bousman went on to direct the next three films in the franchise, which is a lesson in itself: if you come up with a great twist, it can give you a career.) Second, it’s a reminder that you can derive inspiration from almost anything, and that the germ of an idea is less meaningful than its execution. If Lindelof hadn’t spelled it out, it’s unlikely that many viewers would have made the connection. As I’ve noted here before, even a short description of someone else’s idea—as happened with the Doctor Who writer Russell T. Davies and the Star Trek: The Next Generation episode “Darmok”—can ignite a line of thought. And when it comes to drawing material from things you’ve seen, you often get better ideas from flawed efforts than from masterpieces. A great movie feels like the definitive version of its story; a misfire makes you think about the other ways in which it might have been done.


For instance, I don’t know how many readers here remember a movie called Dark Blue. It was already a flop when it came out over a decade ago—I’m one of the few who paid to see it in theaters—and it doesn’t seem to have had much of an afterlife on video. Even I don’t remember much about it, although I think I liked it fine: it was a messy, textured cop movie with a nice lead performance from Kurt Russell, who is worth watching in anything. What attracted me to it, though, were two elements. It was based on an original story by James Ellroy, author of L.A. Confidential, and the idea of a sprawling, contemporary crime saga from Ellroy’s brain was an enticing one. And the premise itself grabbed my attention: a violent police melodrama set against the backdrop of the Los Angeles riots. (Apparently, Ellroy developed the idea for so long that it was originally set during the Watts riots, which says something in itself about the byways a screenplay can take in Hollywood.) In the end, the execution wasn’t quite memorable enough for it to stick in my head. But its core idea, of a plot that intersected unexpectedly with a historical riot in a big city, is one I never forgot. And years later, when the London riots in Hackney coincided with my planning for Eternal Empire, the pieces just fell into place.

And the result, in Chapter 23, is less an homage to Dark Blue than a kind of remake, filtered through the fuzziness of time, or my private dream of what such a scene could be. Since much of the appeal of a sequence like this comes from how closely it hews to actual events, I invested a lot of effort—maybe too much—in putting together a timeline of the riots and assembling visual references. Several moments in the scene essentially put Wolfe and Ilya in the middle of iconic photos and videos from that day. I had to fudge a few details to make it all fit: the prison break in the previous chapter takes place in early morning, so there’s a space of six hours or so in the chronology that is hard to account for. Still, it all hangs together pretty well, and the result is one of my favorite things in this novel. And what would Ellroy say? I’d like to think that he’d approve, or at least tolerate it, since he isn’t above much the same kind of creative appropriation: he admits that he lifted the premise of his novel The Big Nowhere directly from the William Friedkin movie Cruising. (And that doesn’t even get into how much Dark Blue, and so many other movies in its genre, owes To Live and Die in L.A.) The cycle of appropriation goes ever on, and it’s a good thing. Until a book or movie executes an idea so expertly that it yanks it out of circulation, everything should be up for grabs. And in the meantime, all a writer can do is take it and run…

My Uber Ex



Yesterday, I canceled my Uber account. Many of you probably already know why, but on the off chance you don’t, I can only quote the original report on Buzzfeed:

A senior executive at Uber suggested that the company should consider hiring a team of opposition researchers to dig up dirt on its critics in the media—and specifically to spread details of the personal life of a female journalist who has criticized the company…

Over dinner, he outlined the notion of spending “a million dollars” to hire four top opposition researchers and four journalists. That team could, he said, help Uber fight back against the press—they’d look into “your personal lives, your families,” and give the media a taste of its own medicine.

[Senior vice president Emil] Michael was particularly focused on one journalist, Sarah Lacy, the editor of the Silicon Valley website PandoDaily, a sometimes combative voice inside the industry…Uber’s dirt-diggers, Michael said, could expose Lacy. They could, in particular, prove a particular and very specific claim about her personal life.

Since the story was first reported by Ben Smith, who personally witnessed the remarks, both Michael and Uber chief Travis Kalanick have publicly apologized. But I frankly don’t trust them. And while my feelings have more than a little to do with the fact that I’m a freelance writer married to a journalist who has covered Uber in the past, they’re also a reflection of larger, more troubling questions that should concern more than just members of the press and their families.

First, I should go on the record as saying that I love the service that Uber provides. It’s convenient, cheap, and has the potential to change people’s lives for the better: I don’t think it’s an exaggeration to say that it’s the most innovative startup concept of the decade. If anything, though, this only makes the underlying point more stark, which is how toxic values—encouraged by the culture in which they emerge—can poison even great ideas. There’s the unstated assumption, for instance, that you can draw an equivalence between a reporter covering a corporation’s business practices and that same corporation “fighting back” by investigating the reporter’s personal life. This isn’t some theoretical consideration; there are documented cases of companies doing exactly this. What really startles me, though, is Uber’s lukewarm reaction. In a series of tweets issued in response to the report, Kalanick stated that Michael’s comments did not fairly represent the company, but he also made another curious statement: “His duties here at Uber do not involve communications strategy or plans.” This seems calculated to partially absolve Michael, or the company itself, but it only makes the lack of action harder to understand. Michael, who is not involved in communications strategy, made remarks that had an instant chilling effect on the very journalists on whom Uber depends for fair coverage, and he was careless enough to make them to Ben Smith, one of the most famous media figures in the country. As John Hodgman notes in a blog post on the same subject: “If this isn’t a fireable offense, are there any?”


The fact that Michael seems unlikely to face any additional consequences for his comments makes it clear that deep down, Uber itself doesn’t seem to think that they’re particularly offensive, and that this is just a fake controversy stirred up by reporters looking for attention. (If I had any doubt at all about this, I’d only have to reread the tweet from another Uber executive, since deleted, sharing a photo of employees celebrating after Kalanick’s apology, with the hashtag #HatersGonnaHate. That’s the moment I decided to cancel my account.) And this reflects a fundamental cultural problem. Last week, I noted that the unforgiving conditions of venture funding force startups to compress the process of testing new ideas into punishing, probably unsustainable timelines. Along with everything else, this kind of corporate Darwinism leads to a weirdly insecure, adversarial relationship with the media. In many cases, a company’s brand is all it has: money is raised for an idea that may be years away from delivery, and in the meantime, a reputation has to be spun out of nothing. Press coverage plays a huge part in shaping that narrative, so a startup with nothing but a sales pitch for an app is more likely to rage against a negative story than, say, General Electric, which has more important things to worry about. And when half of your perceived value as a company stems from what journalists have to say about you, it’s easy to conclude that if a reporter isn’t your friend, she’s your enemy.

I’d be tempted to say that it’s similar to how writers feel about critics, if it weren’t for the crucial difference that most writers don’t have the resources to bully or intimidate the critics they don’t like. And Uber stands only at one end of a continuum that extends way, way down to some of the ugliest recent developments in tech culture. If nothing else, they’re drawing from the same talent pool: the current startup market has evolved so that those who are most attracted to it are likely to share a common set of principles, including a sense that any criticism amounts to a personal attack, which can only have a freezing effect on innovation in the long run. Still, it isn’t difficult to see why Uber feels so threatened. Unlike most startups, they have a sensational core idea and tons of revenue, but their entire operation is predicated on trust. When their image suffers, that trust is diminished, and we’re less likely to order one of their cars. The fact that they maintain minimal infrastructure of their own, which is a big part of what makes them so exciting, exposes them to competition from shrewder rivals. And maybe they’re right to be worried. Enough customers have started canceling their accounts because of Michael’s comments that Uber has started to respond with a boilerplate email stating that his views don’t reflect those of the larger company. When I canceled my own account, I was halfway hoping to get the same reply. Instead, it only said: “Sorry to see you go!”

Written by nevalalee

November 20, 2014 at 10:17 am
