Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘The Visual Display of Quantitative Information’

Subterranean fact check blues


In Jon Ronson’s uneven but worthwhile book So You’ve Been Publicly Shamed, there’s a fascinating interview with Jonah Lehrer, the superstar science writer who was famously hung out to dry for a variety of scholarly misdeeds. His troubles began when a journalist named Michael C. Moynihan noticed that six quotes attributed to Bob Dylan in Lehrer’s Imagine appeared to have been fabricated. Looking back on this unhappy period, Lehrer blames “a toxic mixture of insecurity and ambition” that led him to take shortcuts—a possibility that occurred to many of us at the time—and concludes:

And then one day you get an email saying that there’s these…Dylan quotes, and they can’t be explained, and they can’t be found anywhere else, and you were too lazy, too stupid, to ever check. I can only wish, and I wish this profoundly, I’d had the temerity, the courage, to do a fact check on my last book. But as anyone who does a fact check knows, they’re not particularly fun things to go through. Your story gets a little flatter. You’re forced to grapple with all your mistakes, conscious and unconscious.

There are at least two striking points about this moment of introspection. One is that the decision whether or not to fact-check a book was left to the author himself, which feels like it’s the wrong way around, although it’s distressingly common. (“Temerity” also seems like exactly the wrong word here, but that’s another story.) The other is that Lehrer avoided asking someone to check his facts because he saw it as a painful, protracted process that obliged him to confront all the places where he had gone wrong.

He’s probably right. A fact check is useful in direct proportion to how much it hurts, and having recently endured one for my article on L. Ron Hubbard—a subject on whom no amount of factual caution is excessive—I can testify that, as Lehrer says, it isn’t “particularly fun.” You’re asked to provide sources for countless tiny statements, and if you can’t find one in your notes, you just have to let it go, even if it kills you. (As far as I can recall, I had to omit exactly one sentence from the Hubbard piece, on a very minor point, and it still rankles me.) But there’s no doubt in my mind that it made the article better. Not only did it catch small errors that otherwise might have slipped into print, but it forced me to go back over every sentence from another angle, critically evaluating my argument and asking whether I was ready to stand by it. It wasn’t fun, but neither are most stages of writing, if you’re doing it right. In a couple of months, I’ll undergo much the same process with my book, as I prepare the endnotes and a bibliography, which is the equivalent of my present self performing a fact check on my past. This sort of scholarly apparatus might seem like a courtesy to the reader, and it is, but it’s also good for the book itself. Even Lehrer seems to recognize this, stating in his attempt at an apology in a keynote speech for the Knight Foundation:

If I’m lucky enough to write again, I won’t write a thing that isn’t fact-checked and fully footnoted. Because here is what I’ve learned: unless I’m willing to continually grapple with my failings—until I’m forced to fix my first draft, and deal with criticism of the second, and submit the final for a good, independent scrubbing—I won’t create anything worth keeping around.

For a writer whose entire brand is built around counterintuitive, surprising insights, this realization might seem blindingly obvious, but it only speaks to how resistant most writers, including me, are to any kind of criticism. We might take it better if we approached it with the notion that it isn’t simply for the sake of our readers, or our hypothetical critics, or even the integrity of the subject matter, but for ourselves. A footnote lurking in the back of the book makes for a better sentence on the page, if only because of the additional pass that it requires. It would help if we saw such standards—the avoidance of plagiarism, the proper citation of sources—not as guidelines imposed by authority from above, but as a set of best practices that well up from inside the work itself. A few days ago, there was yet another plagiarism controversy, which, in what Darin Morgan once called “one of those coincidences found only in real life and great fiction,” also involved Bob Dylan. As Andrea Pitzer of Slate recounts it:

During his official [Nobel] lecture recorded on June 4, laureate Bob Dylan described the influence on him of three literary works from his childhood: The Odyssey, All Quiet on the Western Front, and Moby-Dick. Soon after, writer Ben Greenman noted that in his lecture Dylan seemed to have invented a quote from Moby-Dick…I soon discovered that the Moby-Dick line Dylan dreamed up last week seems to be cobbled together out of phrases on the website SparkNotes, the online equivalent of CliffsNotes…Across the seventy-eight sentences in the lecture that Dylan spends describing Moby-Dick, even a cursory inspection reveals that more than a dozen of them appear to closely resemble lines from the SparkNotes site.

Without drilling into it too deeply, I’ll venture to say that if this all seems weird, it’s because Bob Dylan, of all people, after receiving the Nobel Prize for Literature, might have cribbed statements from an online study guide written by and for college students. But isn’t that how it always goes? Anecdotally speaking, plagiarists seem to draw from secondary or even tertiary sources, like encyclopedias, since the sort of careless or hurried writer vulnerable to indulging in it in the first place isn’t likely to grapple with the originals. The result is an inevitable degradation of information, like a copy of a copy. As Edward Tufte memorably observes in Visual Explanations: “Incomplete plagiarism leads to dequantification.” In context, he’s talking about the way in which illustrations and statistical graphics tend to lose data the more often they get copied. (In The Visual Display of Quantitative Information, he cites a particularly egregious example, in which a reproduction of a scatterplot “forgot to plot the points and simply retraced the grid lines from the original…The resulting figure achieves a graphical absolute zero, a null data-ink ratio.”) But it applies to all kinds of plagiarism, and it makes for a more compelling argument, I think, than the equally valid point that the author is cheating the source and the reader. In art or literature, it’s better to argue from aesthetics than ethics. If fact-checking strengthens a piece of writing, then plagiarism, with its effacing of sources and obfuscation of detail, can only weaken it. One is the opposite of the other, and it’s no surprise that the sins of plagiarism and fabrication tend to go together. They’re symptoms of the same underlying sloppiness, and this is why writers owe it to themselves—not to hypothetical readers or critics—to weed them out. A writer who is sloppy on small matters of fact can hardly avoid doing the same on the higher levels of an argument, and policing the one is a way of keeping an eye on the other. It isn’t always fun. But if you’re going to be a writer, as Dylan himself once said: “Now you’re gonna have to get used to it.”

The Travolta moment



There’s a moment halfway through Jonathan Franzen’s The Corrections when Enid Lambert, the matriarch of the novel’s dysfunctional Midwestern family, visits a doctor on a cruise ship. It’s an important scene—Enid leaves with a handful of antidepressants that will play a big role later in the story—and Franzen lavishes his usual amount of care on the sequence, which runs for a full nine pages. But here’s how he chooses to describe the doctor on his first appearance:

He had a large, somewhat coarse-skinned face like the face of the Italian-American actor people loved, the one who once starred as an angel and another time as a disco dancer.

I adore The Corrections, but this is an embarrassing sentence—one of the worst I’ve ever seen from the pen of a major novelist. It’s particularly surprising coming from Franzen, who has thought as urgently and probingly as any writer alive about the problem of voice. But it’s also the kind of lapse that turns out to be unexpectedly instructive, precisely because it comes from an author who really ought to know better.

So why does this sentence grate so much? Let’s break down the reasons one at a time:

  1. Franzen clearly wants to tell us that the doctor looks like John Travolta, but he’s too shy to come out and say so, so he uses more than twenty words to convey what could have easily been expressed in two.
  2. In the process, he’s false to his character. No woman of Enid’s generation and background would have any trouble coming up with Travolta’s name, especially if she were familiar with his role in Michael, of all movies. It’s not like she’s trying to remember, say, Richard Jenkins.
  3. Worst of all, it takes us out of the story. Instead of focusing on the moment—which happens to be a crucial turning point for Enid’s character—we’re distracted by Franzen’s failure of style.

And the punchline here is that a lesser novelist would simply have said that the doctor looked like Travolta and been done with it. Franzen, an agonizingly smart writer, senses how lazy this is, so he backs away, but not nearly far enough. And the result reads like nothing a recognizable human being would feel or say.


I got to thinking about this after reading John McPhee’s recent New Yorker piece about frames of reference. McPhee’s pet peeve is when authors describe a person’s appearance by leaning on a perceived resemblance to a famous face, as in this example from Ian Frazier: “She looks enough like the late Bea Arthur, the star of the nineteen-seventies sitcom Maude, that it would be negligent not to say so.” Clearly, if you don’t remember how Bea Arthur looks, this description isn’t very useful. And while any such discussion tends to turn into a personal referendum on which references are obvious and which aren’t—McPhee claims he doesn’t know who Gene Wilder is, for instance—his point is a valid one:

If you say someone looks like Tom Cruise—and you let it go at that—you are asking Tom Cruise to do your writing for you. Your description will fail when your reader doesn’t know who Tom Cruise is.

And references that seem obvious now may not feel that way in twenty years. McPhee concludes, reasonably, that if you’re going to compare a character to a celebrity, you need to pay back that borrowed vividness by amplifying it with a line of description of your own, as when Joel Achenbach follows up his reference to Gene Wilder by referring to the subject’s “manic energy.”

When we evaluate Franzen’s Travolta moment in this light, it starts to look even worse. It reminds me a little of the statistician Edward Tufte, who famously declared that graphical excellence gives the viewer the greatest number of ideas in the shortest time with the least ink in the smallest space. In his classic The Visual Display of Quantitative Information, he introduces the concept of the data-ink ratio, which consists of the amount of data ink divided by the total ink used to print a statistical graphic. (“Data ink” is the ink in a graph or chart that can’t be erased without a real loss of information.) Ideally, as large a proportion of the ink as possible should be devoted to the presentation of the data, rather than to redundant material. As an example of the ratio at its worst, Tufte reprints a graph from a textbook that erased all the data points while retaining the grid lines, noting drily: “The resulting figure achieves a graphical absolute zero, a null data-ink ratio.” And that’s what Franzen gives us here. In twenty words, he offers no information that the reader isn’t asked to supply on his or her own. To be fair, Franzen is usually better than this. But here, it’s like giving us a female character and saying that she looks like Adele Dazeem.
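Tufte’s ratio is concrete enough to compute. As a toy sketch (my own invention, not an example from Tufte’s book), treat a chart as a list of marks, each with an amount of ink and a flag for whether it carries data, and take the share of ink that does real work:

```python
# Toy illustration of Tufte's data-ink ratio. The mark names and
# ink weights below are invented for the sake of the example.

def data_ink_ratio(marks):
    """marks: list of (ink_used, carries_data) pairs."""
    total = sum(ink for ink, _ in marks)
    data = sum(ink for ink, carries in marks if carries)
    return data / total if total else 0.0

chart = [
    (40, True),   # plotted data points
    (10, True),   # axis labels tied to the data
    (30, False),  # decorative grid lines
    (20, False),  # background shading ("chartjunk")
]

print(data_ink_ratio(chart))  # 0.5 — half the ink conveys information
```

Erase the points and keep only the grid, as in the textbook example Tufte reprints, and the ratio drops to zero: the “graphical absolute zero” he describes.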

An alternative library of creativity



If you want to be a writer, there are plenty of guidebooks and manuals available, and some of them are very good. When you’re stuck on a particular narrative problem or trying to crack a story, though, you’ll often find that it’s helpful to approach it from an alternative angle, or to apply tactics and techniques from an unrelated creative field. I’ve always found inspiration in works intended for other disciplines, so here’s a sampling, in chronological order of original publication, of ten I’ve found consistently stimulating:

Magic and Showmanship (1969) by Henning Nelms. A magic trick is a work of theater in miniature, and writers can learn a lot from the insights that sleight of hand affords into the use of staging, emphasis, and misdirection, as tested under particularly unforgiving conditions. This book by the great Henning Nelms is the most useful work on the subject I’ve found from the perspective of storytelling and performance, and it’s particularly helpful on the subjects of clarity and dramatic structure.

Adhocism: The Case for Improvisation (1972) by Charles Jencks and Nathan Silver. An eccentric, highly opinionated meditation on bricolage, or the art of making do with whatever happens to be at hand, which is something writers do all the time. (The real trick is taking a story assembled out of odds and ends and making the result seem inevitable.) Out of print for many years, it was recently reissued in a handsome new edition that belongs on the shelf of any artist or designer.


The Little Lisper (1974) by Daniel P. Friedman and Matthias Felleisen. Coding is a surprisingly valuable field for writers to study, since it deals directly with problems of structure, debugging, and managing complex projects. I could have named any number of books here—Programmers at Work and its successor Coders at Work are also worth seeking out—but this classic work on the Lisp programming language, later updated as The Little Schemer, is particularly elegant, with a focus on teaching the reader how to think recursively.
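For a taste of that recursive style, here is a minimal sketch of rember, one of the book’s classic exercises, rendered in Python rather than the book’s Scheme (the function name comes from the book; the translation is mine):

```python
# The Little Schemer's method: define a list operation by asking two
# questions — is the list empty? if not, what do we do with the first
# element and the recursive answer for the rest?

def rember(item, lst):
    """Remove the first occurrence of item from lst."""
    if not lst:                  # the empty list: nothing to remove
        return []
    if lst[0] == item:           # found it: the rest of the list is the answer
        return lst[1:]
    return [lst[0]] + rember(item, lst[1:])  # keep the head, recur on the tail

print(rember("and", ["bacon", "lettuce", "and", "tomato"]))
# ['bacon', 'lettuce', 'tomato']
```

The discipline on display—breaking a problem into a base case and a smaller version of itself—is the same habit of decomposition that serves a writer wrestling with a large structure.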

A Pattern Language (1977) by Christopher Alexander. Alexander’s magnum opus—which is one of the two or three books I’d take with me if I couldn’t own any others—is ostensibly about architecture, but its greatest influence has been in outlying fields like software design. This isn’t surprising, because it’s really a book about identifying patterns that live, defining them as strictly as possible while leaving room for intuition, and building them up into larger structures, all from the perspective of those who use them every day. Which is what creativity, of any kind, is all about.

Disney Animation: The Illusion of Life (1981) by Ollie Johnston and Frank Thomas. I’ve always been fascinated by animation, which scales up from the simplest possible tools and materials—a pencil, a pad of paper, a hand to flip the pages—to collaborative efforts of enormous complexity that can require years of effort. Not surprisingly, its traditions, tricks, and rules of thumb have plenty to teach storytellers of all kinds, and this work by two of Disney’s Nine Old Men comes as close as a book can to providing an education on the subject between covers.


The Visual Display of Quantitative Information (1983) by Edward Tufte. Tufte’s rules for clarity and simplicity in the presentation of statistics apply as much to writing as to charts and graphs, and his ruthless approach to eliminating “chartjunk” is one that more authors and editors could stand to follow. (“Graphical excellence is that which gives to the viewer the greatest number of ideas in the shortest time with the least ink in the smallest space.”) His other books—Envisioning Information, Visual Explanations, and Beautiful Evidence—are also essential, hugely pleasurable reads.

On Directing Film (1992) by David Mamet. I’ve spoken about this book endlessly before, but it’s still the single best introduction I’ve found to the basic principles of storytelling. (In the meantime, I’ve also learned how much Mamet owes to the works of Stanislavski, particularly the chapter “Units” from An Actor Prepares.) It’s the closest thing I’ve seen to a set of immediately applicable tools that solve narrative problems under all circumstances, and although it can be read in less than an hour, it takes a lifetime to put it into practice.

Behind the Seen (2004) by Charles Koppelman. The problem that a film editor faces is a heightened version of what every artist confronts. Given a large body of raw material, how do you give it a logical shape and pare it down to its ideal length? The physical and logistical demands of the job—Walter Murch notes that an editor needs a strong back and arms—have resulted in a large body of practical knowledge, and this loving look at Murch’s editing of Cold Mountain using Final Cut Pro is the best guide in existence to what the work entails.


Finishing the Hat (2010) by Stephen Sondheim. Sondheim’s candid, often critical look at his own early lyrics shows the development of a major artist in real time, as he strives to address the basic challenge of conveying information to an audience through song. Cleverness, he finds, only takes you so far: the real art lies in finding a form to fit the content, doing less with more, and navigating the countless tiny decisions that add up to the ultimate effect. “All in the service of clarity,” Sondheim concludes, “without which nothing else matters.”

Field Notes on Science and Nature (2011) by Michael Canfield. Much of the creative process boils down to keeping good notes, which both serve to record one’s observations and to lock down insights that might seem irrelevant now but will become crucial later on. Scientists understand this as well as anyone, and there’s an unexpected degree of art in the process of recording data in the field. It’s impossible to read this beautiful book without coming away with new thoughts on how to live more fully through one’s notes, which is where a writer spends half of his or her time.

Looking at the books I’ve cited above, I find that they have two things in common: 1) An emphasis on clarity above all else. 2) A series of approaches to building complex structures out of smaller units. There’s more to writing than this, of course, and much of what authors do intuitively can’t be distilled down to a list of rules. But seeing these basic principles restated in so many different forms only serves as a reminder of how essential they are. Any one of these books can suggest new approaches to old problems, so you can start almost anywhere, and in the end, you find that each one leads into all the rest.

When reality isn’t good enough


Is reality a bore? Well, it depends on who you ask. Edward R. Tufte, in his wonderful book The Visual Display of Quantitative Information, devotes many pages to combating the assumption that data and statistics, being inherently dull, need to be dressed up with graphics and bright colors to catch the reader’s eye. The result, Tufte argues, is chartjunk, ink wasted on flashy design elements that have nothing to do with the information presented. Instead of investing resources in tarting up uninformative numbers, he says, one’s time is much better spent unearthing and analyzing relevant information. The best data, presented simply, will inspire surprise and curiosity, but only if the numbers are interesting and accurate, which requires its own kind of skill, ingenuity, and patience. Tufte sums up his case magnificently: “If the statistics are boring, then you’ve got the wrong numbers. Finding the right numbers requires as much specialized skill—statistical skill—and hard work as creating a beautiful design or covering a complex news story.”

Replace “statistics” with “stories,” and “numbers” with “facts,” and Tufte’s sound advice applies equally well to authors of nonfiction. It rings especially true in light of two recent controversies, both of which center on the question of when, if ever, reality should be manipulated for artistic reasons. One is the release of The Lifespan of a Fact, a book chronicling the five-year struggle between essayist John D’Agata and fact-checker Jim Fingal over the accuracy of an essay finally published by The Believer. The other, of course, is the furor over a recent episode of This American Life, in which Mike Daisey’s account of his visit to a Chinese factory making components for Apple was revealed to have substantial fabrications. These are very different cases, each with its own underlying motivations, but both are rooted in the assumption that reality, by itself, isn’t good enough. This led D’Agata and Daisey to embellish their stories with what might, at best, be termed “artistic” truth, but which can also be seen as the prose equivalent of chartjunk: falsehoods inserted to punch up the uncolorful facts.

D’Agata’s case is arguably the more instructive, because it’s founded on what appears to be a genuine artistic interest in blurring the lines between fiction and nonfiction. The original version of his essay, which uses the real suicide of a young man named Levi Presley as a means of exploring the culture of Las Vegas, contained countless departures from the facts, all purportedly for artistic reasons. Some were minor, such as changing the color of some vans from pink to purple because it scanned better, while others were fundamental: in his first paragraph, D’Agata refers to a series of strange events that occurred on the day of Presley’s suicide, including a tic-tac-toe contest against a chicken—none of which actually took place on the day in question. In other words, his list of unbelievable facts is literally unbelievable, because he made them up. In D’Agata’s hands, truth isn’t stranger than fiction; instead, truth is exactly as strange as fiction, which raises the question of why we should care. In the end, his inability to find the real Las Vegas sufficiently colorful comes off as a failure of will, and the fact that he embellishes facts throughout the essay while keeping Levi Presley’s real name—presumably to gain a free artistic frisson from the circumstances of an actual suicide—seems like a particularly unfortunate case of wanting to have it both ways.

At least D’Agata has some kind of literary philosophy, however misguided, to justify his deviations from the truth (although it should be noted that most readers of The Believer presumably read his article as straight journalism). The same can’t be said of Mike Daisey, who altered the facts in The Agony and the Ecstasy of Steve Jobs to make it sound as if he personally witnessed events that occurred a thousand miles away, and to manufacture completely imaginary incidents for the sake of manipulating the audience. (In retrospect, it’s especially horrifying to hear Daisey’s voice grow soft and choked as he describes an injured factory worker’s first encounter with an iPad, a fictional incident that he describes as if it actually took place.) Daisey’s excuse, unlike D’Agata’s, is an emotional one: he wanted the audience to feel something, to be touched, implying that the true facts of his trip weren’t moving enough. Meanwhile, the legitimate journalism on Chinese factory conditions, as conducted by such reporters as Charles Duhigg and David Barboza of the New York Times, is far more fascinating, and it doesn’t depend on fabricated melodrama to make an impact.

As Tufte says, if the facts are boring, you’re using the wrong facts. But isn’t there a place for the judicious mingling of reality with fiction? Tomorrow, I’ll be talking more about this, and the importance of truth in labeling.

Quote of the Day


Written by nevalalee

March 19, 2012 at 7:50 am
