Posts Tagged ‘Jonah Lehrer’
It’s never easy to predict the future, and on this particular blog, I try to avoid such prognostications, but just this once, I’m going to go out on a limb: I think there’s a good chance that Nate Silver will be Time’s Person of the Year. Silver wasn’t the only talented analyst tracking the statistical side of the presidential race, but he’s by far the most visible, and he serves as the public face for the single most important story of this election: the triumph of information. Ultimately, the polls, at least in the aggregate, were right. Silver predicted the results correctly in all fifty states, and although he admits that his call in Florida could have gone either way, it’s still impressive—especially when you consider that his forecasts of the vote in individual swing states were startlingly accurate, differing overall from the final tally by less than 1.5%. At the moment, Silver is in an enviable position: he’s a public intellectual whose word, at least for now, carries immense weight among countless informed readers, regardless of the subject. And the real question is what he intends to do with this power.
I’ve been reading Silver for years, but after seeing him deliver a talk last week at the Chicago Humanities Festival, I emerged feeling even more encouraged by his newfound public stature. Silver isn’t a great public speaker: his presentation consisted mostly of slides drawn from his new book, The Signal and the Noise, and he sometimes comes across as a guy who spent the last six months alone in a darkened room, only to be thrust suddenly, blinking, into the light. Yet there’s something oddly reassuring about his nerdy, somewhat awkward presence. This isn’t someone like Jonah Lehrer, whose polished presentations at TED tend to obscure the fact that he doesn’t have many original ideas of his own, as was recently made distressingly clear. Silver is the real thing, a creature of statistics and spreadsheets who claims, convincingly, that if Excel were an Olympic sport, he’d be competing on the U.S. team. In person, he’s more candid and profane than in his lucid, often technical blog posts, but the impression one gets is of a man who has far more ideas in his head than he’s able to express in a short talk.
And his example is an instructive one, even to those of us who pay attention to politics only every couple of years, and who don’t have much of an interest in poker or baseball, his two other great obsessions. Silver is a heroic figure in an age of information. In his talk, he pointed out that ninety percent of the information in the world was created over the last two years, which makes it all the more important to find ways of navigating it effectively. With all the data at our disposal, it’s easy to find evidence for any argument we want to make: as the presidential debates made clear, there’s always a favorable poll or study to cite in our favor. Silver may have found his niche in politics, but he’s really an exemplar of how to intelligently read any body of publicly available information. We all have access to the same numbers: the question is how to interpret them, and, even more crucially, how to deal with information that doesn’t support our own beliefs. (Silver admits that he’s generally left of center in his own politics, but I almost wish that he were a closet conservative who was simply reporting the numbers as objectively as he could.)
But the most important thing about Silver is that he isn’t a witch. He predicted the election results better than almost anyone else, but he wasn’t alone: all of the major poll aggregators called the presidential race correctly, often using nothing more complicated than a simple average of polls, which implies that what Silver did was relatively simple, once you’ve made the decision to follow the data wherever it goes. And unlike most pundits, Silver has an enormous incentive to be painstaking in his methods. He knows that his reputation is based entirely on his accuracy, which made the conservative accusation that he was skewing the results seem ludicrous even at the time: he had much more to lose, over the long term, by being wrong. And it makes me very curious about his next move. At his talk, Silver pointed out that politics, unlike finance, was an easy target for statistical rigor: “You can look smart just by being pretty good.” Whether he can move beyond the polls into other fields remains to be seen, but I suspect that he’ll be both smart and cautious. And I can’t wait to see what he does next.
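The averaging the aggregators did really is as simple as it sounds. As a purely hypothetical sketch—the state, the poll numbers, and the `call_state` function below are invented for illustration, not any actual aggregator’s data or method:

```python
# Hypothetical illustration of calling a state from a simple average
# of recent polls. Numbers are invented, not real 2012 polling data.

def call_state(polls):
    """polls: list of (dem_share, rep_share) tuples from recent surveys.

    Returns the leader under a simple unweighted mean, plus the margin.
    """
    dem_avg = sum(p[0] for p in polls) / len(polls)
    rep_avg = sum(p[1] for p in polls) / len(polls)
    margin = round(dem_avg - rep_avg, 2)
    return ("Dem" if dem_avg > rep_avg else "Rep"), margin

# Invented swing-state polls (candidate shares in percent):
ohio_polls = [(50.0, 48.0), (49.0, 49.5), (51.0, 47.0)]
print(call_state(ohio_polls))  # → ('Dem', 1.83)
```

Real aggregators layer refinements on top of this—weighting by sample size, recency, and pollster house effects—but the point stands: the core arithmetic is a mean, and the hard part is the discipline to follow it.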
Back in June, when it was first revealed that Jonah Lehrer had reused some of his own work without attribution on the New Yorker blog, an editor for whom I’d written articles in the past sent me an email with the subject line: “Mike Daisey…Jonah Lehrer?” When he asked if I’d be interested in writing a piece about it, I said I’d give it a shot, although I also noted: “I don’t think I’d lump Lehrer in with Daisey just yet.” And in fact, I’ve found myself writing about Lehrer surprisingly often, in pieces for The Daily Beast, The Rumpus, and this blog. If I’ve returned to Lehrer more than once, it’s because I enjoyed a lot of his early work, was mystified by his recent problems, and took a personal interest in his case because we’re about the same age and preoccupied with similar issues of creativity and imagination. But with the revelation that he fabricated quotes in his book and lied about it, as uncovered by Michael C. Moynihan of Tablet, it seems that we may end up lumping Lehrer in with Mike Daisey after all. And this makes me very sad.
What strikes me now is the fact that most of Lehrer’s problems seem to have been the product of haste. He evidently repurposed material on his blog from previously published works because he wasn’t able to produce new content at the necessary rate. The same factor seems to have motivated his uncredited reuse of material in Imagine. And the Bob Dylan quotes he’s accused of fabricating in the same book are so uninteresting (“It’s a hard thing to describe. It’s just this sense that you got something to say”) that it’s difficult to attribute them to calculated fraud. Rather, I suspect that it was just carelessness: the original quotes were garbled in editing, compression, or revision, with Lehrer forgetting where Dylan’s quote left off and his own paraphrase began. A mistake entered one draft and persisted into the next until it wound up in the finished book. And if there’s one set of errors like this, there are likely to be others—Lehrer’s mistakes just happened to be caught by an obsessive Dylan fan and a very good journalist.
Such errors are embarrassing, but they aren’t hard to understand. I’ve learned from experience that if I quote something in an article, I’d better check it against the source at least twice, because all kinds of gremlins can get their claws into it in the meantime. What sets Lehrer’s example apart is that the error survived until the book was in print, which implies an exceptional amount of sloppiness, and when the mistake was revealed, Lehrer only made it worse by lying. As Daisey recently found out, it isn’t the initial mistake that kills you, but the coverup. If Lehrer had simply granted that he couldn’t source the quote and blamed it on an editing error, it would have been humiliating, but not catastrophic. Instead, he spun a comically elaborate series of lies about having access to unreleased documentary footage and being in contact with Bob Dylan’s management, fabrications that fell apart at once. And while I’ve done my best to interpret his previous lapses as generously as possible, I don’t know if I can do that anymore.
In my piece on The Rumpus, I said that Lehrer’s earlier mistakes were venial sins, not mortal ones. Now that he’s slid into the area of mortal sin—not so much for the initial mistake, but for the lies that followed—it’s unclear what comes next. At the time, I wrote:
Lehrer, who has written so often about human irrationality, can only benefit from this reminder of his own fallibility, and if he’s as smart as he seems, he’ll use it in his work, which until now has reflected wide reading and curiosity, but not experience.
Unfortunately, this is no longer true. I don’t think this is the end of Lehrer’s story: he’s undeniably talented, and if James Frey, of all people, can reinvent himself, Lehrer should be able to do so as well. And yet I’m afraid that there are certain elements of his previous career that will be closed off forever. I don’t think we can take his thoughts on the creative process seriously any longer, now that we’ve seen how his own process was so fatally flawed. There is a world elsewhere, of course. And Lehrer is still so young. But where he goes from here is hard to imagine.
Last week, The Rumpus published an essay I’d written about Jonah Lehrer, the prolific young writer on science and creativity who had been caught reusing portions of previously published articles on his blog at The New Yorker. I defended Lehrer from some of the more extreme charges—for one thing, I dislike the label “self-plagiarism,” which misrepresents what he actually did—and tried my best to understand the reasons behind this very public lapse of judgment. And while only Lehrer really knows what he was thinking, I think it’s fair to conclude, as I do in my essay, that his case is inseparable from the predicament of many contemporary writers, who are essentially required to become nonstop marketers of themselves. The acceleration of all media has produced a ravenous appetite for content, especially online, forcing authors to run a Red Queen’s race to keep up with demand. And when a writer is expected to blog, publish articles, give talks, and produce new books on a regular basis, it’s no surprise if the work starts to suffer.
The irony, of course, is that I’m just as guilty of this as anyone else. I think of myself primarily as a novelist, but over the past couple of years, I’ve found myself wearing a lot of different hats. I blog every day. I work as hard as possible to get interviews, panel discussions, and radio appearances to talk about my work. I’ve been known to use Twitter and Facebook. And I publish a lot of nonfiction, up to and including my essay at The Rumpus itself. I do it mostly because I like it—and I like getting paid for it when I can—but I also do it to get my name out there, along with, hopefully, the title of my book. I suspect that a lot of other writers would say the same thing, and that few guest reviews, essays, or opinion pieces are ever published without some ulterior motive on the part of the author, especially if that author happens to have a novel in stores. And while I think that most readers are aware of this, and adjust their perceptions accordingly, it’s also worth asking what this does to the writer’s own work.
The process of marketing puts any decent writer in a bind. To become a good novelist, you need to develop a skill set centered on solitude and introversion: you have to be physically and emotionally capable of sitting at a desk, alone, without distraction, for weeks or months at a time. The instant your novel comes out, however, you’re suddenly expected to develop the opposite set of skills, becoming extroverted, gregarious, and willing to invest huge amounts of energy into selling yourself in public. Very few writers, aside from the occasional outlier like Gore Vidal or Norman Mailer, have ever seemed comfortable in both roles, a dual demand that creates a real tension in a writer’s life. As I note in my article on Lehrer, the kind of routine required of most mainstream authors these days is antithetical to the kind of solitary, unrewarding activity needed for real creative work. Creativity requires uninterrupted time, silence, and the ability to concentrate on one problem to the exclusion of everything else. Marketing yourself at the same time is more like juggling, or, even better, like spinning plates, with different parts of your life receiving more or less attention until they need a nudge to keep them going.
When an author lets one of the plates fall, as Lehrer has done so publicly, it’s reasonable to ask whether the costs of this kind of career outweigh the rewards. I’ve often wondered about this myself. And the only answer I can give is that none of this is worth doing unless the different parts give you satisfaction for their own sake. There’s no guarantee that any of the work you do will pay off in a tangible way, so if you spend your time on something only for its perceived marketing benefits, the result will be cynical or worse. And my own attitudes about this have changed over time. This blog began, frankly, as an attempt to build an online audience in advance of The Icon Thief, but after blogging every day for almost two years, it’s become something much more—a huge part of my identity as a writer. The same is true, I hope, of my essays and short fiction. No one piece counts for much, but when I stand back and take them all together, I start to dimly glimpse the shape of my career. I wouldn’t have done half of this without the imperatives of the market. And for that, weirdly, I’m grateful.
On Tuesday, in an article in The Daily Beast, I sampled some of the recent wave of books on consciousness and creativity, including Imagine by Jonah Lehrer and The Power of Habit by Charles Duhigg, and concluded that while such books might make us feel smarter, they aren’t likely to make us more creative or rational than we already were. As far as creativity is concerned, I note, there are no easy answers: even the greatest creative geniuses, like Bach, tend to have the same ratio of hits to misses as their forgotten contemporaries, which means that the best way to have a good idea is simply to have as many ideas, good or bad, as possible. And I close my essay with some genuinely useful advice from Dean Simonton, whom I’ve quoted on this blog before: “The best a creative genius can do is to be as prolific as possible in generating products in hope that at least some subset will survive the test of time.”
So does that mean that all other advice on creativity is worthless? I hope not, because otherwise, I’ve been wasting a lot of time on this blog. I’ve devoted countless posts to discussing creativity tools like intentional randomness and mind maps, talking about various methods of increasing serendipity, and arguing for the importance of thinking in odd moments, like washing the dishes or shaving. For my own part, I still have superstitious habits about creativity that I follow every day. I never write a chapter or essay without doing a mind map, for instance—I did the one below before writing the article in the Beast—and I still generate a random quote from Shakespeare whenever I’m stuck on a problem. And these tricks seem to work, at least for me: I always end up with something that would never have occurred to me if I hadn’t taken the time.
Yet the crucial word is that last one. Because the more I think about it, the more convinced I am that every useful creativity tool really boils down to just one thing—increasing the amount of time, and the kinds of time, I spend thinking about a problem. When I do a mind map, for instance, I follow a fixed, almost ritualistic set of steps: I take out a pad of paper, write a keyword or two at the center in marker, and let my pen wander across the page. All these steps take time. Which means that making a mind map generates a blank space of forty minutes or so in which I’m just thinking about the problem at hand. And it’s become increasingly clear to me that it isn’t the mind map that matters; it’s the forty minutes. The mind map is just an excuse for me to sit at my desk and think. (This is one reason why I still make my mind maps by hand, rather than with a software program—it extends the length of the process.)
In the end, the only thing that can generate ideas is time spent thinking about them. (Even apparently random moments of insight are the result of long conscious preparation.) I’ve addressed this topic before in my post about Blinn’s Law, in which I speculate that every work of art—a novel, a movie, a work of nonfiction—requires a certain amount of time to be fully realized, no matter how far technology advances, and that much of what we do as artists consists of finding excuses to sit alone at our desks for the necessary year or so. Nearly every creativity tool amounts to a way of tricking my brain into spending time on a problem, either by giving it a pleasant and relatively undemanding task, like drawing a mind map, or seducing it with a novel image or idea that makes its train of thought momentarily more interesting. But the magic isn’t in the trick itself; it’s in the time that follows. And that’s the secret of creativity.
Yesterday, while talking about my search for serendipity in the New York Times, I wrote: “What the [Times's] recommendation engine thought I might like to see was far less interesting than what other people unlike me were reading at the same time.” The second I typed that sentence, I knew it wasn’t entirely true, and the more I thought about it, the more questions it seemed to raise. Because, really, most readers of the Times aren’t that much unlike me. The site attracts a wide range of visitors, but its ideal audience, the one it targets and the one that embodies how most of its readers probably like to think of themselves, is fairly consistent: educated, interested in politics and the arts, more likely to watch Mad Men than Two and a Half Men, and rather more liberal than otherwise. The “Most Emailed” list isn’t exactly a random sampling of interesting stories, then, but a sort of idealized picture of what the perfect Times subscriber, with equal access to all parts of the paper, is reading at that particular moment.
As a result, the “serendipity” we find there tends to be skewed in predictable ways. For instance, you’re much more likely to see a column by Paul Krugman than by my conservative college classmate Ross Douthat, who may well be a good writer making useful points, but you’d never know it based on how often his columns are shared. (I don’t have any hard numbers to back this up, but I’d guess that Douthat’s columns make the “Most Emailed” list only a fraction of the time.) If I were really in search of true serendipity—that is, to quote George Steiner, if I were trying to find what I wasn’t looking for—I’d read the most-viewed or most-commented articles in, say, the National Review, or, better yet, the National Enquirer, the favorite paper of both Victor Niederhoffer and Nassim Nicholas Taleb. But I don’t. What I really want as a reader, it seems, isn’t pure randomness, but the right kind of randomness. It’s serendipity as curated by the writers and readers of the New York Times, which, while interesting, is only a single slice of the universe of randomness at my disposal.
Is this wrong? Not necessarily. In fact, I’d say there are at least two good reasons to stick to a certain subset of randomness, at least on a daily basis. The first reason has something in common with Brian Uzzi’s fascinating research on the collaborative process behind hit Broadway shows, as described in Jonah Lehrer’s Imagine. What Uzzi discovered is that the most successful shows tended to be the work of teams of artists who weren’t frequent collaborators, but weren’t strangers, either. An intermediate level of social intimacy—not too close, but not too far away—seemed to generate the best results, since strangers struggled to find ways of working together, while those who worked together all the time tended to fall into stale, repetitive patterns. And this strikes me as being generally true of the world of ideas as well. Ideas that are too similar don’t combine in interesting ways, but those that are too far apart tend to uselessly collide. What you want, ideally, is to live in a world of good ideas that want to cohere and set off chains of associations, and for this, an intermediate level of unfamiliarity seems to work the best.
And the second reason is even more important: it’s that randomness alone isn’t enough. It’s good, of course, to seek out new sources of inspiration and ideas, but if done indiscriminately, the result is likely to be nothing but static. Twitter, for instance, is as pure a slice of randomness as you could possibly want, but we very properly try to manage our feeds to include those people we like and find interesting, rather than exposing ourselves to the full noise of the Twitterverse. (That way lies madness.) Even the most enthusiastic proponent of intentional randomness, like me, has to admit that not all sources of information are created equal, and that it’s sometimes necessary to use a trusted home base for our excursions into the unknown. When people engage in bibliomancy—that is, in telling the future by opening a book to a random page—there’s a reason why they’ve historically used texts like Virgil or the Bible, rather than a Harlequin romance: any book would generate the necessary level of randomness, but you need a basic level of richness and meaning as well. What I’m saying, I guess, is that if you’re going to be random, you may as well be systematic about it. And the New York Times isn’t a bad place to start.
I always look at a piece of music like a detective novel. Maybe the novel is about a murder. Well, who committed the murder? Why did he do it? My job is to retrace the story so that the audience feels the suspense. So that when the climax comes, they’re right there with me, listening to my beautiful detective story. It’s all about making people care about what happens next.
Last week, while browsing at Open Books in Chicago, I made one of those serendipitous discoveries that are the main reason I love used bookstores: a vintage copy of The Tangled Bank by Stanley Edgar Hyman, which I picked up for less than seven dollars. Both the author and his work are mostly forgotten these days—Hyman is remembered, if at all, for his marriage to Shirley Jackson—but this book caught my attention right away. It’s an ambitious attempt to consider Darwin, Marx, Frazer, and Freud as imaginative writers who made their arguments using the strategies of narrative artists and storytellers, and as such, it’s a great bedside book, if not completely successful. Hyman obsessively details how books like Das Kapital mimic the tropes of narrative art (“The dramatic movement of Capital consists of four descents into suffering and horror, which we might see as four acts of a drama”) while neglecting the main point: if authors like this are storytellers, it’s because they’ve turned themselves into the protagonists of their own books, with their attempts to impose order on reality as their most enduring literary monuments.
And we may never see such protagonists again. If Darwin or Freud are literary characters as memorable as Pickwick or Hamlet, it’s in the tradition of the solitary man of genius considering the world through the lens of his own experience, a figure who has, of necessity, gone out of fashion in the sciences. As Jonah Lehrer recently pointed out in the New Yorker, the era of the lone genius is over:
Today…science papers by multiple authors contain more than twice as many citations as those by individuals. This trend was even more apparent when it came to so-called “home-run papers”—publications with at least a hundred citations. These were more than six times as likely to come from a team of scientists.
The explanation for this is easy enough to understand: most remaining scientific problems are far too hard for any one person to solve. Scientists are increasingly forced to specialize, and tackling important problems requires a greater degree of collaboration than ever before. This leads to its own kind of creative exhilaration, and perhaps a different model of genius: that of a visionary who can guide and direct a diverse team of talents, like Steve Jobs or Robert Oppenheimer. But it’s unlikely that we’ll ever get to know such thinkers as living men and women, at least not as well as the ones profiled in The Tangled Bank.
Of these four, the one who interests me the most these days is Darwin, whose birthday was this past Sunday. (And while I’m on the subject, if you haven’t picked up a copy of Darwin Slept Here, by my good friend Eric Simons, you really should.) Darwin emerges in his own works as a fascinating figure, a ceaseless experimenter whose work is inseparable from the image of the man himself. One of the pleasures of The Tangled Bank lies in its reminder of how ingenious a scientist Darwin was. To compare the area of geological formations on a topographical map, he cut them out and weighed the paper. He tickled aphids with a fine hair and made artificial leaves for earthworms by rubbing triangular pieces of paper with raw fat. And this impression of Sherlockian thoroughness, of leaving no experimental stone unturned, is more than just a literary delight: it’s an integral part of the persuasiveness of The Origin of Species, which is convincing as an argument largely because we’re so charmed by the author’s voice.
As Daniel C. Dennett has famously argued, Darwinian evolution is probably the best idea of all time, but it’s also impossible to separate the idea from the man, who survives in his own work as one of the great literary characters of the nineteenth century. It’s true that if Darwin, or Alfred Russel Wallace, hadn’t arrived at the principle of natural selection, somebody else would have done so eventually: it’s one of those ideas that seem obvious in retrospect. (After reading The Origin of Species, Thomas Huxley is supposed to have said: “How extremely stupid not to have thought of that!”) But there’s no denying that the force and appeal of the book itself, which Darwin worked on quietly for years, bears a great deal of the credit for the theory’s rapid acceptance, at least among reasonable readers. Without that presentation, and the author’s personality, the history of the world might have been very different. And for that, we have literary genius to thank.
Where do good ideas come from? A recent issue of the New Yorker offers up a few answers, in a fascinating article on the science of groupthink by Jonah Lehrer, who debunks some widely cherished notions about creative collaboration. Lehrer suggests that brainstorming—narrowly defined as a group activity in which a roomful of people generates as many ideas as possible without pausing to evaluate or criticize—is essentially useless, or at least less effective than spirited group debate or working alone. The best kind of collaboration, he says, occurs when people from diverse backgrounds are thrown together in an environment where they can argue, share ideas, or simply meet by chance, and he backs this up with an impressive array of data, ranging from studies of the genesis of Broadway musicals to the legendary Building 20 at MIT, where individuals as different as Amar Bose and Noam Chomsky thrived in an environment in which the walls between disciplines could literally be torn down.
What I love about Lehrer’s article is that its vision of productive group thinking isn’t that far removed from my sense of what writers and other creative artists need to do on their own. Subjecting the ideas generated in brainstorming sessions to a rigorous winnowing process has close parallels to Dean Simonton’s Darwinian model of creativity: quality, he notes, is a probabilistic function of quantity, so the more ideas you have, the better—but only if they’re subjected to the discipline of natural selection. This selection can occur in the writer’s mind, in a group, or in the larger marketplace, but the crucial thing is that it take place at all. Free association or productivity isn’t enough without that extra step of revision, or rendering, which in most cases requires a strong external point of view. Hence the importance of outside readers and editors to every writer, no matter how successful.
The premise that creativity flowers most readily from interactions between people from different backgrounds has parallels in one’s inner life as well. In The Act of Creation, Arthur Koestler concludes that bisociation, or the intersection of two unrelated areas of knowledge in unexpected ways, is the ultimate source of creativity. On the highest plane, the most profound innovations in science and the arts often occur when an individual of genius changes fields. On a more personal level, nearly every good story idea I’ve ever had came from the juxtaposition of two previously unrelated concepts, either done on purpose—as in my focused daydreaming with science magazines, which led to stories like “Kawataro,” “The Boneless One,” and “Ernesto”—or by accident. Even accidents, however, can benefit from careful planning, as in the design of the Pixar campus, as conceived by Steve Jobs, in which members of different departments have no choice but to cross paths on their way to the bathroom or cafeteria.
Every creative artist needs to find ways of maximizing this sort of serendipity in his or her own life. My favorite personal example is my own home library: partially out of laziness, my bookshelves have always been a wild jumble of volumes in no particular order, an arrangement that sometimes makes it hard to find a specific book when I need it, but also leads to serendipitous arrangements of ideas. I’ll often be looking for one book when another catches my eye, even if I haven’t read it in years, which takes me, in turn, in unexpected directions. Even more relevant to Lehrer’s article is the importance of talking to people from different fields: writers benefit enormously from working around people who aren’t writers, which is why college tends to be a more creatively fertile period than graduate school. “It is the human friction,” Lehrer concludes, “that makes the sparks.” And we should all arrange our lives accordingly.