Posts Tagged ‘Amazon’
Peak television and the future of stardom
Earlier this week, I devoured the long, excellent article by Josef Adalian and Maria Elena Fernandez of Vulture on the business of peak television. It’s full of useful insights and even better gossip—and it names plenty of names—but there’s one passage that really caught my eye, in a section about the huge salaries that movie stars are being paid to make the switch to the small screen:
A top agent defends the sums his clients are commanding, explaining that, in the overall scheme of things, the extra money isn’t all that significant. “Look at it this way,” he says. “If you’re Amazon and you’re going to launch a David E. Kelley show, that’s gonna cost $4 million an episode [to produce], right? That’s $40 million. You can have Bradley Whitford starring in it, [who is] gonna cost you $150,000 an episode. That’s $1.5 million of your $40 million. Or you could spend another $3.5 million [to get Costner] on what will end up being a $60 million investment by the time you market and promote it. You can either spend $60 [million] and have the Bradley Whitford show, or $63.5 [million] and have the Kevin Costner show. It makes a lot of sense when you look at it that way.”
With all due apologies to Bradley Whitford, I found this thought experiment fascinating, and not just for the reasons that the agent presumably shared it. It implies, for one thing, that television—which is often said to be overtaking Hollywood in terms of quality—is becoming more like feature filmmaking in another respect: it’s the last refuge of the traditional star. We frequently hear that movie stardom is dead and that audiences are drawn more to franchises than to recognizable faces, so the fact that cable and streaming networks seem intensely interested in signing film stars, in a post-True Detective world, implies that their model is different. Some of it may be due to the fact, as William Goldman once said, that no studio executive ever got fired for hiring a movie star: as the new platforms fight to establish themselves, it makes sense that they’d fall back on the idea of star power, which is one of the few things that corporate storytelling has ever been able to quantify or understand. It may also be because the marketing strategy for television inherently differs from that for film: an online series is unusually dependent on media coverage to stand out from the pack, and signing a star always generates headlines. Or at least it once did. (The Vulture article notes that Woody Allen’s new series for Amazon “may end up marking peak Peak TV,” and it seems a lot like a deal that was made for the sake of the coverage it would produce.)
But the most plausible explanation lies in simple economics. As the article explains, Netflix and the other streaming companies operate according to a “cost-plus” model: “Rather than holding out the promise of syndication gold, the company instead pays its studio and showrunner talent a guaranteed up-front profit—typically twenty or thirty percent above what it takes to make a show. In exchange, it owns all or most of the rights to distribute the show, domestically and internationally.” This limits the initial risk to the studio, but also the potential upside: nobody involved in producing the show itself will see any money on the back end. In addition, it means that even the lead actors of the series are paid a flat dollar amount, which makes them a more attractive investment than they might be for a movie. Most of the major stars in Hollywood earn gross points, which means that they get a cut of the box office receipts before the film turns a profit—a “first dollar” deal that makes the mathematics of breaking even much more complicated. The thought experiment about Bradley Whitford and Kevin Costner only makes sense if you can get Costner at a fixed salary per episode. In other words, movie stars are being actively courted by television because its model is a throwback to an earlier era, when actors were held under contract by a studio without any profit participation, and before stars and their agents negotiated better deals that ended up undermining the economic basis of the star system entirely.
And it’s revealing that Costner, of all actors, appears in this example. His name came up mostly because multiple sources told Vulture that he was offered $500,000 per episode to star in a streaming series: “He passed,” the article says, “but industry insiders predict he’ll eventually say ‘yes’ to the right offer.” But he also resonates because he stands for a kind of movie stardom that was already on the wane when he first became famous. It has something to do with the quintessentially American roles that he liked to play—even JFK is starting to seem like the last great national epic—and an aura that somehow kept him in leading parts two decades after his career as a major star was essentially over. That’s weirdly impressive in itself, and it testifies to how intriguing a figure he remains, even if audiences aren’t likely to pay to see him in a movie. Whenever I think of Costner, I remember what the studio executive Mike Medavoy once claimed to have told him right at the beginning of his career:
“You know,” I said to him over lunch, “I have this sense that I’m sitting here with someone who is going to become a great big star. You’re going to want to direct your own movies, produce your own movies, and you’re going to end up leaving your wife and going through the whole Hollywood movie-star cycle.”
Costner did, in fact, end up leaving his first wife. And if he also leaves film for television, even temporarily, it may reveal that “the whole Hollywood movie-star cycle” has a surprising final act that few of us could have anticipated.
Santa Claus conquers the Martians
Like most households, my family has a set of traditions that we like to observe during the holiday season. A vinyl copy of A Charlie Brown Christmas spends most of December on our record player, and I never feel as if I’m really in the spirit of things until I’ve listened to Kokomo Jo’s Caribbean Christmas—a staple of my own childhood—and The Ventures’ Christmas Album. My wife and I have started watching the Mystery Science Theater 3000 episode Santa Claus, not to be confused with Santa Claus Conquers the Martians, on an annual basis: it’s one of the best episodes that the show ever did, and I’m still tickled by it after close to a dozen viewings. (My favorite line, as Santa deploys a massive surveillance system to spy on the world’s children: “Increasingly paranoid, Santa’s obsession with security begins to hinder everyday operations.”) But my most beloved holiday mainstay is the book Santa Claus and His Elves by the cartoonist and children’s author Mauri Kunnas. If you aren’t Finnish, you probably haven’t heard of it, and readers from other countries might be momentarily bemused by its national loyalties: Santa’s workshop is explicitly located on Mount Korvatunturi in Lapland. As Kunnas writes: “So far away from human habitation is this village that no one is known to have seen it, except for a couple of old Lapps who stumbled across it by accident on their travels.”
I’ve been fascinated by this book ever since I was a child, and I was saddened when it inexplicably went missing for years, probably stashed away in a Christmas box in my parents’ garage. When my mother bought me a new copy, I was overjoyed, and as I began to read it to my own daughter, I was relieved to find that it holds up as well as always. The appeal of Kunnas’s book lies in its marvelous specificity: it treats Santa’s village as a massive industrial operation, complete with print shops, factories, and a fleet of airplanes. Santa Claus himself barely figures in the story at all. The focus is much more on the elves: where they work and sleep, their schools, their hobbies, and above all how they coordinate the immense task of tracking wish lists, making toys, and delivering presents. (Looking at Kunnas’s lovingly detailed illustrations of their warehouses and machine rooms, it’s hard not to be reminded of an Amazon fulfillment center—and although Jeff Bezos comes closer than anyone in history to realizing Santa’s workshop for real, complete with proposed deliveries by air, I’d like to think that the elves get better benefits.) As you leaf through the book, Santa’s operation starts to feel weirdly plausible, and everything from the “strong liniment” that he puts on his back to the sauna that he and the elves enjoy on their return adds up to a picture that could convince even the most skeptical adult.
The result is nothing less than a beautiful piece of speculative fiction, enriched by the tricks that all such writers use: the methodical working out of a seemingly impossible premise, governed by perfect internal logic and countless persuasive details. Kunnas pulls it off admirably. In the classic study Pilgrims Through Space and Time, J.O. Bailey has an entire section titled “Probability Devices,” in which he states: “The greatest technical problem facing the writer of science fiction is that of securing belief…The oldest and perhaps the soundest method for securing suspension of disbelief is that of embedding the strange event in realistic detail about normal, everyday events.” He continues:
[Jules] Verne, likewise, offers minute details. Five Weeks in a Balloon, for instance, figures every pound of hydrogen and every pound of air displaced by it in the filling of the balloon, lists every article packed into the car, and states every detail of date, time (to the minute), and topography.
Elsewhere, I’ve noted that this sort of careful elaboration of hardware is what allows the reader to accept the more farfetched notions that govern the story as a whole—which might be the only thing that my suspense fiction and my short science fiction have in common. Filling out the world I’ve invented with small, accurate touches might be my single favorite part of being a writer, and the availability of such material often makes the difference between a finished story and one that never leaves the conceptual stage.
And when I look back, I wonder if I might not have imbibed much of this from the Santa Claus story, and in particular from Kunnas. Santa, in a way, is one of the first exposures to speculative fiction that any child gets: it inherently strains credulity, but you can’t argue with the gifts that appear under the tree on Christmas Day, and reconciling the implausibility of that story with the concrete evidence requires a true leap of imagination. Speculating that it might be the result of an organized conspiracy of adults is, if anything, an even bigger stretch—just as maintaining secrecy about a faked moon landing for decades would have been a greater achievement than going to the moon for real. Santa Claus, oddly enough, has rarely been a popular subject in science fiction, the Robot Santa on Futurama aside. As Gary Westfahl notes in The Greenwood Encyclopedia of Science Fiction and Fantasy: “As a literature dedicated by its very nature to breaking new ground, perhaps, science fiction is not well suited as a vehicle for ancient time-honored sentiments about the virtues of love and family life.” (It’s no accident that the genre’s most famous treatment of Christmas lies in the devastating ending of Arthur C. Clarke’s “The Star,” which you should read right now if you haven’t before.) But I suspect that those impulses have simply been translated into another form. Robert Anton Wilson once commented on the prevalence of the “greenish-skinned, pointy-eared man” in science fiction and folklore, and he thought that this figure might be a manifestation of the peyote god Mescalito. But I prefer to think that most writers are secretly wondering what the elves have been doing all this time…
The lost library
“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents,” H.P. Lovecraft writes in “The Call of Cthulhu.” He continues:
We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little, but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
Lovecraft’s narrator would be relieved, I think, by the recent blog post by Tim Wu of The New Yorker on the sorry state of Google Books. As originally conceived, this was a project that could have had the most lasting impact of any development of the information revolution—an accurate, instantaneous search of all the books ever published, transforming every page into metadata. Instead, it became mired in a string of lawsuits, failed settlements, and legislative inaction, and it limps on as a shadow of what it might have been.
And while the result might have saved us from going mad in the Lovecraftian sense, it’s an incalculable loss to those of us who believe that we’d profit more than we’d suffer from that kind of universal interconnectedness. I don’t mean to minimize what Google has done: even in its stunted, incomplete form, this is still an amazing tool for scholars and curious characters of all kinds, and we shouldn’t take it for granted. I graduated from college a few years before comprehensive book search—initially developed by Amazon—was widely available, and when I contemplate the difference between the way I wrote my senior thesis and what would be possible now, it feels like an incomprehensible divide. It’s true that easy access to search results can be a mixed blessing: there’s a sense in which the process of digging in libraries and leafing through physical books for a clue purifies the researcher’s brain, preparing it to recognize and act upon that information when it sees it. This isn’t always the case when a search result is just one click away. But for those who have the patience to use a search as a starting point, or as a map of the territory, it’s precious beyond reckoning. Making it fully accessible should be the central intellectual project of our time. Instead, it has stalled, perhaps forever, as publishers and authors dicker over rights issues that pale in comparison to the benefits to be gained from global access to ideas.
I’m not trying to dismiss the fears of authors who are worried about the financial consequences of their work being available for free: these are valid concerns, and a solution that would wipe out any prospect of making a living from writing books—as it already threatens to do with journalism and criticism—would outweigh any possible gain. But if we just focus on books that are out of print and no longer profit any author or publisher in their present form, we’re talking about an enormous step forward. There’s no earthly reason why books that are currently impossible to buy should remain that way. Once something goes out of print, it should be fair game, at least until the copyright holder decides to do something about it. Inhibiting free access to books that can’t possibly do any good to their rights holders now, with an eye to some undefined future time when those rights might have value again, doesn’t help anybody. (If anything, a book that exists in searchable form is of greater potential value than a copy moldering unread on a library shelf.) Any solution to the problem of authors’ rights is inevitably going to be built out of countless compromises and workarounds, so we may as well approach it from a baseline of making everything accessible until we can figure out a way forward, rather than keeping these books out of sight until somebody legislates a solution. If nothing else, opening up those archives more fully would create real pressure to come up with a workable arrangement with authors. As it stands, it’s easier to do nothing.
And the fact that we’ve been waiting so long for an answer, even as Google, Amazon, and others devote their considerable resources to other forms of search, suggests that our priorities are fundamentally out of whack. Enabling a search of libraries is qualitatively different from doing the same for online content: instead of focusing solely on material that has been generated over the last few decades, and in which recent content outweighs the old by orders of magnitude, we’d be opening up the accumulated work of centuries. Not all of it is worth reading, any more than the vast majority of content produced every day deserves our time and attention, but ignoring that huge trove of information—thirty million books or more, with all their potential connections—is an act of appalling shortsightedness. A comprehensive search of books that were otherwise inaccessible, and which didn’t relegate most of the results to a snippet view for no discernible reason, would have a far greater impact on how we think, feel, and innovate than most of the technological projects that suck up money and regulatory attention. It might only appeal to a small slice of readers and researchers, but it happens to be a slice that is disproportionately likely to come up with works and ideas that affect the rest of us. But it requires a voice in its favor as loud as, or louder than, the writers and publishers who object to it. The books are there. They need to be searchable and readable. Anything else just doesn’t scan.
The Bollingen Library and the future of media
About a year ago, I began to notice that many of the books in my home library came from the same place. It all started when I realized that Kenneth Clark’s The Nude and E.H. Gombrich’s Art and Illusion—two of the most striking art books of the century—had originally been delivered as part of the A.W. Mellon Lectures in Fine Art and published by the Bollingen Library. Looking more closely, I found that the Bollingen Foundation, whatever that was, had been responsible for countless other titles that have played important roles in my life and those of other readers: Vladimir Nabokov’s massive translation of and commentary on Eugene Onegin, the Richard Wilhelm edition of the I Ching, D.T. Suzuki’s Zen and Japanese Culture, Jacques Maritain’s Creative Intuition in Art and Poetry, Huntington Cairns’s extraordinary anthology The Limits of Art, and, perhaps most famously, Joseph Campbell’s The Hero With a Thousand Faces. Intrigued, I sought out more books from the Bollingen imprint, looking for used copies online and purchasing them sight unseen. So far, I’ve acquired tomes like The Survival of the Pagan Gods, The Eternal Present, The Gothic Cathedral, and The Demands of Art. Along with a shared concern with the humanities and their role in modern life, they’re all physically beautiful volumes, a delight to hold and browse through, and I hope to acquire more for as long as I can.
Which, when you think about it, is highly unusual. Most of us don’t pay much attention to the publishers of the books we buy: we may subconsciously sense that, say, the Knopf imprint is a mark of quality, but we don’t pick up a novel solely because of the borzoi logo on the spine. (The one big exception may be Taschen, which has built up a reputation for large, indecently attractive coffee table books.) Publishers would love it if we did, of course, just as television networks and movie studios would be happy if we automatically took their brands as a seal of approval. That’s rare in any medium: HBO and Disney have managed it, but not many more. So it’s worth taking a closer look at Bollingen to see how, exactly, it caught my eye. And what we discover is that Bollingen was a philanthropic enterprise, essentially an academic press without the university. It was founded in 1945 by Paul Mellon, heir to the Andrew W. Mellon fortune, as a tribute to his late wife, a devotee of Carl Jung, and while it initially focused on Jungian studies—it was named after Jung’s famous tower and country home in Switzerland—it gradually expanded into a grander project centered on the interconnectedness and relevance of art, history, literature, and psychology. As names like Gombrich and Clark indicate, it arose out of much the same circle as the Warburg Institute in London, which was recently the subject of a loving profile by Adam Gopnik in The New Yorker, but with far greater resources, patronage, and financial support.
In the end, after publishing hundreds of books, sponsoring lectures, and awarding generous stipends to the likes of Marianne Moore and Alexis Leger, the foundation discontinued operations in 1968, noting that the generation it had served was yielding to another set of concerns. And while it may not seem to have much relevance to the problem of media brands today, it offers some surprising lessons. Bollingen started as an act of philanthropy, without any expectation of profit, and arose out of a highly focused, idiosyncratic vision: these were simply books that Mellon and his editors wanted to see, and they trusted that they would find an appreciative audience over time. Which, in many respects, is still how meaningful brands are created or sustained. Matthew Yglesias once referred to Amazon as “a charitable organization being run by elements of the investment community for the benefit of consumers,” and although he was being facetious, he had a point. It’s easy to make fun of startup companies that are obsessed with eyeballs, rather than sustainable profits, as venture capitalist Chris Sacca put it on Alex Blumberg’s Startup podcast:
That’s usually a bad move for an early-stage company—to get cash-flow positive. I have strong opinions about that. Everyone I know who pushes for cash-flow positivity that early stops growing at the rate they should be growing, and gets so anchored by this idea that “we need to keep making money.”
Sacca concludes that you don’t want a “lifestyle business”—that is, a business growing at a pace where you get to take vacations—and that growth for its own sake should be pursued at all costs. And it’s a philosophy that has resulted, infamously, in countless “hot” tech companies that are years, if not a lifetime, away from profitability.
But I think Sacca is half right, and despite the obvious disparity in ideals, he all but circles back around to the impulse behind Bollingen. Venture investors don’t have any desire to run a charitable enterprise, but they end up doing so anyway, at least for the years in which a company is growing, because that’s how brands are made. Someone’s money has to be sacrificed to lay the foundations for anything lasting, both because of the timelines involved and because it’s the only way to avoid the kind of premature compromise that can turn off potential users or readers. We’re living in an age when such investments are more likely to take the form of startup capital than charitable largess, but the principle is fundamentally the same. It’s the kind of approach that can’t survive a short-term obsession with advertisers or page views, and it requires patrons with deep pockets, a tolerance for idiosyncrasy, an eye for quality, and a modicum of patience. (In journalism, the result might look a lot like The Distance, a publication in whose success I have a considerable personal stake.) More realistically, it may take the form of a prestigious but money-losing division within a larger company, like Buzzfeed’s investigative pieces or most of the major movie studios. The reward, as Yglesias puts it, is a claim on “a mighty engine of consumer surplus”—and if we replace “consumer” with “cultural,” we get something very much like the Bollingen Foundation. Bollingen wasn’t interested in growth in itself, but in influencing the entire culture, and in at least one book, The Hero With a Thousand Faces, it went viral to an extent that makes even the most widely shared article seem lame. Like Jung’s tower, it was made for its own sake. And its legacy still endures.
The long tail of everything
Yesterday, I noted that we’re living in a golden age for podcasting, and it isn’t hard to see why. If there’s a sweet spot for production and distribution, we’re in it right now: with the available software and recording tools, it’s easier than ever to put together a podcast at minimal cost, and just about every potential listener owns a laptop or mobile device capable of streaming this kind of content. Many of us are more likely to spend an hour listening to a show online than reading an article that takes the same amount of time to finish, perhaps because we can do it while driving or washing the dishes, or perhaps because the ratio of effort to entertainment seems more attractive. And the podcasts themselves—at least the ones that break through to draw a large audience—are better than ever. No matter what your interests are, there’s probably a show just for you, whether it’s a retrospective commentary on every episode of The X-Files or interviews with obscure supporting actors or advice on your career in customer support. And if you can’t find the podcast of your dreams, there’s nothing stopping you from jumping in and making it yourself.
Of course, in reality, the universe of podcasting looks like most other forms of creative expression: a long tail with a few big blockbusters at one end and thousands of niche offerings at the other. Serial may rack up five million downloads on iTunes, but it’s the outlier of outliers, and efforts by media companies to create “the next Serial” have about as much chance of succeeding as the looming attempt by movie studios to make the next American Sniper. Both are going to inspire a lot of imitators, but few, if any, are likely to recapture the intangible qualities that made either such a success. In the meantime, for most podcasters, the medium doesn’t make for a viable day job—which only means that it’s like every other medium ever. Its relative novelty and low barriers to entry make it alluring in the same way that blogging or self-publishing once were, but it doesn’t make doing it for a living any easier. It simply creates another long tail parallel to, or embedded within, the traditional one, with a handful of breakout hits holding out the promise of success for the rest. Getting in on the rightmost side of the curve has never been simpler, but making it to the left is as hard as ever.
This is part of the reason why I’ve never been attracted to self-publishing, which looks at first like a way of circumventing the gatekeepers who are keeping your book out of stores, but really only pushes the same set of challenges a little further down the line. And the long tail is as close to a constant as we’ll ever see in any creative field, no matter how the marketplace changes. (Or, to put it another way, distribution won’t change the distribution.) It exists for the same reason crack dealers are willing to work for less than minimum wage: lured in by the promise of outsized success or recognition, people will spend years slaving away in community theater or writing short stories for nothing, when a job that offered less enticing rewards would have lost their interest long ago. Going in, we’re all irrational optimists, and we’ll always be more likely to compare ourselves to the few famous names we recognize while ignoring the invisible tail end. On the individual level, it’s a flaw of reasoning, but it’s also essential for keeping the whole enterprise alive. Without that subterranean world of aspiring artists who are basically paying for the privilege of doing the work they love, nothing big would ever emerge.
And each part of the curve depends on every other. It’s often been said that blockbusters are what make the rest possible, whether it’s Buzzfeed financing serious journalism with listicles about cats or The Hunger Games enabling Lionsgate to release the twenty smaller movies it distributes each year. And it’s equally hard to imagine anyone trying to make art for a living without the psychological incentive that the outliers provide. Yet the long tail is also what props up the success stories, and not just for companies, like Amazon, that bake it into their business models. On a cultural level, art is a matter of statistics because its underlying factors are so unpredictable: a masterpiece or great popular entertainment is so unlikely to emerge out of pure calculation that we have no choice but to entrust it to chance and large numbers. The odds of a given work of art breaking out are so low that our best bet is to increase the pool of candidates, even if any individual player operates at a net loss. That may not be much consolation to the writer or podcaster whose sphere of concern is limited to his or her life. But that’s a long tale of its own.
A father’s case for physical books
Over the weekend, I brought my daughter Beatrix to her first bookstore, the Book Table in Oak Park, which is arguably the best independent bookshop in the Chicago area. I love it, first of all, because they keep plenty of my own novels in stock, but also because their selection is fascinating and thoughtfully curated. Every table is covered in modestly discounted copies of new releases, many of which I’d never seen before, with an emphasis on art, design, and books from specialty publishers like Taschen and NYRB Classics. I never leave without making a few wonderful discoveries—or at least adding some potential items to my holiday wish list—and I always emerge with a newfound appreciation of the social importance of independent bookstores. Jason, the owner, has been a good friend and supporter, and I was perfectly honest when I told him that I expect to bring Beatrix back for years to come.
Yet the visit also got me thinking about the role that books will play both in my daughter’s life and in the lives of other children the same age. Bookstores, as we all know, are disappearing across the country; so, too, are bookshelves in private homes, as readers increasingly begin to rely on devices like the Kindle. I’m not against electronic books in any way, and they’re clearly a great option for a lot of adult readers. But I think there’s a risk here. As I’ve said elsewhere, I owe much of my education and my love of reading to scrounging for books on my own parents’ bookshelves. These weren’t books that I was asked, or even permitted, to read; they were simply there, lined up alluringly, and it was only a matter of time before I was reading well over my head. Now, however, we’re looking at the prospect of a generation of children raised in the households of parents who may love reading, but lack an environment of physical books that kids can discover on their own. And I’m concerned about this.
I’ve spoken before about the end of browsing, in which astonishing online resources can give us instant access to the exact book we want, but aren’t nearly as good at giving us books we never knew we needed. For adults, recommendations and social networks go part of the way toward solving the problem, but they aren’t a perfect answer. Time and again, they tend to return to the same handful of established classics or recent books—nearly every reading thread on Reddit seems to center on Vonnegut, Infinite Jest, or House of Leaves—and they rarely find time for the neglected, the unfairly forgotten, or the out of print. It’s an even greater problem for children, who tend to be steered toward approved or required reading, and lack the resources to seek out other books on their own. The tricky thing about buying books for kids is that you never quite know when they’ll make the next big leap. Usually, it happens on its own. And the first step, at least for me, was rummaging unsupervised through an adult bookshelf.
In my case, I’m not too worried about Beatrix, who will inevitably grow up in a house crammed with books, and who has a father who will probably be delighted the first time he catches her reading George Orwell or Stephen King. But I’m still of the mind that the decline of printed books in many homes has consequences that can’t be entirely addressed by reading aloud or stocking the house with books for kids. A Kindle is a beautiful thing, but it doesn’t evoke the same kind of curiosity—or access to randomness—that a fully stocked bookshelf can, and it can’t compete with other kinds of screens. One solution, of course, is to bring children to bookstores or libraries and just let them wander: the moment I first ventured into the grownup section of my hometown library is still one of my most exciting memories. But the best answer is also the simplest one: to keep buying physical books, not for your children, but for yourself.
What I learned from my first novel
Five months ago, my novel The Icon Thief was published by Penguin, and if it seemed at the time like the end of a journey, I see clearly now that it was just the beginning of another. In many ways, the most challenging part of the past year has been adjusting my survival skills as a writer, which had been built up by years of mostly solitary work, to the realities of living with a book in actual stores. And the transition hasn't always been easy. Daniel Kahneman, the Nobel Prize-winning author of Thinking, Fast and Slow, likes to talk about optimistic bias—the delusion that we ourselves are more likely to succeed where countless others have failed—and it's especially endemic among aspiring writers, who are required by definition to be irrationally optimistic. Every unpublished novel is a potential bestseller, just as every unwritten page is a potential masterpiece, and every writer has to learn to live with a real physical book, which won't always live up to those expectations. Here, then, are some lessons that the past few months have taught me:
1. Promotion is great, but placement is better. When The Icon Thief came out, I did everything I could to transform myself from an obsessive introvert, which is basically what every writer has to become in order to finish a book in the first place, to a tireless promoter who could sell his book in person, in print, and in all other media. What I’ve since learned is that while such activities can be gratifying for their own sake, and will sell books here and there, they generally don’t have a lasting effect on a novel’s success. What sells most books, aside from word of mouth, is placement: do readers see the book when they go into stores? Every instance of placement in the big national chains—whether a book is on the front table, in the new releases section, or in a display where browsers are likely to notice it—is a chance to reach that precious audience of readers who are actively looking for something to buy. It’s by far the largest factor in a debut novel’s early sales—more than advertising, more than promotion. And it’s something that is ultimately out of the writer’s hands.
2. Don’t sweat the numbers. During the first week of my novel’s release, like any writer with a pulse, I was checking my Amazon sales ranking every hour. After a while, I was down to every day, then every week, and now I look only rarely, if ever. The same goes for BookScan figures and other measures of the book’s sales: I used to dutifully look over the charts every Friday and wonder why sales were spiking in Houston but flat in Boise, Idaho. In time, though, I found that I was falling into the same trap as those who have plenty of data but not enough real information: I was reading too much into tiny fluctuations and seeing patterns that weren’t there. In the end, such noise only serves as a distraction from the real business of writing, which involves a lot of diligent labor without reference to how your book is doing in Baton Rouge. In the old days, writers would receive sales figures from their publishers on a quarterly or semiannual basis, and I’d argue that they were better off. Turn off the numbers—you’ll be happier in the end.
3. Play the long game. Last month, I learned that my longtime editor at Penguin, who had acquired The Icon Thief and its sequel almost two years ago, was leaving to take another job. At first, I was rocked by the news, but my agent wisely pointed out that the timing here—with one book already out in stores, the second locked and ready to go, and a third a few months from completion—was about as good as it could get, and that changing editors is something that happens to every writer at one point or another. And he was right. Unless you’re the kind of author who has exactly one book to write, you’re going to spend the rest of your career in the writing game, which is just like anything else in life: the same ups and downs happen to everyone, but not necessarily in the same order. When you take the long view, you find that the rules of engagement haven’t really changed from when you were first starting out: you’re still writing for yourself and a few ideal readers. And the more you keep that in mind, the better chance you have of coming out the other end alive.
Are bookstores necessary?
Earlier this week, Slate’s Farhad Manjoo published an essay, in response to Richard Russo’s recent piece in the New York Times, on why it makes more sense for readers to buy books on Amazon.com rather than at local bookstores. Manjoo makes a lot of sound points—Amazon offers better prices and a much wider range of choices, meaning that you can buy two good books for the price of one at an ordinary bookshop—and I don’t intend to try to refute him here. (Plenty of others have done so already.) But as much as I love my Amazon Prime, his article still rubbed me the wrong way. Ultimately, I think I’m irritated by his assumption, which he presents without any particular scrutiny, that shopping in bookstores is an inherently irrational act, like voting or visiting an ashram, that people do just because it makes them feel good. It’s this paragraph, in particular, that annoyed me:
I get that some people like bookstores, and they’re willing to pay extra to shop there. They find browsing through physical books to be a meditative experience, and they enjoy some of the ancillary benefits of physicality (authors’ readings, unlimited magazine browsing, in-store coffee shops, the warm couches that you can curl into on a cold day). And that’s fine: In the same way that I sometimes wander into Whole Foods for the luxurious experience of buying fancy food, I don’t begrudge bookstore devotees spending extra to get an experience they fancy.
As someone who loves going to bookstores more than just about anything else in the world, I’m irked by this condescending tone, which implies that bookstore browsing is a quirk Manjoo is willing to tolerate in others—like being a LARPer, say—but secretly finds faintly absurd. As what Manjoo might term a “bookstore cultist,” I can testify that browsing isn’t just something I “fancy”: it’s an essential part of being an intellectually curious person. For those of us who depend on new ideas for a living, there’s a definite utility to browsing among physical books, to the point where the failure to browse even puts us at a disadvantage. Speaking for myself, I’ve learned countless things while browsing that I never would have found in any other way: a small but crucial subplot in The Icon Thief, for instance, revolving around the Black Dahlia murder, was inspired by a random discovery in a half-price bookshop. Bookstores and libraries are simply the best places in the world to think and dream. And I can’t do that online.
Some of Manjoo’s other points fail to hold water as well. “If you don’t choose your movies based on what the guy at the box office recommends,” Manjoo asks, “why would you choose your books that way?” This conveniently overlooks a couple of facts. First, the universe of books is far wider than that of movies in wide release, so a personal recommendation does carry some weight, as it once did at video stores. Second, and more importantly, we do choose our movies based on what theater owners recommend, albeit indirectly—the movies playing at my local art house theater are only a small subset of the independent or specialty titles out for release at any given time, and have been invisibly curated for us before we even set foot inside. This kind of curating, for better or worse, is also what good independent bookstores do. When I visit the Book Table in Oak Park, for instance, or the Book Cellar in Lincoln Square, I’m guaranteed to see something interesting—like the new edition of Pale Fire, say—that I never would have found on my own.
Of course, I’ve made even more serendipitous discoveries at used bookstores, or the Strand dollar bin, implying that the best curator of all is random chance—and in a form that has no economic advantage whatsoever for the authors involved. Even worse, when I see something interesting at a local bookstore, I tend to do exactly what foes of Amazon’s Price Check promotion have complained about: I’ll check the prices available elsewhere, usually on my phone, and ultimately buy it online or get it from the library. As a reader and browser, then, I’m a mercenary: I’ll browse in one place and buy in another, or buy a used copy that doesn’t benefit the author at all. Obviously, I have mixed feelings about this, and the occasional purchase of a new book at a local bookstore doesn’t do much to assuage my guilt. My only hope, as a writer and browser, is that there are enough irrational book lovers of the type Manjoo derides to keep these bookstores alive. Without them, an intangible but real part of our culture will be lost. It has nothing to do with economics. But it’s very rational indeed.
Googling the rise and fall of literary reputations: the sequel
After playing over the weekend with the new word frequency tool in Google Books, I quickly came to realize that last week’s post barely scratched the surface. It’s fun to compare novelists against other writers in the same category, for example, but what happens when we look at authors in different categories altogether? Here’s what we get, for instance, when we chart two of the most famous literary authors of the latter half of the century against their counterparts on the bestseller list:
The results may seem surprising at first, but they aren’t hard to understand. Books by Philip Roth and John Updike might be outsold by Harold Robbins and Jacqueline Susann in their initial run (the occasional freak like Couples or Portnoy’s Complaint aside), but as they enter the canon, they’re simply talked about more often, by other writers, than their bestselling contemporaries. (Robbins and Susann, by contrast, probably aren’t cited very often outside their own books.) Compared to the trajectory of a canonical author, the graph of a bestseller begins to look less like a mountain and more like a molehill—or a speed bump. But now look here:
Something else altogether seems to be at work in this chart, and it’s only a reminder of the singularity of Stephen King’s career. Soon after his debut—Carrie, ‘Salem’s Lot, The Shining, and The Stand were all published within the same five years—King had overtaken the likes of Robbins and Susann both on the bestseller lists and in terms of cultural impact. Then something even stranger happened: he became canonical. He was prolific, popular, and wrote books that were endlessly referenced within the culture. As a result, his graph looks like no other—an appropriately monstrous hybrid of the bestselling author and serious novelist.
So what happens when we extend the graph beyond the year 2000, which is where the original numbers end? Here’s what we see:
A number of interesting things begin to happen in the last decade. Robbins and Susann look more like speed bumps than ever before. King’s popularity begins to taper off just as he becomes officially canonical—right when he receives lifetime achievement honors from the National Book Awards. And Roth and Updike seem to have switched places in 2004, or just after the appearance of The Plot Against America, which marks the peak, so far, of Roth’s late resurgence.
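The comparisons behind these charts boil down to a simple idea: raw yearly mention counts are normalized against the size of the corpus for each year, then smoothed with a moving average, which is roughly what the Ngram Viewer's smoothing option does. Here's a minimal sketch of that arithmetic in Python—all of the counts below are invented purely for illustration, not real data for any author:

```python
def per_million(mentions, totals):
    """Convert raw yearly counts into mentions per million corpus words."""
    return [m / t * 1_000_000 for m, t in zip(mentions, totals)]

def smooth(series, window=3):
    """Centered moving average, shrinking the window at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Hypothetical data: a bestseller's "speed bump" versus a canonical
# author's slow, steady climb in citations.
years = list(range(1970, 1980))
totals = [50_000_000] * len(years)                    # corpus words per year
bestseller = [5, 40, 90, 60, 30, 15, 8, 5, 4, 3]      # spike, then fade
canonical = [5, 8, 12, 18, 25, 33, 40, 48, 55, 60]    # steady ascent

best_freq = smooth(per_million(bestseller, totals))
canon_freq = smooth(per_million(canonical, totals))

# The bestseller peaks early; the canonical author overtakes it later.
peak_year = years[best_freq.index(max(best_freq))]
crossover = next(y for y, b, c in zip(years, best_freq, canon_freq) if c > b)
```

With these made-up numbers, the bestseller's curve peaks in 1972 and the canonical author's curve crosses above it by 1975—the "mountain versus molehill" shape in miniature.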
Of course, the conclusions I’ve drawn here are almost certainly flawed. There’s no way of knowing, at least not without looking more closely at the underlying data, whether the number of citations of a given author reflects true cultural prominence or something else. And it’s even harder to correlate any apparent patterns—if they’re actually there at all—with particular works or historical events, especially given the lag time of the publishing process. But there’s one chart, which I’ve been saving for last, which is so striking that I can’t help but believe that it represents something real:
This is a chart of the novelists who, according to a recent New York Times poll, wrote the five best American novels of the past twenty-five years: Toni Morrison (Beloved), Don DeLillo (Underworld), John Updike (Rabbit Angstrom), Cormac McCarthy (Blood Meridian), and Philip Roth (American Pastoral). The big news here, obviously, is Morrison’s amazing ascent around 1987, when Beloved was published. It isn’t hard to see why: Beloved was the perfect storm of literary fiction, a bestselling, critically acclaimed novel that also fit beautifully into the college curriculum. Morrison’s decline in recent years has less to do, I expect, with any real fall in her reputation than with a natural settling to more typical levels. (Although it’s interesting to note that the drop occurs shortly after Morrison received the Nobel Prize, thus locking her into the canon. Whether or not this drop is typical of officially canonized authors is something I hope to explore in a later post.)
It might be argued, and rightly so, that it’s unfair to turn literary reputation into such a horse race. But such numbers are going to be an inevitable part of the conversation from now on, and not just in terms of citations. It’s appropriate that Google unveiled this new search tool just as Amazon announced that it was making BookScan sales numbers available to its authors, allowing individual writers to do what I’m doing here, on a smaller and more personal scale. And if there’s any silver lining, it’s this: as the cases of Robbins and Susann remind us, in the end, sales don’t matter. After all, looking at the examples given above, which of these graphs would you want?
“And what does that name have to do with this?”
Note: This post is the thirtieth installment in my author’s commentary for Eternal Empire, covering Chapter 29. You can read the previous installments here.
Earlier this week, in response to a devastating article in the New York Times on the allegedly crushing work environment in Amazon’s corporate offices, Jeff Bezos sent an email to employees that included the following statement:
Predictably, the email resulted in numerous headlines along the lines of “Jeff Bezos to Employees: You Don’t Work in a Dystopian Hellscape, Do You?” Bezos, a very smart guy, should have seen it coming. As Richard Nixon learned a long time ago, whenever you tell people that you aren’t a crook, you’re really raising the possibility that you might be. If you’re concerned about the names that your critics might call you, the last thing you want to do is put words in their mouths—it’s why public relations experts advise their clients to avoid negative language, even in the form of a denial—and saying that Amazon isn’t a soulless, dystopian workplace is a little like asking us not to think of an elephant.
Writers have recognized the negative power of certain loaded terms for a long time, and many works of art go out of their way to avoid such words, even if they’re central to the story. One of my favorite examples is the film version of The Girl With the Dragon Tattoo. Coming off Seven and Zodiac, David Fincher didn’t want to be pigeonholed as a director of serial killer movies, so the dialogue exclusively uses the term “serial murderer,” although it’s doubtful how effective this was. Along the same lines, Christopher Nolan’s superhero movies are notably averse to calling their characters by their most famous names: The Dark Knight Rises never uses the name “Catwoman,” while Man of Steel, which Nolan produced, avoids “Superman,” perhaps following the example of Frank Miller’s The Dark Knight Returns, which indulges in similar circumlocutions. Robert Towne’s script for Greystoke never calls its central character “Tarzan,” and The Walking Dead uses just about every imaginable term for its creatures aside from “zombie,” for reasons that creator Robert Kirkman explains:
Kirkman’s reluctance to call anything a zombie, which has inspired an entire page on TV Tropes dedicated to similar examples, is particularly revealing. A zombie movie can’t use that word because an invasion of the undead needs to feel like something unprecedented, and falling back on a term we know conjures up all kinds of pop cultural connotations that an original take might prefer to avoid. In many cases, avoiding particular words subtly encourages us to treat the story on its own terms. In The Godfather, the term “Mafia” is never uttered—an aversion, incidentally, not shared by the original novel, the working title of which was actually Mafia. This quietly allows us to judge the Corleones according to the rules of their own closed world, and it circumvents any real reflection about what the family business actually involves. (According to one famous story, the mobster Joseph Colombo paid a visit to producer Al Ruddy, demanding that the word be struck from the script as a condition for allowing the movie to continue. Ruddy, who knew that the screenplay only used the word once, promptly agreed.) The Godfather Part II is largely devoted to blowing up the first movie’s assumptions, and when the word “Mafia” is uttered at a senate hearing, it feels like the real world intruding on a comfortable fantasy. And the moment wouldn’t be as effective if the first installment hadn’t been as diligent about avoiding the term, allowing it to build a new myth in its place.
While writing Eternal Empire, I found myself confronting a similar problem. In this case, the offending word was “Shambhala.” As I’ve noted before, I decided early on that the third novel in the series would center on the Shambhala myth, a choice I made as soon as I stumbled across an excerpt from Rachel Polonsky’s Molotov’s Magic Lantern, in which she states that Vladimir Putin had taken a particular interest in the legend. A little research, notably in Andrei Znamenski’s Red Shambhala, confirmed that the periodic attempts by Russia to confirm the existence of that mythical kingdom, carried out in an atmosphere of espionage and spycraft in Central Asia, were a rich vein of material. The trouble was that the word “Shambhala” itself was so loaded with New Age connotations that I’d have trouble digging my way out from under it: a quick search online reveals that it’s the name of a string of meditation centers, a music festival, and a spa with its own line of massage oils, none of which is exactly in keeping with the tone that I was trying to evoke. My solution, predictably, was to structure the whole plot around the myth of Shambhala while mentioning it as little as possible: the name appears perhaps thirty times across four hundred pages. (The mythological history of Shambhala is treated barely at all, and most of the references occur in discussions of the real attempts by Russian intelligence to discover it.) The bulk of those references appear here, in Chapter 29, and I cut them all down as much as possible, focusing on the bare minimum I needed for Maddy to pique Tarkovsky’s interest. I probably could have cut them even further. But as it stands, it’s more or less enough to get the story to where it needs to be. And it doesn’t need to be any longer than it is…
Written by nevalalee
August 20, 2015 at 9:52 am
Posted in Books, Writing
Tagged with Amazon, Andrei Znamenski, Christopher Nolan, Eternal Empire commentary, Frank Miller, Greystoke, Jeff Bezos, Man of Steel, Molotov's Magic Lantern, Rachel Polonsky, Red Shambhala, Robert Kirkman, Robert Towne, The Dark Knight Returns, The Dark Knight Rises, The Girl With the Dragon Tattoo, The Godfather, The Godfather Part II, The Walking Dead