Posts Tagged ‘Sherlock Holmes’
A few days ago, Bob Mankoff, the cartoon editor of The New Yorker, devoted his weekly email newsletter to the subject of “The Great Clichés.” A cliché, as Mankoff defines it, is a restricted comic situation “that would be incomprehensible if the other versions had not first appeared,” and he provides a list of examples that should ring bells for all readers of the magazine, from the ubiquitous “desert island” to “The-End-Is-Nigh Guy.” Here are a few of my favorites:
Atlas holding up the world; big fish eating little fish; burglars in masks; cave paintings; chalk outline at crime scene; crawling through desert; galley slaves; guru on mountain; mobsters and victim with cement shoes; man in stocks; police lineup; two guys in horse costume.
Inevitably, Mankoff’s list includes a few questionable choices, while also omitting what seem like obvious contenders. (Why “metal detector,” but not “Adam and Eve”?) But it’s still something that writers of all kinds will want to clip and save. Mankoff doesn’t make the point explicitly, but most gag artists probably keep a similar list of clichés as a starting point for ideas, as we read in Mort Gerberg’s excellent book Cartooning:
List familiar situations—clichés. You might break them down into categories, like domestic (couple at breakfast, couple watching television); business (boss berating employee, secretary taking dictation); historic (Paul Revere’s ride, Washington crossing the Delaware); even famous cartoon clichés (the desert island, the Indian snake charmer)…Then change something a little bit.
As it happened, when I saw Mankoff’s newsletter, I had already been thinking about a far more harmful kind of comedy cliché. Last week, Kal Penn went on Twitter to post some of the scripts from his years auditioning as a struggling actor, and they amount to an alternative list of clichés kept by bad comedy writers, consciously or otherwise: “Gandhi lookalike,” “snake charmer,” “foreign student.” One character has a “slight Hindi accent,” another is a “Pakistani computer geek who dresses like Beck and is in a perpetual state of perspiration,” while a third delivers dialogue that is “peppered with Indian cultural references…[His] idiomatic conversation is hit and miss.” A typical one-liner: “We are propagating like flies on elephant dung.” One script describes a South Asian character’s “spastic techno pop moves,” with Penn adding that “the big joke was an accent and too much cologne.” (It recalls the Morrissey song “Bengali in Platforms,” which included the notorious line: “Life is hard enough when you belong here.” You could amend it to read: “Being a comedy writer is hard enough when you belong here.”) Penn closes by praising shows with writers “who didn’t have to use external things to mask subpar writing,” which cuts to the real issue here. The real person in “a perpetual state of perspiration” isn’t the character, but the scriptwriter. Reading the teleplay for an awful sitcom is a deadening experience in itself, but it’s even more depressing to realize that in most cases, the writer is falling back on a stereotype to cover up the desperate unfunniness of the writing. When Penn once asked if he could play a role without an accent, in order to “make it funny on the merits,” he was told that he couldn’t, probably because everybody else knew that the merits were nonexistent.
So why is one list harmless and the other one toxic? In part, it’s because we’ve caught them at different stages of evolution. The list of comedy conventions that we find acceptable is constantly being culled and refined, and certain art forms are slightly in advance of the others. Because of its cultural position, The New Yorker is particularly subject to outside pressures, as it learned a decade ago with its Obama terrorist cover—which demonstrated that there are jokes and images that aren’t acceptable even if the magazine’s attitude is clear. Turn back the clock, and Mankoff’s list would include conventions that probably wouldn’t fly today. Gerberg’s list, like Penn’s, includes “snake charmer,” which Mankoff omits, along with “Cowboys and Indians,” a cartoon perennial that seems to be disappearing. And it can be hard to reconstruct this history, because the offenders tend to be consigned to the memory hole. When you read a lot of old magazine fiction, as I do, you inevitably find racist stereotypes that would be utterly unthinkable today, but most of the stories in which they appear have long since been forgotten. (One exception, unfortunately, is the Sherlock Holmes short story “The Adventure of the Three Gables,” which opens with a horrifying racial caricature that most Holmes fans must wish didn’t exist.) If we don’t see such figures as often today, it isn’t necessarily because we’ve become more enlightened, but because we’ve collectively agreed to remove certain figures from the catalog of stock comedy characters, while papering over their use in the past. A list of clichés is a snapshot of a culture’s inner life, and we don’t always like what it says. The demeaning parts still offered to Penn and actors of similar backgrounds have survived for longer than they should have, but sitcoms that trade in such stereotypes will be unwatchable in a decade or two, if they haven’t already been consigned to oblivion.
Of course, most comedy writers aren’t thinking in terms of decades, but about getting through the next five minutes. And these stereotypes endure precisely because they’re seen as useful, in a shallow, short-term kind of way. There’s a reason why such caricatures are more visible in comedy than in drama: comedy is simply harder to write, but we always want more of it, so it’s inevitable that writers on a deadline will fall back on lazy conventions. The really insidious thing about these clichés is that they sort of work, at least to the extent of being approved by a producer without raising any red flags. Any laughter that they inspire is the equivalent of empty calories, but they persist because they fill a cynical need. As Penn points out, most writers wouldn’t bother with them at all if they could come up with something better. Stereotypes, like all clichés, are a kind of fallback option, a cheap trick that you deploy if you need a laugh and can’t think of another way to get one. Clichés can be a precious commodity, and all writers resort to them occasionally. They’re particularly valuable for gag cartoonists, who can’t rely on a good idea from last week to fill the blank space on the page—they’ve got to produce, and sometimes that means yet another variation on an old theme. But there’s a big difference between “Two guys in a horse costume” and “Gandhi lookalike.” Being able to make that distinction isn’t a matter of political correctness, but of craft. The real solution is to teach people to be better writers, so that they won’t even be tempted to resort to such tired solutions. This might seem like a daunting task, but in fact, it happens all the time. A cliché factory operates on the principle of supply and demand. And it shuts down as soon as people no longer find it funny.
In A Study in Scarlet, the first Sherlock Holmes adventure, there’s a celebrated passage in which Watson tries to figure out his mystifying roommate. At this point in their relationship, he doesn’t even know what Holmes does for a living, and he’s bewildered by the gaps in his new friend’s knowledge, such as his ignorance of the Copernican model of the solar system. When Watson informs him that the earth goes around the sun, Holmes says: “Now that I do know it, I shall do my best to forget it.” He tells Watson that the human brain is like “a little empty attic,” and that it’s a mistake to assume that the room has elastic walls, concluding: “If we went round the moon it would not make a pennyworth of difference to me or to my work.” In fact, it’s clear that he’s gently pulling Watson’s leg: Holmes certainly shows plenty of practical astronomical knowledge in stories like “The Musgrave Ritual,” and he later refers casually to making “allowance for the personal equation, as the astronomers put it.” At the time, Watson wasn’t in on the joke, and he took it all at face value when he made his famous list of Holmes’s limitations. Knowledge of literature, philosophy, and astronomy was estimated as “nil,” while botany was “variable,” geology was “practical, but limited,” chemistry was “profound,” and anatomy—in an expression that I’ve always loved—was “accurate, but unsystematic.”
But the evaluation that has probably inspired the most commentary is “Knowledge of Politics—Feeble.” Ever since, commentators have striven mightily to reconcile this with their conception of Holmes, which usually means forcing him into the image of their own politics. In Sherlock Holmes: Fact or Fiction?, T.S. Blakeney observes that Holmes takes no interest, in “The Bruce-Partington Plans,” in “the news of a revolution, of a possible war, and of an impending change of government,” and he concludes:
It is hard to believe that Holmes, who had so close a grip on realities, could ever have taken much interest in the pettiness of party politics, nor could so strong an individualist have anything but contempt for the equalitarian ideals of much modern sociological theory.
S.C. Roberts, in “The Personality of Sherlock Holmes,” objected to the latter point, arguing that Holmes’s speech in “The Naval Treaty” on English boarding schools—“Capsules with hundreds of bright little seeds in each, out of which will spring the wiser, better England of the future”—is an expression of Victorian liberalism at its finest. Roberts writes:
It is perfectly true that the clash of political opinions and of political parties does not seem to have aroused great interest in Holmes’s mind. But, fundamentally, there can be no doubt that Holmes believed in democracy and progress.
In reality, Holmes’s politics are far from a mystery. As the descendant of “country squires,” he rarely displayed anything less than a High Tory respect for the rights of landed gentry, and he remained loyal to the end to Queen Victoria, the “certain gracious lady in whose interests he had once been fortunate enough to carry out a small commission.” He was obviously an individualist in his personal habits, in the venerable tradition of British eccentrics, which doesn’t mean that his political views—as some have contended—were essentially libertarian. Holmes had a very low regard for the freedom of action of the average human being, and with good reason. The entire series was predicated on the notion that men and women are totally predictable, moving within their established courses so reliably that a trained detective can see into the past and forecast the future. As someone once noted, Holmes’s deductions are based on a chain of perfectly logical inferences that would have been spoiled by a single mistake on the part of the murderer. Holmes didn’t particularly want the world to change, because it was the familiar canvas on which he practiced his art. (His brother Mycroft, after all, was the British government.) The only individuals who break out of the pattern are criminals, and even then, it’s a temporary disruption. You could say that the entire mystery genre is inherently conservative: it’s all about the restoration of order, and in the case of Holmes, it means the order of a world, in Vincent Starrett’s words, “where it is always 1895.”
I love Sherlock Holmes, and in large part, it’s the nostalgia for that era—especially by those who never had to live with it or its consequences—that makes the stories so appealing. But it’s worth remembering what life was really like at the end of the nineteenth century for those who weren’t as fortunate. (Arthur Conan Doyle identified, incidentally, as a Liberal Unionist, a forgotten political party that was so muddled in its views that it inspired a joke in The Importance of Being Earnest: “What are your politics?” “Well, I am afraid I really have none. I am a Liberal Unionist.” And there’s no question that Conan Doyle believed wholeheartedly in the British Empire and all it represented.) Over the last few months, there have been times when I’ve thought approvingly of what Whitfield J. Bell says in “Holmes and History”:
Holmes’s knowledge of politics was anything but weak or partial. Of the hurly-burly of the machines, the petty trade for office and advantage, it is perhaps true that Holmes knew little. But of politics on the highest level, in the grand manner, particularly international politics, no one was better informed.
I can barely stand to look at a newspaper these days, so it’s tempting to take a page from Holmes and ignore “the petty trade for office and advantage.” And I often do. But deep down, it implies an acceptance of the way things are now. And it seems a little feeble.
“Original discoveries cannot be made casually, not by anyone at any time or anywhere,” the great biologist Edward O. Wilson writes in Letters to a Young Scientist. “The frontier of scientific knowledge, often referred to as the cutting edge, is reached with maps drawn by earlier scientists…Somewhere in these vast unexplored regions you should settle.” This seems like pretty good career advice for scientists and artists alike. But then Wilson makes a striking observation:
But, you may well ask, isn’t the cutting edge a place only for geniuses? No, fortunately. Work accomplished on the frontier defines genius, not just getting there. In fact, both accomplishments along the frontier and the final eureka moment are achieved more by entrepreneurship and hard work than by native intelligence. This is so much the case that in most fields most of the time, extreme brightness may be a detriment. It has occurred to me, after meeting so many successful researchers in so many disciplines, that the ideal scientist is smart only to an intermediate degree: bright enough to see what can be done but not so bright as to become bored doing it.
At first glance, this may not seem all that different from Martin A. Schwartz’s thoughts on the importance of stupidity, which I quoted here last week. In fact, they’re two separate observations—although they turn out to be related in one important respect. Schwartz is talking about “absolute stupidity,” or our collective ignorance in the face of the unknown, and he takes pains to distinguish it from the “relative stupidity” that differentiates students in the same college classes. And while Wilson isn’t talking about relative stupidity here, exactly, he’s certainly discussing relative intelligence, or the idea that the best scientists might be just a little bit less bright than their smartest peers in school. As he goes on to observe:
What, then, of certified geniuses whose IQs exceed 140, and are as high as 180 or more? Aren’t they the ones who produce the new groundbreaking ideas? I’m sure some do very well in science, but let me suggest that perhaps, instead, many of the IQ-brightest join societies like Mensa and work as auditors and tax consultants. Why should the rule of optimum medium brightness hold? (And I admit this perception of mine is only speculative.) One reason could be that IQ geniuses have it too easy in their early training. They don’t have to sweat the science courses they take in college. They find little reward in the necessarily tedious chores of data-gathering and analysis. They choose not to take the hard roads to the frontier, over which the rest of us, the lesser intellectual toilers, must travel.
In other words, the real geniuses are reluctant to take on the voluntary stupidity that science demands, and they’re more likely to find sources of satisfaction that don’t require them to constantly confront their own ignorance. This is a vast generalization, of course, but it seems to square with experience. I’ve met a number of geniuses, and what many of them have in common is a highly pragmatic determination to make life as pleasant for themselves as possible. Any other decision, in fact, would call their genius into doubt. If you can rely unthinkingly on your natural intelligence to succeed in a socially acceptable profession, or to minimize the amount of work you have to do at all, you don’t have to be a genius to see that this is a pretty good deal. The fact that Marilyn vos Savant—who allegedly had the highest tested intelligence ever recorded—became a columnist for Parade might be taken as a knock against her genius, but really, it’s the most convincing proof of it that I can imagine. The world’s smartest person should be more than happy to take a cushy gig at a Sunday supplement magazine. Most of the very bright endure their share of miseries during childhood, and their reward, rather than more misery, might as well be an adult life that provides intellectual stimulation in emotional safety. This is why I’ve always felt that Mycroft Holmes, Sherlock’s smarter older brother, knew exactly how his genius ought to be used. As Sherlock notes drily in “The Adventure of the Bruce-Partington Plans”: “Mycroft draws four hundred and fifty pounds a year, remains a subordinate, has no ambitions of any kind, will receive neither honor nor title, but remains the most indispensable man in the country.”
Yet it’s Sherlock, who was forced to leave the house to find answers to his problems, whom we love more. (He’s also been held up as an exemplar of the perfect scientist.) Mycroft is hampered by both his physical laziness and his mental quickness: when a minister comes to him with a complicated problem involving “the Navy, India, Canada, and the bimetallic question,” Mycroft can provide the answer “offhand,” which doesn’t give him much of an incentive to ever leave his office or the Diogenes Club. As Holmes puts it in “The Greek Interpreter”:
You wonder…why it is that Mycroft does not use his powers for detective work. He is incapable of it…I said that he was my superior in observation and deduction. If the art of the detective began and ended in reasoning from an armchair, my brother would be the greatest criminal agent that ever lived. But he has no ambition and no energy. He will not even go out of his way to verify his own solution, and would rather be considered wrong than take the trouble to prove himself right.
Mycroft wasn’t wrong, either. He seems to have lived a very comfortable life. But it’s revealing that Conan Doyle gave the real adventures to the brother with the slightly less scintillating intelligence. In art, just as in science, technical facility can prevent certain artists from making real discoveries. The ones who have to work at it are more likely to find something real. But we can also raise a glass to Mycroft, Marilyn, and the geniuses who are smart enough not to make it too hard on themselves.
When Aleksandr Solzhenitsyn was imprisoned in the Soviet gulag, he was forced to deal with a challenge that modern writers rarely have to confront—the problem of memorization. He wanted to keep writing, but was unable to put anything on paper, which would be confiscated and read by the guards. Here’s the solution that he found, as described in The Gulag Archipelago:
I started breaking matches into little pieces and arranging them on my cigarette case in two rows (of ten each, one representing units and the others tens). As I recited the verses to myself, I displaced one bit of broken match from the units row for every line. When I shifted ten units I displaced one of the “tens”…Every fiftieth and every hundredth line I memorized with special care, to help me keep count. Once a month I recited all that I had written. If the wrong line came out in place of one of the hundreds and fifties, I went over it all again and again until I caught the slippery fugitives.
In the Kuibyshev Transit Prison I saw Catholics (Lithuanians) busy making themselves rosaries for prison use…I joined them and said that I, too, wanted to say my prayers with a rosary but that in my particular religion I needed a hundred beads in a ring…that every tenth bead must be cubic, not spherical, and that the fiftieth and the hundredth beads must be distinguishable at a touch.
The Lithuanians were impressed, Solzhenitsyn says, by his “religious zeal,” and they agreed to make a rosary to his specifications, fashioning the beads out of pellets of bread and coloring them with burnt rubber, tooth powder, and disinfectant. (Later, when Solzhenitsyn realized that twenty beads were enough, he made them himself out of cork.) He concludes:
I never afterward parted with the marvelous present of theirs; I fingered and counted my beads inside my wide mittens—at work line-up, on the march to and fro from work, at all waiting times; I could do it standing up, and freezing cold was no hindrance. I carried it safely through the search points, in the padding of my mittens, where it could not be felt. The warders found it on various occasions, but supposed that it was for praying and let me keep it. Until the end of my sentence (by which time I had accumulated 12,000 lines) and after that in my places of banishment, this necklace helped me write and remember.
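Out of curiosity, the counting scheme Solzhenitsyn describes (a units row and a tens row in a ring of one hundred, with every fiftieth and hundredth line singled out for special care) can be sketched in a few lines of Python. The class and method names here are my own invention, not anything in the text:

```python
class LineCounter:
    """Simulates the two rows of broken matches: units and tens, in a ring of 100."""

    def __init__(self):
        self.total = 0  # lines composed and memorized so far

    def compose_line(self):
        """Tally one more line; flag the checkpoints Solzhenitsyn singled out."""
        self.total += 1
        position = self.total % 100 or 100  # place within the current hundred
        if position in (50, 100):
            return f"line {self.total}: memorize with special care"
        return f"line {self.total}: units={position % 10}, tens={position // 10}"


counter = LineCounter()
for _ in range(49):
    counter.compose_line()
print(counter.compose_line())  # the fiftieth line is flagged as a checkpoint
```

The point of the checkpoints, as the passage explains, is error correction: if a recitation goes wrong at a fifty or hundred mark, the writer knows which block of lines to go over again.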
Ever since I first read this story, I’ve been fascinated by it, and I’ve occasionally found myself browsing the rosaries or prayer beads for sale online, wondering if I should get one for myself, just in case—although in case of what, exactly, I don’t know.
But you don’t need to be in prison to understand the importance of memorization. One of the side effects of our written and interconnected culture is that we’ve lost the ability to hold information in our heads, a trend that has only accelerated as we’ve outsourced more of our inner lives to the Internet. This isn’t necessarily a bad thing: there are good reasons for keeping a lot of this material where it can be easily referenced, without feeling the need to remember it all. (As Sherlock Holmes said in A Study in Scarlet: “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose…It is a mistake to think that that little room has elastic walls and can distend to any extent.” Although given the amount of obscure information that Holmes was able to produce in subsequent stories, it’s possible that he was just kidding.) But there’s also a real loss involved. Oral cultures are marked by a highly developed verbal memory, especially for those whose livelihoods depend on it: a working poet could be expected to know hundreds of songs by heart, and the conventions of poetry itself emerged, in part, as a set of mnemonic devices. Meter, rhyme, and conventional formulas allowed many lines of verse to be recited for a paying audience—or improvised on the spot. Like the songlines of the Aboriginal Australians, an oral poem is a vehicle for the preservation of information, and it takes advantage of the human brain’s ability to retain material in a pattern that hints at what comes next. When we neglect this, we lose touch with some of the reasons that poetry evolved in the first place.
And what makes memorization particularly valuable as a creative tool is the fact that it isn’t quite perfect. When you write something down, it tends to become fixed, both physically and psychologically. (Joan Didion must have had something like this in mind when she said: “By the time you’ve laid down the first two sentences, your options are all gone.”) An idea in the brain, by contrast, remains fluid, malleable, and vital. Each time you go back to revisit it, whether using a rosary or some other indexical system, you aren’t just remembering it, but to some extent recreating it, and you’ll never get it exactly right. But just as natural selection exists because of the variations that arise from errors of transcription, a creative method that relies on memory is less accurate but more receptive to happy accidents than one that exists on the page. A line of poetry might change slightly each time we call it up, but the core idea remains, and the words that survive from one iteration to the next have persisted, by definition, because they’re memorable. We find ourselves revising and reworking the result because we have no choice, and in the process, we keep it alive. The danger, of course, is that if we don’t keep notes, any ideas we have are likely to float away without ever being realized—a phenomenon that every writer regards with dread. What we need is a structure that allows us to assign an order to the ideas in our head while preserving their ripe state of unwrittenness. Solzhenitsyn’s rosary, which was forced on him by necessity, was one possible answer, but there are others. Tomorrow, I’ll discuss another method that I’ve been using with excellent results, and which relies on a priceless mnemonic tool that we all take for granted: the alphabet.
Note: Spoilers follow for the season finale of Hannibal.
When it comes to making predictions about television shows, my track record is decidedly mixed. I was long convinced, for instance, that Game of Thrones would figure out a way to keep Oberyn Martell around, just because he was such fun to watch, and to say I was wrong about this is something of an understatement. Let the record show, however, that I said here months ago that the third season of Hannibal would end with Will Graham getting a knife through his face:
In The Silence of the Lambs, Crawford says that Graham’s face “looks like damned Picasso drew it.” None of the prior cinematic versions of this story have dared to follow through on this climax, but I have a feeling, given the evidence, that Fuller would embrace it. Taking Hugh Dancy’s face away, or making it hard to look at, would be the ultimate rupture between the series and its viewers. Given the show’s cancellation, it may well end up being the very last thing we see. It would be a grim note on which to end. But it’s nothing that this series hasn’t taught us to expect.
This wasn’t the hardest prediction in the world to make. One of the most distinctive aspects of Bryan Fuller’s take on the Lecter saga is his willingness to pursue elements of the original novels that other adaptations have avoided, and the denouement of Red Dragon—with Will lying alone, disfigured, and mute in the hospital—is a downer ending that no other version of this story has been willing to touch.
Of course, that wasn’t what we got here, either. Instead of Will in his hospital bed, brooding silently on the indifference of the natural world to murder, we got a hysterical ballet of death, with Will and Hannibal teaming up to dispatch Dolarhyde like the water buffalo at the end of Apocalypse Now, followed by an operatic plunge over the edge of a cliff, with our two star-crossed lovers locked literally in each other’s arms. And it was a worthy finale for a series that has seemed increasingly indifferent to anything but that unholy love story. The details of Lecter’s escape from prison are wildly implausible, and whatever plan they reflect is hilariously undercooked, even for someone like Jack Crawford, who increasingly seems like the world’s worst FBI agent in charge. Hannibal has never been particularly interested in its procedural elements, and its final season took that contempt to its final, ludicrous extreme. In the novel Red Dragon, Will, despite his demons, is a competent, inspired investigator, and he’s on the verge of apprehending Dolarhyde through his own smarts when his quarry turns the tables. In Fuller’s version, unless I missed something along the way, Will doesn’t make a single useful deduction or take any meaningful action that isn’t the result of being manipulated by Hannibal or Jack. He’s a puppet, and dangerously close to what TV Tropes has called a Woobie: a character whom we enjoy seeing tortured so we can wish the pain away.
None of this should be taken as a criticism of the show itself, in which any narrative shortcomings can hardly be separated from Fuller’s conscious decisions. But as enjoyable as the series has always been—and I’ve enjoyed it more than any network drama I’ve seen in at least a decade—it’s something less than an honest reckoning with its material. As a rule of thumb, the stories about Lecter, including Harris’s own novels, have been the most successful when they stick most closely to their roots as police procedurals. Harris started his career as a crime reporter, and his first three books, including Black Sunday, are masterpieces of the slow accumulation of convincing detail, spiced and enriched by a layer of gothic violence. When you remove that foundation of realistic suspense, you end up with a character who is dangerously uncontrollable: it’s Lecter, not Harris, who becomes the author of his own novel. In The Annotated Dracula, Leslie S. Klinger proposes a joke theory that the real author of that book is Dracula himself, who tracked down Bram Stoker and forced him to make certain changes to conceal the fact that he was alive and well and living in Transylvania. It’s an “explanation” that rings equally true of the novels Hannibal and Hannibal Rising, which read suspiciously as if Lecter were dictating elements of his own idealized autobiography to Harris. (As far as I know, nobody has seen or heard from Harris since Hannibal Rising came out almost a decade ago. Are we sure he’s all right?)
And there are times when Hannibal, the show, plays as if Lecter had gotten an executive producer credit sometime between the second and third seasons. If anything, this is a testament to his vividness: when properly acted and written, he dominates his stories to a greater extent than any fictional character since Sherlock Holmes. (In fact, the literary agent hypothesis—in which the credited writer of a series is alleged to be simply serving as a front—originated among fans of Conan Doyle, who often seemed bewildered by the secondary lives his characters assumed.) But there’s something unsettling about how Lecter inevitably takes on the role of a hero. My favorite stretch of Hannibal was the back half of the second season, which looked unflinchingly at Lecter’s true nature as a villain, cannibal, and destroyer of lives. When he left the entire supporting cast to bleed slowly to death at the end of “Mizumono,” it seemed impossible to regard him as an appealing figure ever again. And yet here we are, with an ending that came across as the ultimate act of fan service in a show that has never been shy about appealing to its dwindling circle of devotees. I can’t exactly blame it for this, especially because the slow dance of seduction between Will and Hannibal has always been a source of sick, irresistible fascination. But we’re as far as ever from an adaptation that would force us to honestly confront why we’re so attached to a man who eats other people, or why we root for him to triumph over lesser monsters who make the mistake of not being so rich, cultured, or amusing. Lecter came into this season like a lion, but he went out, as always, like a lamb.
In evolutionary theory, there’s a concept known as exaptation, in which a trait that evolved because it met a particular need turns out to be just as useful for something else. Feathers, for instance, originally provided a means of regulating body temperature, but they ended up being crucial in the development of flight, and in other cases, a trait that played a secondary or supporting role to another adaptation becomes important enough to serve an unrelated purpose of its own. We see much the same process at work in genre fiction, which is subject to selective pressures from authors, editors, and especially readers. The genres we see today, like suspense or romance, might seem inevitable, but their conventions are really just a set of the recipes or tricks that worked. Such innovations are rarely introduced as a conscious attempt to define a new category of fiction, but as solutions to the problems that a specific narrative presents. The elements we see in Jane Eyre—the isolated house, the orphaned heroine, the employer with a mysterious past—arose from Charlotte Brontë’s confrontation with that particular story, but they worked so well that they were appropriated by a cohort of other writers, working in the now defunct genre of the gothic romance. And I suspect that Brontë would be as surprised as anyone by the uses to which her ideas have been put.
It’s rare for a genre to emerge, as gothic romance did, from a single book; more often, it’s the result of small shifts in a wide range of titles, with each book accidentally providing a useful tool that is picked up and used by others. Repeat the process for a generation or two, and you end up with a set of conventions to which later writers will repeatedly return. And as with other forms of natural selection, a secondary adaptation, introduced to enable something else, can evolve to take over the whole genre. The figure of the detective or private eye is a good example. When you look at the earliest works of mystery fiction we have, from Bleak House to The Moonstone, you often find that the detective plays a minor role: he pops up toward the middle of the story, he nudges the plot along when necessary, and he defers whenever possible to the other characters. Even in A Study in Scarlet, Holmes is only one character among many, and the book drops him entirely in favor of a long flashback about the Mormons. Ultimately, though, the detective—whose initial role was purely functional—evolved to become the central attraction, with the romantic leads who were the focus of attention in Dickens or Collins reduced to the interchangeable supporting players of an Agatha Christie novel. The detective was originally just a way of feathering the story; in the end, he was what allowed the genre to take flight.
You see something similar in suspense’s obsession with modes of transportation. One of the first great attractions of escapist spy fiction lay in the range of locations it presented, allowing readers to travel vicariously to exotic locales. (This hasn’t changed, either: the latest Mission: Impossible movie takes us to Belarus, Cuba, Virginia, Paris, Vienna, Casablanca, and London.) The planes, trains, and automobiles that fill such novels were meant simply to get the characters from place to place. Over time, though, they became set pieces in their own right. I’ve noted elsewhere that what we call an airport novel was literally a story set largely in airports, as characters flew from one exciting setting to another, and you could compile an entire anthology of thriller scenes set on trains or planes. At first, they were little more than connective tissue—you had to show the characters going from point A to point B, and the story couldn’t always cut straight from Lisbon to Marrakesh—but these interstitial scenes ultimately evolved into a point of interest in themselves. They also play a useful structural role. Every narrative requires a few pauses or transitions to gather itself between plot points, and staging such scenes on an interesting form of transport makes it seem as if the story is advancing, even if it’s taking a breather.
In Eternal Empire, for instance, there’s an entire chapter focusing on Ilya and his minder Bogdan as they take the Cassiopeia railway from Paris to Munich. There’s no particular reason it needs to exist at all, and although it contains some meaningful tidbits of backstory, I could have introduced this material in any number of other ways. But I wanted to write a train scene, in part as an homage to the genre, in part because it seemed unrealistic to leave Ilya’s fugitive journey undescribed, and in part because it gave me the setting I needed. There’s a hint of subterfuge, with my two travelers moving from one train station to another under false passports, and a complication in the fact that neither can bring a gun onboard, leaving them both unarmed. Really, though, it’s a scene about two men sizing each other up, and thrillers have long since learned that a train is the best place for such conversations, which is why characters always seem to be coming and going at railway stations. (In the show Hannibal, Will and Chiyo spend most of an episode on an overnight train to Florence, although they easily could have flown. It ends with Chiyo shoving Will onto the tracks, but I suspect that it’s really there to give them a chance to talk, which wouldn’t play as well on a plane.) Ilya and Bogdan have a lot to talk about. And when they get to their destination, they’ll have even more to say…
A few days ago, I was leafing through a Sesame Street coloring book with my daughter when I was hit by a startling realization: I couldn’t remember the color of Bert’s nose. I’ve watched Bert and Ernie for what has to be hundreds of hours—much of it in the last six months—and I know more about them than I do about most characters in novels. But for the life of me, I couldn’t remember what color Bert’s nose was, and I was on the point of looking up a picture in The Sesame Street Dictionary when it finally came to me. As I continued to page through the coloring book, though, I found that I had trouble recalling a lot of little details. Big Bird’s legs, for instance, are orange cylinders segmented by thin contour lines, but what color are those lines? What about Elmo’s nose? Or the stripes on Bert and Ernie’s shirts? In the end, I repeatedly found myself going online to check. And while the last thing I want is to set down rules for what crayons my daughter can and can’t use when coloring her favorite characters, as a writer, and particularly one for whom observation and accuracy of description have always been important, I was secretly chagrined.
They aren’t isolated cases, either. My memory, like everyone else’s, has areas of greater and lesser precision: I have an encyclopedic recall of movie release dates, but have trouble putting a name to a face until I’ve met a person a couple of times. Like most of us, I remember images as chunks of information, and when I try to drill down to recall particular details, I feel like Watson in his exchange with Holmes in “A Scandal in Bohemia”:
“For example, you have frequently seen the steps which lead up from the hall to this room.”
“Well, some hundreds of times.”
“Then how many are there?”
“How many? I don’t know.”
“Quite so! You have not observed. And yet you have seen. That is just my point. Now, I know that there are seventeen steps, because I have both seen and observed.”
And I find it somewhat peculiar—and I’m not alone here—that I was able to remember and locate this quote without any effort, while I still couldn’t tell you the number of steps that lead to the front porch of my own house.
Of course, none of this is particularly surprising, if we’ve thought at all about how our own memories work. A mental image is really more of an impression that disappears like a mirage as soon as we try to get any closer, and this is particularly true of the objects we take most for granted. When we think of our own pasts, it’s the exceptional moments that we remember, while the details of everyday routine seem to evaporate without a trace: I recall all kinds of things about my trip to Peru, but I can barely remember what my average day was like before my daughter was born. This kind of selective amnesia is so common that it doesn’t even seem worth mentioning. But it raises a legitimate question of whether this represents a handicap for a writer, or even disqualifies us from doing interesting work. In a letter to the novelist James Jones, the editor Maxwell Perkins once wrote:
I remember reading somewhere what I thought was a very true statement to the effect that anybody could find out if he was a writer. If he were a writer, when he tried to write out of some particular day, he found that he could recall exactly how the light fell and how the temperature felt, and all the quality of it. Most people cannot do it. If they can do it, they may never be successful in a pecuniary sense, but that ability is at the bottom of writing, I am sure.
For those of us who probably wouldn’t notice if someone quietly switched our toothbrushes, as Faye Wong does to Tony Leung in Chungking Express, this may seem disheartening. But I’d like to believe that memory and observation can be cultivated, like any writing skill, or that we can at least learn how to compensate for our own weaknesses. Some writers, like Nabokov or Updike, were born as monsters of noticing, but for the rest of us, some combination of good notes, close attention to the techniques of the writers we admire, and the directed observation required to solve particular narrative problems can go a long way toward making up the difference. (I emphasize specific problems because it’s more useful, in the long run, to figure out how to describe something within the context of a story than to work on self-contained writing exercises.) Revision, too, can work wonders: a full page of description distilled to a short paragraph, leaving only its essentials, can feel wonderfully packed and evocative. Our memories are selective for a reason: if we remembered everything, we’d have trouble knowing what was important. It’s better, perhaps, to muddle through as best we can, turning on that novelistic degree of perception only when it counts—or, more accurately, when our intuition tells us that it counts. And when it really matters, we can always go back and verify that Bert’s nose, in fact, is orange.