Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘David Thomson’

On a wing and a prayer


“It was the greatest career move in the history of entertainment,” David Thomson writes in an entry in The New Biographical Dictionary of Film. He’s speaking, of course, of Ronald Reagan:

He was a hugely successful and evasive president, as blind to disaster, inquiry, and humiliation as he was to the Constitution. And he was as lucky as he had been a loser in pictures…To paraphrase Gore Vidal, the wisdom and integrity of someone told where to stand and what to say for twenty years were made manifest. The fraudulence of the presidency was revealed so that the office could never quite be honored again.

When I look at these lines now, especially that last sentence, they can start to seem rather quaint. But Reagan has a lot to tell us about Trump, and not simply because he looks so much better by comparison. “An actor is playing the president,” Paul Slansky lamented in The Clothes Have No Emperor, a book—with its painstaking chronology of the unlikely events of the Reagan Administration—that looks increasingly funny, resonant, and frightening these days. Yet the presidency has always been something of a performance. As Malcolm Gladwell recently noted to The Undefeated, most presidents have been white men of a certain age and height:

Viewed statistically it’s absurd. Why would you limit your search for the most important job in the land to this tiny group of people? But it’s an incredibly common thing. We do a category selection before we do individual analysis.

In other words, we cast men who look the part, and then we judge them by how well they fulfill our idea of the role.

Reagan, like Trump, was unusually prone to improvising, or, in Thomson’s words, “deftly feeding the lines and situations of Warner Brothers in the 1940s back into world affairs.” Occasionally, he would tell a story to put himself in a favorable light, as when he made the peculiar claim—to Yitzhak Shamir and Simon Wiesenthal, no less—that he had personally shot documentary film of the concentration camps after World War II. (In reality, Reagan spent the war in Hollywood, where he assisted in processing footage taken by others in Europe.) But sometimes his reasons were harder to pin down. On December 12, 1983, Reagan told a story in a speech to the annual convention of the Congressional Medal of Honor Society:

A B‑17 was coming back across the channel from a raid over Europe, badly shot up by anti‑aircraft; the ball turret that hung underneath the belly of the plane had taken a hit. The young ball‑turret gunner was wounded, and they couldn’t get him out of the turret there while flying. But over the channel, the plane began to lose altitude, and the commander had to order, “Bail out.” And as the men started to leave the plane, the last one to leave—the boy, understandably, knowing he was being left behind to go down with the plane, cried out in terror—the last man to leave the plane saw the commander sit down on the floor. He took the boy’s hand and said, “Never mind, son, we’ll ride it down together.” Congressional Medal of Honor posthumously awarded.

Reagan recounted this story on numerous other occasions. But as Lars-Erik Nelson, the Washington bureau chief for the New York Daily News, subsequently determined, after checking hundreds of Medal of Honor citations from World War II: “It didn’t happen. It’s a Reagan story…The president of the United States went before an audience of three hundred real Congressional Medal of Honor winners and told them about a make‑believe Medal of Honor winner.”

There’s no doubt that Reagan, who often grew visibly moved as he recounted this story, believed that it was true, and it has even been used as a case study in the creation of false memories. Nelson traced it back to a scene in the 1944 movie Wing and a Prayer, as well as to a similar apocryphal item that appeared that year in Reader’s Digest. (The same story, incidentally, later became the basis for an episode of Amazing Stories, “The Mission,” starring Kevin Costner and Kiefer Sutherland and directed by Steven Spielberg. Tony Kushner once claimed that Spielberg’s movies “are the flagship aesthetic statements of Reaganism,” and this is the most compelling point I’ve seen in that argument’s favor.) But the most Trumpian aspect of the entire incident was the response of Reagan’s staff. As the Washington Post reported a few days later:

A determined White House is searching the records of American servicemen awarded the Medal of Honor in an effort to authenticate a disputed World War II story President Reagan told last week at a ceremony honoring recipients of the medal…The White House then began checking records to document the episode. Reagan is said by aides to be certain that he saw the citation exactly as he recounted it. The citations are summarized in a book published by Congress, but none of these summaries seem to fit precisely the episode Reagan described, although some are similar…The White House is now attempting to look beyond the summaries to more detailed accounts to see if one of the episodes may be the one Reagan mentioned. “We will find it,” said Misty Church, a researcher for the White House.

They never did. And the image of White House staffers frantically trying to justify something that the president said off the cuff certainly seems familiar today.

But what strikes me the most about this story is that Reagan himself had nothing to gain from it. Most of Trump’s fabrications are designed to make him look better, more successful, or more impressive than he actually is, while Reagan’s fable is rooted in a sentimental ideal of heroism itself. (It’s hard to even imagine a version of this story that Trump might have told, since the most admirable figure in it winds up dead. As Trump might say, he likes pilots who weren’t shot down.) Which isn’t to say that Reagan’s mythologizing isn’t problematic in itself, as Nelson pointed out:

[It’s] the difference between a make-believe pilot, dying nobly and needlessly to comfort a wounded boy, and the real-life pilots, bombardiers and navigators who struggled to save their planes, their crews and themselves and died trying. It’s the difference between war and a war story.

And while this might seem preferable to Trump’s approach, which avoids any talk of sacrifice in favor of scenarios in which everybody wins, or we stick other people with the cost of our actions, it still closes off higher levels of thought in favor of an appeal to emotion. Reagan was an infinitely more capable actor than Trump, and he was much easier to love, which shouldn’t blind us to what they have in common. They were both winging it. And the most characteristic remark to come out of the whole affair is how Larry Speakes, the White House spokesman under Reagan, responded when asked if the account was accurate: “If you tell the same story five times, it’s true.”

Listen without prejudice


George Michael

In The Biographical Dictionary of Film, David Thomson says of Tuesday Weld: “If she had been ‘Susan Weld’ she might now be known as one of our great actresses.” The same point might hold true of George Michael, who was born Georgios Kyriacos Panayiotou and chose a nom de mike—with its unfortunate combination of two first names—that made him seem frothy and lightweight. If he had called himself, say, George Parker, he might well have been regarded as one of our great songwriters, which he indisputably was. In the past, I’ve called Tom Cruise a brilliant producer who happened to be born into the body of a movie star, and George Michael had the similar misfortune of being a perversely inventive and resourceful recording artist who was also the most convincing embodiment of a pop superstar that anybody had ever seen. It’s hard to think of another performer of that era who had so complete a package: the look, the voice, the sexuality, the stage presence. The fact that he was gay and unable to acknowledge it for so long was an undeniable burden, but it also led him to transform himself into what would have been almost a caricature of erotic assertiveness if it hadn’t been delivered so earnestly. Like Cary Grant, a figure with whom he might otherwise seem to have little in common, he turned himself into exactly what he thought everyone wanted, and he did it so well that he was never allowed to be anything else.

But consider the songs. Michael was a superb songwriter from the very beginning, and “Everything She Wants,” “Last Christmas,” “Careless Whisper,” and “A Different Corner,” all of which he wrote in his early twenties, should be enough to silence any doubts about his talent. His later songs could be exhausting in their insistence on doubling as statements of purpose. But it’s Faith, and particularly the first side of the album and the coda of “Kissing a Fool,” that never fails to fill me with awe. It was a clear declaration that this was a young man, not yet twenty-five, who was capable of anything, and he wasn’t shy about alerting us to the fact: the back of the compact disc reads “Written, Arranged, and Produced by George Michael.” In those five songs, Michael nimbly tackles so many different styles and tones that it threatens to make the creation of timeless pop music seem as mechanical a process as it really is. A little less sex and a lot more irony, and you’d be looking at as skilled a chameleon as Stephin Merritt—which is another comparison that I didn’t think I’d ever make. But on his best day, Michael was the better writer. “One More Try” has meant a lot to me since the moment I first heard it, while “I Want Your Sex” is one of those songs that would sound revolutionary in any decade. When you listen to the Monogamy Mix, which blends all three sections together into a monster track of thirteen minutes, you start to wonder if we’ve caught up to it even now.

George Michael and Andrew Ridgeley

These songs have been part of the background of my life for literally as long as I can remember—the music video for “Careless Whisper” was probably the first one I ever saw, except maybe for “Thriller,” and I can’t have been more than five years old. Yet I never felt like I understood George Michael in the way I thought I knew, say, the Pet Shop Boys, who also took a long time to get the recognition they deserved. (They also settled into their roles as elder statesmen a little too eagerly, while Michael never seemed comfortable with his cultural position at any age.) For an artist who told us what he thought in plenty of songs, he remained essentially unknowable. Part of it was due to that glossy voice, one of the best of its time, especially when it verged on Alison Moyet territory. But it often seemed like just another instrument, rather than a piece of himself. Unlike David Bowie, who assumed countless personas that still allowed the man underneath to peek through, Michael wore his fame, in John Updike’s words, like a mask that ate into the face. His death doesn’t feel like a personal loss to me, in the way that Bowie’s did, but I’ve spent just about as much time listening to his music, even if you don’t count all the times I’ve played “Last Christmas” in an endless loop on Infinite Jukebox.

In the end, it was a career that was bound to seem unfinished no matter when or how it ended. Its back half was a succession of setbacks and missed opportunities, and you could argue that its peak lasted for less than four years. The last album of his that I owned was the oddball Songs from the Last Century, in which he tried on a new role—a lounge singer of old standards—that would have been ludicrous if it hadn’t been so deeply heartfelt. It wasn’t a persuasive gesture, because he didn’t need to sing somebody else’s songs to sound like part of the canon. That was seventeen years ago, or almost half my lifetime. There were long stretches when he dropped out of my personal rotation, but he always found his way back: “Wake Me Up Before You Go-Go” even played at my wedding. “One More Try” will always be my favorite, but the snippet that has been in my head the most is the moment in “Everything She Wants” when Michael just sings: Uh huh huh / Oh, oh / Uh huh huh / Doo doo doo / La la la la… Maybe he’s just marking time, or he wanted to preserve a melodic idea that didn’t lend itself to words, or it was a reflection of the exuberance that Wesley Morris identifies in his excellent tribute in the New York Times: “There aren’t that many pop stars with as many parts of as many songs that are as exciting to sing as George Michael has—bridges, verses, the fillips he adds between the chorus during a fade-out.” But if I were trying to explain what pop music was all about to someone who had never heard it, I might just play this first.

The steady hand


Danny Lloyd in The Shining

Forty years ago, the cinematographer Garrett Brown invented the Steadicam. It was a stabilizer attached to a harness that allowed a camera operator, walking on foot or riding in a vehicle, to shoot the kind of smooth footage that had previously only been possible using a dolly. Before long, it had revolutionized the way in which both movies and television were shot, and not always in the most obvious ways. When we think of the Steadicam, we’re likely to remember virtuoso extended takes like the Copacabana sequence in Goodfellas, but it can also be a valuable tool even when we aren’t supposed to notice it. As the legendary Robert Elswit said recently to the New York Times:

“To me, it’s not a specialty item,” he said. “It’s usually there all the time.” The results, he added, are sometimes “not even necessarily recognizable as a Steadicam shot. You just use it to get something done in a simple way.”

Like digital video, the Steadicam has had a leveling influence on the movies. Scenes that might have been too expensive, complicated, or time-consuming to set up in the conventional manner can be done on the fly, which has opened up possibilities both for innovative stylists and for filmmakers who are struggling to get their stories made at all.

Not surprisingly, there are skeptics. In On Directing Film, which I think is the best book on storytelling I’ve ever read, David Mamet argues that it’s a mistake to think of a movie as a documentary record of what the protagonist does, and he continues:

The Steadicam (a hand-held camera), like many another technological miracle, has done injury; it has injured American movies, because it makes it so easy to follow the protagonist around, one no longer has to think, “What is the shot?” or “Where should I put the camera?” One thinks, instead, “I can shoot the whole thing in the morning.”

This conflicts with Mamet’s approach to structuring a plot, which hinges on dividing each scene into individual beats that can be expressed in purely visual terms. It’s a method that emerges naturally from the discipline of selecting shots and cutting them together, and it’s the kind of hard work that we’re often tempted to avoid. As Mamet adds in a footnote: “The Steadicam is no more capable of aiding in the creation of a good movie than the computer is in the writing of a good novel—both are labor-saving devices, which simplify and so make more attractive the mindless aspects of creative endeavor.” The casual use of the Steadicam seduces directors into conceiving of the action in terms of “little plays,” rather than in fundamental narrative units, and it removes some of the necessity of disciplined thinking beforehand.

Michael Keaton and Edward Norton in Birdman

But it isn’t until toward the end of the book that Mamet delivers his most ringing condemnation of what the Steadicam represents:

“Wouldn’t it be nice,” one might say, “if we could get this hall here, really around the corner from that door there; or to get that door here to really be the door that opens on the staircase to that door there? So we could just move the camera from one to the next?”

It took me a great deal of effort and still takes me a great deal and will continue to take me a great deal of effort to answer the question thusly: no, not only is it not important to have those objects literally contiguous; it is important to fight against this desire, because fighting it reinforces an understanding of the essential nature of film, which is that it is made of disparate shots, cut together. It’s a door, it’s a hall, it’s a blah-blah. Put the camera “there” and photograph, as simply as possible, that object. If we don’t understand that we both can and must cut the shots together, we are sneakily falling victim to the mistaken theory of the Steadicam.

This might all sound grumpy and abstract, but it isn’t. Take Birdman. You might well love Birdman—plenty of viewers evidently did—but I think it provides a devastating confirmation of Mamet’s point. By playing as a single, seemingly continuous shot, it robs itself of the ability to tell the story with cuts, and it inadvertently serves as an advertisement of how most good movies come together in the editing room. It’s an audacious experiment that never needs to be tried again. And it wouldn’t exist at all if it weren’t for the Steadicam.

But the Steadicam can also be a thing of beauty. I don’t want to discourage its use by filmmakers for whom it means the difference between making a movie under budget and never making it at all, as long as they don’t forget to think hard about all of the constituent parts of the story. There’s also a place for the bravura long take, especially when it depends on our awareness of the unfaked passage of time, as in the opening of Touch of Evil—a long take, made without benefit of a Steadicam, that runs the risk of looking less astonishing today because technology has made this sort of thing so much easier. And there’s even room for the occasional long take that exists only to wow us. De Palma has a fantastic one in Raising Cain, which I watched again recently, that deserves to be ranked among the greats. At its best, it can make the filmmaker’s audacity inseparable from the emotional core of the scene, as David Thomson observes of Goodfellas: “The terrific, serpentine, Steadicam tracking shot by which Henry Hill and his girl enter the Copacabana by the back exit is not just his attempt to impress her but Scorsese’s urge to stagger us and himself with bravura cinema.” The best example of all is The Shining, with its tracking shots of Danny pedaling his Big Wheel down the deserted corridors of the Overlook. It’s showy, but it also expresses the movie’s basic horror, as Danny is inexorably drawn to the revelation of his father’s true nature. (And it’s worth noting that much of its effectiveness is due to the sound design, with the alternation of the wheels against the carpet and floor, which is one of those artistic insights that never grows dated.) The Steadicam is a tool like any other, which means that it can be misused. It can be wonderful, too. But it requires a steady hand behind the camera.

The low road to Xanadu


Orson Welles in Citizen Kane

It was a miracle of rare device,
A sunny pleasure-dome with caves of ice!

—Samuel Taylor Coleridge, “Kubla Khan”

A couple of weeks ago, I wrote of Donald Trump: “He’s like Charles Foster Kane, without any of the qualities that make Kane so misleadingly attractive.” If anything, that’s overly generous to Trump himself, but it also points to a real flaw in what can legitimately be called the greatest American movie ever made. Citizen Kane is more ambiguous than it was ever intended to be, because we’re distracted throughout by our fondness for the young Orson Welles. He’s visible all too briefly in the early sequences at the Inquirer; he winks at us through his makeup as an older man; and the aura he casts was there from the beginning. As David Thomson points out in The New Biographical Dictionary of Film:

Kane is less about William Randolph Hearst—a humorless, anxious man—than a portrait and prediction of Welles himself. Given his greatest opportunity, [screenwriter Herman] Mankiewicz could only invent a story that was increasingly colored by his mixed feelings about Welles and that, he knew, would be brought to life by Welles the overpowering actor, who could not resist the chance to dress up as the old man he might one day become, and who relished the young showoff Kane just as he loved to hector and amaze the Mercury Theater.

You can see Welles in the script when Susan Alexander asks Kane if he’s “a professional magician,” or when Kane, asked if he’s still eating, replies: “I’m still hungry.” And although his presence deepens and enhances the movie’s appeal, it also undermines the story that Welles and Mankiewicz set out to tell in the first place.

As a result, the film that Hearst wanted to destroy turned out to be the best thing that could have happened to his legacy—it makes him far more interesting and likable than he ever was. The same factor tends to obscure the movie’s politics. As Pauline Kael wrote in the early seventies in the essay “Raising Kane”: “At some campus showings, they react so gullibly that when Kane makes a demagogic speech about ‘the underprivileged,’ stray students will applaud enthusiastically, and a shout of ‘Right on!’ may be heard.” But in an extraordinary review that was published when the movie was first released, Jorge Luis Borges saw through to the movie’s icy heart:

Citizen Kane…has at least two plots. The first, pointlessly banal, attempts to milk applause from dimwits: a vain millionaire collects statues, gardens, palaces, swimming pools, diamonds, cars, libraries, men and women…The second plot is far superior…At the end we realize that the fragments are not governed by any apparent unity: the detested Charles Foster Kane is a simulacrum, a chaos of appearances…In a story by Chesterton—“The Head of Caesar,” I think—the hero observes that nothing is so frightening as a labyrinth with no center. This film is precisely that labyrinth.

Borges concludes: “We all know that a party, a palace, a great undertaking, a lunch for writers and journalists, an enterprise of cordial and spontaneous camaraderie, are essentially horrendous. Citizen Kane is the first film to show such things with an awareness of this truth.” He might well be talking about the Trump campaign, which is also a labyrinth without a center. And Trump already seems to be preparing for defeat with the same defense that Kane did.

Everett Sloane in Citizen Kane

Yet if we’re looking for a real counterpart to Kane, it isn’t Trump at all, but someone standing just off to the side: his son-in-law, Jared Kushner. I’ve been interested in Kushner’s career for a long time, in part because we overlapped at college, although I doubt we’ve ever been in the same room. Ten years ago, when he bought the New York Observer, it was hard not to think of Kane, and not just because Kushner was twenty-five. It recalled the effrontery in Kane’s letter to Mr. Thatcher: “I think it would be fun to run a newspaper.” And I looked forward to seeing what Kushner would do next. His marriage to Ivanka Trump was a twist worthy of Mankiewicz, who married Kane to the president’s daughter, and as Trump lurched into politics, I wasn’t the only one wondering what Ivanka and Kushner—whose father was jailed after an investigation by Chris Christie—made of it all. Until recently, you could kid yourself that Kushner was torn between loyalty to his wife’s father and whatever else he might be feeling, even after he published his own Declaration of Principles in the Observer, writing: “My father-in-law is not an anti-Semite.” But that’s no longer possible. As the Washington Post reports, Kushner, along with former Breitbart News chief Stephen K. Bannon, personally devised the idea to seat Bill Clinton’s accusers in the family box at the second debate. The plan failed, but there’s no question that Kushner has deliberately placed himself at the center of Trump’s campaign, and that he bears an active, not passive, share of the responsibility for what promises to be the ugliest month in the history of presidential politics.

So what happened? If we’re going to press the analogy to its limit, we can picture the isolated Kane in his crumbling estate in Xanadu. It was based on Hearst Castle in San Simeon, and the movie describes it as standing on the nonexistent desert coast of Florida—but it could just as easily be a suite in Trump Tower. We all tend to surround ourselves with people with whom we agree, whether it’s online or in the communities in which we live, and if you want to picture this as a series of concentric circles, the ultimate reality distortion field must come when you’re standing in a room next to Trump himself. Now that Trump has purged his campaign of all reasonable voices, it’s easy for someone like Kushner to forget that there is a world elsewhere, and that his actions may not seem sound, or even sane, beyond those four walls. Eventually, this election will be over, and whatever the outcome, I feel more pity for Kushner than I do for his father-in-law. Trump can only stick around for so much longer, while Kushner still has half of his life ahead of him, and I have a feeling that it’s going to be defined by his decisions over the last three months. Maybe he’ll realize that he went straight from the young Kane to the old without any of the fun in between, and that his only choice may be to wall himself up in Xanadu in his thirties, with the likes of Christie, Giuliani, and Gingrich for company. As the News on the March narrator says in Kane: “An emperor of newsprint continued to direct his failing empire, vainly attempted to sway, as he once did, the destinies of a nation that had ceased to listen to him, ceased to trust him.” It’s a tragic ending for an old man. But it’s even sadder for a young one.

The excerpt opinion


Norman Mailer

“It’s the rare writer who cannot have sentences lifted from his work,” Norman Mailer once wrote. What he meant is that if a reviewer is eager to find something to mock, dismiss, or pick apart, any interesting book will provide plenty of ammunition. On a simple level of craft, it’s hard for most authors to sustain a high pitch of technical proficiency in every line, and if you want to make a novelist seem boring or ordinary, you can just focus on the sentences that fall between the high points. In his famously savage takedown of Thomas Harris’s Hannibal, Martin Amis quotes another reviewer who raved: “There is not a single ugly or dead sentence.” Amis then acidly observes:

Hannibal is a genre novel, and all genre novels contain dead sentences—unless you feel the throb of life in such periods as “Tommaso put the lid back on the cooler” or “Eric Pickford answered” or “Pazzi worked like a man possessed” or “Margot laughed in spite of herself” or “Bob Sneed broke the silence.”

Amis knows that this is a cheap shot, and he glories in it. But it isn’t so different from what critics do when they list the awful sentences from a current bestseller or nominate lines for the Bad Sex in Fiction Award. I laugh at this along with anyone else, but I also wince a little, because there are few authors alive who aren’t vulnerable to that sort of treatment. As G.K. Chesterton pointed out: “You could compile the worst book in the world entirely out of selected passages from the best writers in the world.”

This is even more true of authors who take considerable stylistic or thematic risks, which usually result in individual sentences that seem crazy or, worse, silly. The fear of seeming ridiculous is what prevents a lot of writers from taking chances, and it isn’t always unjustified. An ambitious novel opens itself up to savaging from all sides, precisely because it provides so much material that can be turned against the author when taken out of context. And it doesn’t need to be malicious, either: even objective or actively sympathetic critics can be seduced by the ease with which a writer can be excerpted to make a case. I’ve become increasingly daunted by the prospect of distilling the work of Robert A. Heinlein, for example, because his career was so long, varied, and often intentionally provocative that you can find sentences to support any argument about him that you want to make. (It doesn’t help that his politics evolved drastically over time, and they probably would have undergone several more transformations if he had lived for longer.) This isn’t to say that his opinions aren’t a fair target for criticism, but any reasonable understanding of who Heinlein was and what he believed—which I’m still trying to sort out for myself—can’t be conveyed by a handful of cherry-picked quotations. Literary biography is useful primarily to the extent that it can lay out a writer’s life in an orderly fashion, providing a frame that tells us something about the work that we wouldn’t know by encountering it out of order. But even that involves a process of selection, as does everything else about a biography. The biographer’s project isn’t essentially different from that of a working critic or reviewer: it just takes place on a larger scale.

John Updike

And it’s worth noting that prolific critics themselves are particularly susceptible to this kind of treatment. When Renata Adler described Pauline Kael’s output as “not simply, jarringly, piece by piece, line by line, and without interruption, worthless,” any devotee of Kael’s work had to disagree—but it was also impossible to deny that there was plenty of evidence for the prosecution. If you’re determined to hate Roger Ebert, you just have to search for the reviews in which his opinions, written on deadline, weren’t sufficiently in line with the conclusions reached by posterity, as when he unforgivably gave only three stars to The Godfather Part II. And there isn’t a single page in the work of David Thomson, who is probably the most interesting movie critic who ever lived, that couldn’t be mined for outrageous, idiotic, or infuriating statements. I still remember a review on The A.V. Club of How to Watch a Movie that quoted lines like this:

Tell me a story, we beg as children, while wanting so many other things. Story will put off sleep (or extinction) and the child’s organism hardly trusts the habit of waking yet.

And this:

You came into this book under deceptive promises (mine) and false hopes (yours). You believed we might make decisive progress in the matter of how to watch a movie. So be it, but this was a ruse to make you look at life.

The reviewer quoted these sentences as examples of the book’s deficiencies, and they were duly excoriated in the comments. But anyone who has really read Thomson knows that such statements are part of the package, and removing them would also deny most of what makes him so fun, perverse, and valuable.

So what’s a responsible reviewer to do? We could start, maybe, by quoting longer or complete sections, rather than sentences in isolation, and by providing more context when we offer up just a line or two. We can also respect an author’s feelings, explicit or otherwise, about what sections are actually important. In the passage I mentioned at the beginning of this post, which is about John Updike, Mailer goes on to quote a few sentences from Rabbit, Run, and he adds:

The first quotation is taken from the first five sentences of the book, the second is on the next-to-last page, and the third is nothing less than the last three sentences of the novel. The beginning and end of a novel are usually worked over. They are the index to taste in the writer.

That’s a pretty good rule, and it ensures that the critic is discussing something reasonably close to what the writer intended to say. Best of all, we can approach the problem of excerpting with a kind of joy in the hunt: the search for the slice of a work that will stand as a synecdoche of the whole. In the book U & I, which is also about Updike, Nicholson Baker writes about the “standardized ID phrase” and “the aphoristic consensus” and “the jingle we will have to fight past at some point in the future” to see a writer clearly again, just as fans of Joyce have to do their best to forget about “the ineluctable modality of the visible” and “yes I said yes I will Yes.” For a living author, that repository of familiar quotations is constantly in flux, and reviewers might approach their work with a greater sense of responsibility if they realized that they were playing a part in creating it—one tiny excerpt at a time.

Astounding Stories #14: The Heinlein Juveniles


Have Space Suit—Will Travel

Note: As I dive into the research process for my upcoming book Astounding: John W. Campbell, Isaac Asimov, Robert A. Heinlein, L. Ron Hubbard, and the Golden Age of Science Fiction, I’ll be taking the opportunity to highlight works within the genre that deserve to be rediscovered, reappraised, or simply enjoyed by a wider audience. You can read the earlier installments here.

“There is a major but very difficult realization that needs to be reached about [Cary] Grant—difficult, that is, for many people who like to think they take the art of film seriously,” David Thomson writes in The New Biographical Dictionary of Film. The realization, he says, is that along with being a great movie star and a beloved style icon, Grant was “the best and most important actor in the history of the cinema.” There’s a comparable realization, I’ve decided, that has to be reached about Robert A. Heinlein. As well as being a cult figure, the first science fiction writer to break through to the mainstream, and an object of veneration for countless fans, he was also the best writer the genre ever produced. And believe me, I know how boring this sounds. Frankly, I’d love to come up with a contrarian stance—that Heinlein is interesting primarily for his historical significance, that he’s revered mostly out of nostalgia, or that a handful of masterpieces allow us to overlook the fact that much of what he wrote was routine. But none of this is true. Of all the science fiction writers I’ve read, Heinlein is consistently the most compelling author, the most interesting thinker, and the most versatile artist. He’s the one writer of his era who could seemingly do anything, and who actually did it over an extended period of time for a big popular audience: great ideas, meticulously developed science and technology, worldbuilding, plot, action, character, philosophy, style. Heinlein was given what the sports writer Bill Simmons likes to call the “everything” package at the car wash, and he more than lived up to it. To a very real extent, Heinlein was the golden age of science fiction, and it’s hard to imagine John W. Campbell doing any of it without him.

This doesn’t mean that Heinlein was a perfect writer. For all the smart, tough, attractive women in his fiction, most of them ultimately come across as desirable fantasy objects for a certain kind of man. (The one really likable, compelling female character in his work, aside from Podkayne of Mars and Hazel Stone in The Rolling Stones, is Cynthia Randall in “The Unpleasant Profession of Jonathan Hoag.”) He never entirely lost the didactic streak that undermines his first unpublished novel, For Us, the Living, even if he advanced so rapidly in craft that it didn’t really matter. His late novels are a mixed bag, but they were never anything less than intensely personal, and they could hardly have been written by anyone else. And it goes without saying—or maybe it doesn’t—that merely because Heinlein was the strongest writer, sentence by sentence, in the history of the genre, it doesn’t mean that he was right about everything, or even about most things. As you read his stories, you find yourself nodding in agreement, and it’s only later that you start to raise reasonable objections. A novel like Starship Troopers is so cunningly constructed around its central argument that it can take you a while to realize how completely the author has stacked the deck. Heinlein liked to say that he was only trying to inspire people to ask the right questions, which isn’t untrue, although it seems a little disingenuous. He’s the most interesting case study I know on the difference between artistic mastery and good advice. They aren’t always the same thing, but they aren’t mutually exclusive, either: they coincide some but not all of the time, which is why the reader has to pay close attention.

Tunnel in the Sky

If I wanted to give a new reader a showcase for Heinlein’s talents, I’d probably start with his early, wonderful novella “If This Goes On—,” but I’d also consider recommending a few of his juveniles. These are the twelve books that he wrote for Scribner’s between 1947 and 1958, and although they were originally intended for young adults, they exemplify most of his strengths and almost none of his flaws. Heinlein explicitly conceived them as an updated version of the Horatio Alger books that he had loved growing up, and his pedagogical tendencies are both fully indulged and totally charming. The moral precepts he’s trying to inculcate couldn’t be more straightforward: “Hard work is rewarded.” “Studying hard pays off, in happiness as well as in money.” “Stand on your own feet.” And because he saw a strong technical education as the royal road to the stars, these books amount to the best propaganda imaginable for a career in the sciences. They’re filled with the kind of lectures—how a spaceship works, the physics of zero gravity, the design of a spacesuit—that most writers are rightly discouraged from including, but which many readers like me secretly crave, and Heinlein serves them up with great style. There’s no question that they inspired countless young people to go into science and engineering, which makes me regret the fact that he deliberately excluded half of his potential audience:

I established what has continued to be my rule for writing for youngsters. Never write down to them. Do not simplify the vocabulary nor the intellectual concepts. To this I added subordinate rules: No real love interest and female characters should only be walk-ons.

You could justify this by saying that these books were marketed by the publisher toward boys anyway, and that most of them wouldn’t have patience for girls. But it still feels like a lost opportunity.

Of all the juveniles, my favorite is Tunnel in the Sky, which starts out by anticipating The Hunger Games or even Battle Royale, moves into Lord of the Flies territory, and winds up as something unforgettably strange and moving. But they’re all worth reading, except maybe the aptly titled Between Planets, a transitional book that plays like Asimov at his most indifferent. Rocket Ship Galileo sends Tom Swift to the moon; Space Cadet looks ahead to Starship Troopers, but also Ender’s Game; Red Planet is terrifically exciting, and provides the first instance in which the adults take over the story from the kids; Farmer in the Sky is flawless hard science fiction; Starman Jones and The Rolling Stones come the closest to the ideal of a boy’s book of adventure in space; The Star Beast is uneven, but appealingly peculiar; Time for the Stars is a great time-dilation story; Citizen of the Galaxy has a lot of fun updating Kipling’s Kim for the future; and Have Space Suit—Will Travel begins as a lark, then grows gradually deeper and more resonant, to the point where I’m halfway convinced that it was one of Madeleine L’Engle’s primary inspirations for A Wrinkle in Time. Heinlein’s uncanny ability to follow his imagination into odd byways without losing momentum, which is possibly his most impressive trick, is never on greater display than it is here. The best sequences, as in Starship Troopers, often take place in what amounts to basic training, and many of the juveniles fall into the same curious pattern: after a hundred fascinating pages about the hero’s education, there’s a sense of loss when the actual plot kicks in, as when Rocket Ship Galileo settles for a third act about Nazis in space. We’ve seen most of these crises before, and other writers, as well as Heinlein, will give us plenty of space battles and close escapes. But we’ve never been educated this well.

“If she was going to run, it had to be now…”


"Maddy only nodded..."

Note: This post is the fifty-sixth installment in my author’s commentary for Eternal Empire, covering Chapter 55. You can read the previous installments here.

In general, an author should try to write active protagonists in fiction, for much the same reason that it’s best to use the active voice, rather than the passive, whenever you can. It isn’t invariably the right choice, but it’s better often enough that it makes sense to use it when you’re in doubt—which, when you’re writing a story, is frankly most of the time. In The Elements of Style, Strunk and White list the reasons why the active voice is usually superior: it’s more vigorous and direct, it renders the writing livelier and more emphatic, and it often makes the sentence shorter. It’s a form of insurance that guards against some of the vices to which writers, even experienced ones, are prone to succumbing. There are few stories that wouldn’t benefit from an infusion of force, and since our artistic calculations are always imprecise, a shrewd writer will do what he or she can to err on the side of boldness. This doesn’t mean that the passive voice doesn’t have a place, but John Gardner’s advice in The Art of Fiction, as usual, is on point:

The passive voice is virtually useless in fiction…Needless to say, the writer must judge every case individually, and the really good writer may get away with just about anything. But it must be clear that when the writer makes use of the passive he knows he’s doing it and has good reason for what he does.

And most of the same arguments apply to active characters. All else being equal, an active hero or villain is more engaging than a passive victim of circumstance, and when you’re figuring out a plot, it’s prudent to construct the events whenever possible so that they emerge from the protagonist’s actions. (Or, even better, to come up with an active, compelling central character and figure out what he or she would logically do next.) This is the secret goal behind the model of storytelling, as expounded most usefully by David Mamet in On Directing Film, that conceives of a plot as a series of objectives, each one paired with a concrete action. It’s designed to maintain narrative clarity, but it also results in characters who want things and who take active measures to attain them. When I follow the slightly mechanical approach of laying out the objectives and actions of a scene, one beat after another, it gives the story a crucial backbone, but it also usually leads to the creation of an interesting character, almost by accident. If nothing else, it forces me to think a little harder, and it ensures that the building blocks of the story itself—which are analogous, but not identical, to the sentences that compose it—are written in the narrative equivalent of the active voice. And just as the active voice is generally preferable to the passive voice, in the absence of any other information, it’s advisable to focus on the active side when you aren’t sure what kind of story you’re writing: in the majority of cases, it’s simply more effective.

"If she was going to run, it had to be now..."

Of course, there are times when passivity is an important part of the story, just as the passive voice can be occasionally necessary to convey the ideas that the writer wants to express. The world is full of active and passive personalities, and of people who don’t have control over important aspects of their lives, and there’s a sense in which plots—or genres as a whole—that are built around action leave meaningful stories untold. This is true of the movies as well, as David Thomson memorably observes:

So many American films are pledged to the energy that “breaks out.” Our stories promote the hope of escape, of beginning again, of beneficial disruptions. One can see that energy—hopeful, and often damaging, but always romantic—in films as diverse as The Searchers, Citizen Kane, Mr. Smith Goes to Washington, Run of the Arrow, Rebel Without a Cause, Vertigo, Bonnie and Clyde, Greed, and The Fountainhead. No matter how such stories end, explosive energy is endorsed…Our films are spirals of wish fulfillment, pleas for envy, the hustle to get on with the pursuit of happiness.

One of the central goals of modernist realism has been to give a voice to characters who would otherwise go unheard, precisely because of their lack of conventional agency. And it’s a problem that comes up even in suspense: a plot often hinges on a character’s lack of power, less as a matter of existential helplessness than because of a confrontation with a formidable antagonist. (A conspiracy novel is essentially about that powerlessness, and it emerged as a subgenre largely as a way to allow suspense to deal with these issues.)

So how do you tell a story, or even write a scene, in which the protagonist is powerless? A good hint comes from Kurt Vonnegut, who wrote: “I don’t praise plots as accurate representations of life, but as ways to keep readers reading. When I used to teach creative writing, I would tell the students to make their characters want something right away—even if it’s only a glass of water. Characters paralyzed by the meaninglessness of modern life still have to drink water from time to time.” This draws a useful distinction, I think, between the two functions of the active mode: as a reflection of reality and as a tool to structure the reader’s experience. You can use it in the latter sense even in stories or scenes in which helplessness is the whole point, just as you can use the active voice to increase the impact of prose that is basically static or abstract. In Chapter 55 of Eternal Empire, for example, Maddy finds herself in as vulnerable a position as can be imagined: she’s in the passenger seat of a car being driven by a woman whom she’s just realized is her mortal enemy. There isn’t much she can plausibly do to defend herself, but to keep her from becoming entirely passive, I gave her a short list of actions to perform: she checks her pockets for potential weapons, unlocks the door on her side as quietly as she can, and looks through the windshield to get a sense of their location. Most crucially, at the moment when it might be possible to run, she decides to stay where she is. The effect is subtle, but real. Maddy isn’t in control of her situation, but she’s in control of herself, and I think that the reader senses this. And it’s in scenes like this, when the action is at a minimum, that the active mode really pays off…
