Alec Nevala-Lee

Thoughts on art, creativity, and the writing life.

Posts Tagged ‘The Atlantic’

The pursuit of trivia

Over the last few months, my wife and I have been obsessively playing HQ Trivia, an online game show that until recently was available only on Apple devices. If you somehow haven’t encountered it by now, it’s a live video broadcast, hosted by the weirdly ingratiating comedian Scott Rogowsky, in which players are given the chance to answer twelve multiple-choice questions. If you get one wrong, you’re eliminated, but if you make it to the end, you split the prize—which ranges from a few hundred to thousands of dollars—with the remaining contestants. Early on, my wife and I actually made it to the winner’s circle four times, earning a total of close to fifty bucks. (Unfortunately, the game’s payout minimum means that we currently have seventeen dollars that we can’t cash out until we’ve won again, which at this point seems highly unlikely.) That was back when the pool of contestants on a typical evening consisted of fewer than ten thousand players. Last night, there were well over a million, which set a new record. To put that number in perspective, that’s more than twice the number of people who watched the first airing of the return of Twin Peaks. It’s greater than the viewership of the average episode of Girls. In an era when many of us watch even sporting events, award ceremonies, or talk shows on a short delay, HQ Trivia obliges its viewers to pay close attention at the same time for ten minutes or more at a stretch. And we’re at a point where it feels like a real accomplishment to force any live audience, which is otherwise so balkanized and diffused, to focus on this tiny node of content.

Not surprisingly, the game has inspired a certain amount of curiosity about its ultimate intentions. It runs no advertisements of any kind, with a prize pool funded entirely by venture capital. But its plans aren’t exactly a mystery. As the reporter Todd Spangler writes in Variety:

So how do HQ Trivia’s creators plan to make money, instead of just giving it away? [Co-founder Rus] Yusupov said monetization is not currently the company’s focus. That said, it’s “getting a ton of interest from brands and agencies who want to collaborate and do something fun,” he added. “If we do any brand integrations or sponsors, the focus will be on making it enhance the gameplay,” Yusupov said. “For a user, the worst thing is feeling like, ‘I’m being optimized—I’m the product now.’ We want to make a great game, and make it grow and become something really special.”

It’s worth remembering that this game launched only this past August, and that we’re at a very early stage in its development, which has shrewdly focused on increasing its audience without any premature attempts at turning a profit. Startups are often criticized for focusing on metrics like “clicks” or “eyeballs” without showing how to turn them into revenue, but for HQ, it makes a certain amount of sense—these are literal eyeballs, all demonstrably turned to the same screen at once, which yields the closest thing that anyone has seen in years to a captive audience. When the time comes for it to approach sponsors, it’s going to present a compelling case indeed.

But the specter of a million users glued simultaneously to their phones, hanging on Scott Rogowsky’s every word, fills some onlookers with uneasiness. Rogowsky himself has joked on the air about the comparisons to Black Mirror, and several commentators have taken it even further. Ian Bogost says in The Atlantic:

Why do I feel such dread when I play? It’s not the terror of losing, or even that of being embarrassed for answering questions wrong in front of my family and friends…It’s almost as if HQ is a fictional entertainment broadcast, like the kind created to broadcast the Hunger Games in the fictional nation of Panem. There, the motion graphics, the actors portraying news or talk-show hosts, the sets, the chyrons—they impose the grammar of television in order to recreate it, but they contort it in order to emphasize that it is also fictional…HQ bears the same sincere fakery, but seems utterly unaware that it is doing so.

And Miles Surrey of The Ringer envisions a dark future, over a century from now, in which playing the app is compulsory:

Scott—or “Trill Trebek,” or simply “God”—is a messianic figure to the HQties, the collective that blindly worships him, and a dictatorial figure to the rest of us…I made it to question 17. My children will eat today…You need to delete HQ from your phones. What appears to be an exciting convergence of television and app content is in truth the start of something terrifying, irreparable, and dangerous. You are conditioned to stop what you’re doing twice a day and play a trivia game—that is just Phase 1.

Yet I suspect that the real reason that this game feels so sinister to some observers is that it marks a return to a phenomenon that we thought we’d all left behind, and which troubled us subconsciously in ways that we’re only starting to grasp. It’s appointment television. In my time zone, the game airs around eight o’clock at night, which happens to be when I put my daughter to bed. I never know exactly how long the process will take—sometimes she falls asleep at once, but she tends to stall—so I usually get downstairs to join my wife about five or ten minutes later. By that point, the game has begun, and I often hear her say glumly: “I got out already.” And that’s it. It’s over until the same time tomorrow. Even if there were a way to rewind, there’s no point, because the money has already been distributed and nothing else especially interesting happened. (The one exception was the episode that aired on the day that one of the founders threatened to fire Rogowsky in retaliation for a profile in The Daily Beast, which marked one of the few times that the show’s mask seemed to crack.) But believe it or not, this is how we all used to watch television. We couldn’t record, pause, or control what was on, which is a fact that my daughter finds utterly inexplicable whenever we stay in a hotel room. It was a collective experience, but we also conducted it in relative isolation, except from the people who were in the same room as we were. That’s true of HQ as well, which moves at such a high speed that it’s impossible to comment on it on social media without getting thrown off your rhythm. These days, many of us only watch live television together at shared moments of national trauma, and HQ is pointedly the opposite. It’s trivial, but we have no choice but to watch it at the exact same time, with no chance of saving, pausing, or sharing. The screen might be smaller, but otherwise, it’s precisely what many of us did for decades. And if it bothers us now, it’s only because we’ve realized how dystopian it was all along.

Written by nevalalee

January 15, 2018 at 9:20 am

The two hawks

I spent much of yesterday thinking about Mike Pence and a few Israeli hawks, although perhaps not the sort that first comes to mind. Many of you have probably seen the excellent profile by McKay Coppins that ran this week in The Atlantic, which attempts to answer a question that is both simpler and more complicated than it might initially seem—namely how a devout Christian like Pence can justify hitching his career to the rise of a man whose life makes a mockery of all the ideals that most evangelicals claim to value. You could cynically assume that Pence, like so many others, has coldly calculated that Trump’s support on a few key issues, like abortion, outweighs literally everything else that he could say or do, and you might well be right. But Pence also seems to sincerely believe that he’s an instrument of divine will, a conviction that dates back at least to his successful campaign for the House of Representatives. Coppins writes:

By the time a congressional seat opened up ahead of the 2000 election, Pence was a minor Indiana celebrity and state Republicans were urging him to run. In the summer of 1999, as he was mulling the decision, he took his family on a trip to Colorado. One day while horseback riding in the mountains, he and Karen looked heavenward and saw two red-tailed hawks soaring over them. They took it as a sign, Karen recalled years later: Pence would run again, but this time there would be “no flapping.” He would glide to victory.

This anecdote caught my eye for reasons that I’ll explain in a moment, but this version leaves out a number of details. As far as I can determine, it first appears in an article that ran in Roll Call back in 2010. It mentions that Pence keeps a plaque on his desk that reads “No Flapping,” and it places the original incident, curiously, in Theodore Roosevelt National Park in North Dakota, not in Colorado:

“We were trying to make a decision as a family about whether to sell our house, move back home and make another run for Congress, and we saw these two red-tailed hawks coming up from the valley floor,” Pence says. He adds that the birds weren’t flapping their wings at all; instead, they were gliding through the air. As they watched the hawks, Pence’s wife told him she was onboard with a third run. “I said, ‘If we do it, we need to do it like those hawks. We just need to spread our wings and let God lift us up where he wants to take us,’” Pence remembers. “And my wife looked at me and said, ‘That’ll be how we do it, no flapping.’ So I keep that on my desk to remember every time my wings get sore, stop flapping.”

Neither article mentions it, but I’m reasonably sure that Pence was thinking of the verse in the Book of Job, which he undoubtedly knows well, that marks the only significant appearance of a hawk in the Bible: “Does the hawk fly by your wisdom, and stretch her wings toward the south?” As one commentary notes, with my italics added: “Aside from calling attention to the miraculous flight, this might refer to migration, or to the wonderful soaring exhibitions of these birds.”

Faithful readers of this blog might recall that earlier this year, I spent three days tracing the movements of a few hawks in the life of another singular figure—the Israeli psychic Uri Geller. In the book Uri, which presents its subject as a messianic figure who draws his telekinetic and precognitive abilities from extraterrestrials, the parapsychological researcher Andrija Puharich recounts a trip to Tel Aviv, where he quickly became convinced of Geller’s powers. While driving through the countryside on New Year’s Day of 1972, Puharich saw two white hawks, followed by others at his hotel two days later:

At times one of the birds would glide in from the sea right up to within a few meters of the balcony; it would flutter there in one spot and stare at me directly in the eyes. It was a unique experience to look into the piercing, “intelligent” eyes of a hawk. It was then that I knew I was not looking into the eyes of an earthly hawk. This was confirmed about 2pm when Uri’s eyes followed a feather, loosened from the hawk, that floated on an updraft toward the top of the Sharon Tower. As his eye followed the feather to the sky, he was startled to see a dark spacecraft parked directly over the hotel.

Geller said that the birds, which he incorrectly claimed weren’t native to Israel, had been sent to protect them. “I dubbed this hawk ‘Horus’ and still use this name each time he appears to me,” Puharich concludes, adding that he saw it on two other occasions. And according to Robert Anton Wilson’s book Cosmic Trigger, the following year, the writer Saul-Paul Sirag was speaking to Geller during an LSD trip when he saw the other man’s head turn into that of a “bird of prey.”

In my original posts, I pointed out that these stories were particularly striking in light of contemporaneous events in the Middle East—much of the action revolves around Geller allegedly receiving information from a higher power about a pending invasion of Israel by Egypt, which took place two years later, and Horus was the Egyptian god of war. (Incidentally, Geller, who is still around, predicted last year that Donald Trump would win the presidential election, based primarily on the fact that Trump’s name contains eleven letters. Geller has a lot to say about the number eleven, which, if you squint just right, looks a bit like two hawks perched side by side, their heads in profile.) And it’s hard to read about Pence’s hawks now without thinking about recent developments in that part of the world. Trump’s policy toward Israel is openly founded on his promises to American evangelicals, many of whom are convinced that the Jews have a role to play in the end times. Pence himself tiptoes right up to the edge of saying this in an interview quoted by Coppins: “My support for Israel stems largely from my personal faith. In the Bible, God promises Abraham, ‘Those who bless you I will bless, and those who curse you I will curse.’” Which might be the most revealing statement of all. The verse that I mentioned earlier is uttered by God himself, who speaks out of the whirlwind with an accounting of his might, which is framed as a sufficient response to Job’s lamentations. You could read it, if you like, as an argument that power justifies suffering, which might be convincing when presented by the divine presence, but less so by men willing to distort their own beliefs beyond all recognition for the sake of their personal advantage. And here’s how the passage reads in full:

Does the hawk fly by your wisdom, and spread its wings toward the south? Does the eagle mount up at your command, and make its nest on high? On the rock it dwells and resides, on the crag of the rock and the stronghold. From there it spies out the prey; its eyes observe from afar. Its young ones suck up blood; and where the slain are, there it is.

Written by nevalalee

December 6, 2017 at 9:04 am

The secret villain

Note: This post alludes to a plot point from Pixar’s Coco.

A few years ago, after Frozen was first released, The Atlantic ran an essay by Gina Dalfonzo complaining about the moment—fair warning for a spoiler—when Prince Hans was revealed to be the film’s true villain. Dalfonzo wrote:

That moment would have wrecked me if I’d seen it as a child, and the makers of Frozen couldn’t have picked a more surefire way to unsettle its young audience members…There is something uniquely horrifying about finding out that a person—even a fictional person—who’s won you over is, in fact, rotten to the core. And it’s that much more traumatizing when you’re six or seven years old. Children will, in their lifetimes, necessarily learn that not everyone who looks or seems trustworthy is trustworthy—but Frozen’s big twist is a needlessly upsetting way to teach that lesson.

Whatever you might think of her argument, it’s obvious that Disney didn’t buy it. In fact, the twist in question—in which a seemingly innocuous supporting character is exposed in the third act as the real bad guy—has appeared so monotonously in the studio’s recent movies that I was already complaining about it a year and a half ago. By my count, the films that fall back on this convention include not just Frozen, but Wreck-It Ralph, Zootopia, and now the excellent Coco, which implies that the formula is spilling over from its parent studio to Pixar. (To be fair, it goes at least as far back as Toy Story 2, but it didn’t become the equivalent of the house style until about six or seven years ago.)

This might seem like a small point of storytelling, but it interests me, both because we’ve been seeing it so often and because it’s very different from the stock Disney approach of the past, in which the lines between good and evil were clearly demarcated from the opening frame. In some ways, it’s a positive development—among other things, it means that characters are no longer defined primarily by their appearance—and it may just be a natural instance of a studio returning repeatedly to a trick that has worked in the past. But I can’t resist a more sinister reading. All of the examples that I’ve cited come from the period since John Lasseter took over as the chief creative officer of Disney Animation Studios, and as we’ve recently learned, he wasn’t entirely what he seemed, either. A Variety article recounts:

For more than twenty years, young women at Pixar Animation Studios have been warned about the behavior of John Lasseter, who just disclosed that he is taking a leave due to inappropriate conduct with women. The company’s cofounder is known as a hugger. Around Pixar’s Emeryville, California, offices, a hug from Lasseter is seen as a mark of approval. But among female employees, there has long been widespread discomfort about Lasseter’s hugs and about the other ways he showers attention on young women…“Just be warned, he likes to hug the pretty girls,” [a former employee] said she was told. “He might try to kiss you on the mouth.” The employee said she was alarmed by how routine the whole thing seemed. “There was kind of a big cult around John,” she says.

And a piece in The Hollywood Reporter adds: “Sources say some women at Pixar knew to turn their heads quickly when encountering him to avoid his kisses. Some used a move they called ‘the Lasseter’ to prevent their boss from putting his hands on their legs.”

Of all the horror stories that have emerged lately about sexual harassment by men in power, this is one of the hardest for me to read, and it raises troubling questions about the culture of a company that I’ve admired for a long time. (Among other things, it sheds a new light on the Pixar motto, as expressed by Andrew Stanton, that I’ve quoted here before: “We’re in this weird, hermetically sealed freakazoid place where everybody’s trying their best to do their best—and the films still suck for three out of the four years it takes to make them.” But it also goes without saying that it’s far easier to fail repeatedly on your way to success if you’re a white male who fits a certain profile. And these larger cultural issues evidently contributed to the departure from the studio of Rashida Jones and her writing partner.) It also makes me wonder a little about the movies themselves. After the news broke about Lasseter, there were comments online about his resemblance to Lotso in Toy Story 3, who announces jovially: “First thing you gotta know about me—I’m a hugger!” But the more I think about it, the more this seems like a bona fide inside joke about a situation that must have been widely acknowledged. As a recent article in Deadline reveals:

[Lasseter] attended some wrap parties with a handler to ensure he would not engage in inappropriate conduct with women, say two people with direct knowledge of the situation…Two sources recounted Lasseter’s obsession with the young character actresses portraying Disney’s Fairies, a product line built around the character of Tinker Bell. At the animator’s insistence, Disney flew the women to a New York event. One Pixar employee became the designated escort as Lasseter took the young women out drinking one night, and to a party the following evening. “He was inappropriate with the fairies,” said the former Pixar executive, referring to physical contact that included long hugs. “We had to have someone make sure he wasn’t alone with them.”

Whether or not the reference in Toy Story 3 was deliberate—the script is credited to Michael Arndt, based on a story by Lasseter, Stanton, and Lee Unkrich, and presumably with contributions from many other hands—it must have inspired a few uneasy smiles of recognition at Pixar. And its emphasis on seemingly benign figures who reveal an unexpected dark side, including Lotso himself, can easily be read as an expression, conscious or otherwise, of the tensions between Lasseter’s public image and his long history of misbehavior. (I’ve been thinking along similar lines about Kevin Spacey, whose “sheer meretriciousness” I identified a long time ago as one of his most appealing qualities as an actor, and of whom I once wrote here: “Spacey always seems to be impersonating someone else, and he does the best impersonation of a great actor that I’ve ever seen.” And it seems now that this calculated form of pretending amounted to a way of life.) Lasseter’s influence over Pixar and Disney is so profound that it doesn’t seem farfetched to see their films as an expression both of his internal divisions and of the reactions of those around him, and you don’t need to look far for parallel examples. My daughter, as it happens, knows exactly who Lasseter is—he’s the big guy in the Hawaiian shirt who appears at the beginning of all of her Hayao Miyazaki movies, talking about how much he loves the film that we’re about to see. I don’t doubt that he does. But not only do Miyazaki’s greatest films lack villains entirely; the twist generally runs in the opposite direction, in which a character who initially seems forbidding or frightening is revealed to be kinder than you think. Simply on the level of storytelling, I know which version I prefer. Under Lasseter, Disney and Pixar have produced some of the best films of recent decades, but they also have their limits. And it only stands to reason that these limitations might have something to do with the man who was more responsible than anyone else for bringing these movies to life.

Written by nevalalee

November 30, 2017 at 8:27 am

The notebook and the brain

Nearly two decades ago, the philosophers Andy Clark and David Chalmers published a paper titled “The Extended Mind.” Its argument, which no one who encounters it is likely to forget, is that the human mind isn’t confined to the bounds of the skull, but includes many of the tools and external objects that we use to think, from grocery lists to Scrabble tiles. The authors present an extended thought experiment about a man named Otto who suffers from Alzheimer’s disease, which obliges him to rely on his notebook to remember how to get to a museum. They argue that this notebook is effectively occupying the role of Otto’s memory, but only because it meets a particular set of criteria:

First, the notebook is a constant in Otto’s life—in cases where the information in the notebook would be relevant, he will rarely take action without consulting it. Second, the information in the notebook is directly available without difficulty. Third, upon retrieving information from the notebook he automatically endorses it. Fourth, the information in the notebook has been consciously endorsed at some point in the past, and indeed is there as a consequence of this endorsement.

The authors conclude: “The information in Otto’s notebook, for example, is a central part of his identity as a cognitive agent. What this comes to is that Otto himself is best regarded as an extended system, a coupling of biological organism and external resources…Once the hegemony of skin and skull is usurped, we may be able to see ourselves more truly as creatures of the world.”

When we think and act, we become agents that are “spread into the world,” as Clark and Chalmers put it, and this extension is especially striking during the act of writing. In an article that appeared just last week in The Atlantic, “You Think With the World, Not Just Your Brain,” Sam Kriss neatly sums up the problem: “Language sits hazy in the world, a symbolic and intersubjective ether, but at the same time it forms the substance of our thought and the structure of our understanding. Isn’t language thinking for us?” He continues:

This is not, entirely, a new idea. Plato, in his Phaedrus, is hesitant or even afraid of writing, precisely because it’s a kind of artificial memory, a hypomnesis…Writing, for Plato, is a pharmakon, a “remedy” for forgetfulness, but if taken in too strong a dose it becomes a poison: A person no longer remembers things for themselves; it’s the text that remembers, with an unholy autonomy. The same criticisms are now commonly made of smartphones. Not much changes.

The difference, of course, is that our own writing implies the involvement of the self in the past, which is a dialogue that doesn’t exist when we’re simply checking information online. Clark and Chalmers, who wrote at a relatively early stage in the history of the Internet, are careful to make this distinction: “The Internet is likely to fail [the criteria] on multiple counts, unless I am unusually computer-reliant, facile with the technology, and trusting, but information in certain files on my computer may qualify.” So can the online content that we make ourselves—I’ve occasionally found myself checking this blog to remind myself what I think about something, and I’ve outsourced much of my memory to Google Photos.

I’ve often written here about the dialogue between our past, present, and future selves implicit in the act of writing, whether we’re composing a novel or jotting down a Post-It note. Kriss quotes Jacques Derrida on the humble grocery list: “At the very moment ‘I’ make a shopping list, I know that it will only be a list if it implies my absence, if it already detaches itself from me in order to function beyond my ‘present’ act and if it is utilizable at another time.” And I’m constantly aware of the book that I’m writing as a form of time travel. As I mentioned last week, I’m preparing the notes, which means that I often have to make sense of something that I wrote down over two years ago. There are times when the presence of that other self is so strong that it feels as if he’s seated next to me, even as I remain conscious of the gap between us. (For one thing, my past self didn’t know nearly as much about John W. Campbell.) And the two of us together are wiser, more effective, and more knowledgeable than either one of us alone, as long as we have writing to serve as a bridge between us. If a notebook is a place for organizing information that we can’t easily store in our heads, that’s even more true of a book written for publication, which serves as a repository of ideas to be manipulated, rearranged, and refined over time. This can lead to the odd impression that your book somehow knows more than you do, which it probably does. Knowledge is less about raw data than about the connections between them, and a book is the best way we have for compiling our moments of insight in a form that can be processed more or less all at once. We measure ourselves against the intelligence of authors in books, but we’re also comparing two fundamentally different things. Whatever ideas I have right now on any given subject probably aren’t as good as a compilation of everything that occurred to my comparably intelligent double over the course of two or three years.

This implies that most authors are useful not so much for their deeper insights as for their greater availability, which allows them to externalize their thoughts and manipulate them in the real world for longer and with more intensity than their readers can. (Campbell liked to remind his writers that the magazine’s subscribers were paying them to think on their behalf.) I often remember one of my favorite anecdotes about Isaac Asimov, which he shares in the collection Opus 100. He was asked to speak on the radio on nothing less than the human brain, on which he had just published a book. Asimov responded: “Heavens! I’m not a brain expert.” When the interviewer pointed out that he had just written an entire book on the subject, Asimov explained:

“Yes, but I studied up for the book and put in everything I could learn. I don’t know anything but the exact words in the book, and I don’t think I can remember all those in a pinch. After all,” I went on, a little aggrieved, “I’ve written books on dozens of subjects. You can’t expect me to be expert on all of them just because I’ve written books about them.”

Every author can relate to this, and there are times when “I don’t know anything but the exact words in the book” sums up my feelings about my own work. Asimov’s case is particularly fascinating because of the scale involved. By some measures, he was the most prolific author in American history, with over four hundred books to his credit, and even if we strip away the anthologies and other works that he used to pad the count, it’s still a huge amount of information. To what extent was Asimov coterminous with his books? The answer, I think, lies somewhere between “Entirely” and “Not at all,” and there was presumably more of Asimov in his memoirs than in An Easy Introduction to the Slide Rule. But he’s only an extreme version of a phenomenon that applies to every last one of us. When the radio interviewer asked incredulously if he was an expert on anything, Asimov responded: “I’m an expert on one thing. On sounding like an expert.” And that’s true of everyone. The notes that we take allow us to pose as experts in the area that matters the most—the world around us, and even our own lives.

Written by nevalalee

October 25, 2017 at 8:40 am

The men who sold the moonshot

When you ask Google whether we should build houses on the ocean, it gives you an ordinary page of search results. If you ask Google X, the subsidiary within the company responsible for investigating “moonshot” projects like self-driving cars and space elevators, the answer that you get is rather different, as Derek Thompson reports in the cover story for this month’s issue of The Atlantic:

Like a think-tank panel with the instincts of an improv troupe, the group sprang into an interrogative frenzy. “What are the specific economic benefits of increasing housing supply?” the liquid-crystals guy asked. “Isn’t the real problem that transportation infrastructure is so expensive?” the balloon scientist said. “How sure are we that living in densely built cities makes us happier?” the extradimensional physicist wondered. Over the course of an hour, the conversation turned to the ergonomics of Tokyo’s high-speed trains and then to Americans’ cultural preference for suburbs. Members of the team discussed commonsense solutions to urban density, such as more money for transit, and eccentric ideas, such as acoustic technology to make apartments soundproof and self-driving housing units that could park on top of one another in a city center. At one point, teleportation enjoyed a brief hearing.

Thompson writes a little later: “I’d expected the team at X to sketch some floating houses on a whiteboard, or discuss ways to connect an ocean suburb to a city center, or just inform me that the idea was terrible. I was wrong. The table never once mentioned the words floating or ocean. My pitch merely inspired an inquiry into the purpose of housing and the shortfalls of U.S. infrastructure. It was my first lesson in radical creativity. Moonshots don’t begin with brainstorming clever answers. They start with the hard work of finding the right questions.”

I don’t know why Thompson decided to ask about “oceanic residences,” but I read this section of the article with particular interest, because about two years ago, I spent a month thinking about the subject intensively for my novella “The Proving Ground.” As I’ve described elsewhere, I knew early on in the process that it was going to be a story about the construction of a seastead in the Marshall Islands, which was pretty specific. There was plenty of background material available, ranging from general treatments of the idea in books like The Millennial Project by Marshall T. Savage—which had been sitting unread on my shelf for years—to detailed proposals for seasteads in the real world. The obvious source was The Seasteading Institute, a libertarian pipe dream funded by Peter Thiel that generated a lot of useful plans along the way, as long as you saw it as the legwork for a science fiction story, rather than as a project on which you were planning to actually spend fifty billion dollars. The difference between most of these proposals and the brainstorming session that Thompson describes is that they start with a floating city and then look for reasons to justify it. Seasteading is a solution in search of a problem. In other words, it’s science fiction, which often starts with a premise or setting that seems like it would lead to an exciting story and then searches for the necessary rationalizations. (The more invisible the process, the better.) And this can lead us to troubling places. As I’ve noted before, Thiel blames many of this country’s problems on “a failure of imagination,” and his nostalgia for vintage science fiction is rooted in a longing for the grand gestures that it embodied: the flying car, the seastead, the space colony. As he famously said six years ago to The New Yorker: “The anthology of the top twenty-five sci-fi stories in 1970 was, like, ‘Me and my friend the robot went for a walk on the moon,’ and in 2008 it was, like, ‘The galaxy is run by a fundamentalist Islamic confederacy, and there are people who are hunting planets and killing them for fun.'”

Google X isn’t immune to this tendency—Google Glass was, if anything, a solution in search of a problem—and some degree of science-fictional thinking is probably inherent to any such enterprise. In his article, Thompson doesn’t mention science fiction by name, but the whole division is clearly reminiscent of and inspired by the genre, down to the term “moonshot” and that mysterious letter at the end of its name. (Company lore claims that the “X” was chosen as “a purposeful placeholder,” but it’s hard not to think that it was motivated by the same impulse that gave us Dimension X, X Minus 1, Rocketship X-M, and even The X-Files.) In fact, an earlier article for The Atlantic looked at this connection in depth, and its conclusions weren’t altogether positive. Three years ago, in the same publication, Robinson Meyer quoted a passage from an article in Fast Company about the kinds of projects favored by Google X, but he drew a more ambivalent conclusion:

A lot of people might read that [description] and think: Wow, cool, Google is trying to make the future! But “science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams. We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been. The futurist Scott Smith calls these “flat-pack futures,” and they infect “science fictional” thinking.

He added: “I fear—especially when we talk about "science fiction"—that we miss the layeredness of the world, that many people worked to build it…Flying through space is awesome, but if technological advocates want not only to make their advances but to hold onto them, we had better learn the virtues of incrementalism.” (The contrast between Meyer’s skepticism and Thompson’s more positive take feels like a matter of access—it’s easier to criticize Google X’s assumptions when it’s being profiled by a rival magazine.)

But Meyer makes a good point, and science fiction’s mixed record at dealing with incrementalism is a natural consequence of its origins in popular fiction. A story demands a protagonist, which encourages writers to see scientific progress in terms of heroic figures. The early fiction of John W. Campbell returns monotonously to the same basic plot, in which a lone genius discovers atomic power and uses it to build a spaceship, drawing on the limitless resources of a wealthy and generous benefactor. As Isaac Asimov noted in his essay “Big, Big, Big”:

The thing about John Campbell is that he liked things big. He liked big men with big ideas working out big applications of their big theories. And he liked it fast. His big men built big weapons within days; weapons that were, moreover, without serious shortcomings, or at least, with no shortcomings that could not be corrected as follows: “Hmm, something’s wrong—oh, I see—of course.” Then, in two hours, something would be jerry-built to fix the jerry-built device.

This works well enough in pulp adventure, but after science fiction began to take itself seriously as prophecy, it fossilized into the notion that all problems can be approached as provinces of engineering and solved by geniuses working alone or in small groups. Elon Musk has been compared to Tony Stark, but he’s really the modern incarnation of a figure as old as The Skylark of Space, and the adulation that he still inspires shades into beliefs that are even less innocuous—like the idea that our politics should be entrusted to similarly big men. Writing of Google X’s Rapid Evaluation team, Thompson uses terms that would have made Campbell salivate: “You might say it’s Rapid Eval’s job to apply a kind of future-perfect analysis to every potential project: If this idea succeeds, what will have been the challenges? If it fails, what will have been the reasons?” Science fiction likes to believe that it’s better than average at this kind of forecasting. But it’s just as likely that it’s worse.

Written by nevalalee

October 11, 2017 at 9:02 am

The monotonous periodicity of genius

Yesterday, I read a passage from the book Music and Life by the critic and poet W.J. Turner that has been on my mind ever since. He begins with a sentence from the historian Charles Sanford Terry, who says of Bach’s cantatas: “There are few phenomena in the record of art more extraordinary than this unflagging cataract of inspiration in which masterpiece followed masterpiece with the monotonous periodicity of a Sunday sermon.” Turner objects to this:

In my enthusiasm for Bach I swallowed this statement when I first met it, but if Dr. Terry will excuse the expression, it is arrant nonsense. Creative genius does not work in this way. Masterpieces are not produced with the monotonous periodicity of a Sunday sermon. In fact, if we stop to think we shall understand that this “monotonous periodicity ” was exactly what was wrong with a great deal of Bach’s music. Bach, through a combination of natural ability and quite unparalleled concentration on his art, had arrived at the point of being able to sit down at any minute of any day and compose what had all the superficial appearance of being a masterpiece. It is possible that even Bach himself did not know which was a masterpiece and which was not, and it is abundantly clear to me that in all his large-sized works there are huge chunks of stuff to which inspiration is the last word that one could apply.

All too often, Turner implies, Bach leaned on his technical facility when inspiration failed or he simply felt indifferent to the material: “The music shows no sign of Bach’s imagination having been fired at all; the old Leipzig Cantor simply took up his pen and reeled off this chorus as any master craftsman might polish off a ticklish job in the course of a day’s work.”

I first encountered the Turner quotation in The New Listener’s Companion and Record Guide by B.H. Haggin, who cites his fellow critic approvingly and adds: “This seems to me an excellent description of the essential fact about Bach—that one hears always the operation of prodigious powers of invention and construction, but frequently an operation that is not as expressive as it is accomplished.” Haggin continues:

Listening to the six sonatas or partitas for unaccompanied violin, the six sonatas or suites for unaccompanied piano, one is aware of Bach’s success with the difficult problem he set himself, of contriving for the instrument a melody that would imply its underlying harmonic progressions between the occasional chords. But one is aware also that solving this problem was not equivalent to writing great or even enjoyable music…I hear only Bach’s craftsmanship going through the motions of creation and producing the external appearances of expressiveness. And I suspect that it is the name of Bach that awes listeners into accepting the appearance as reality, into hearing an expressive content which isn’t there, and into believing that if the content is difficult to hear, this is only because it is especially profound—because it is “the passionate, yet untroubled meditation of a great mind” that lies beyond “the composition’s formidable technical frontiers.”

Haggin confesses that he regards many pieces in The Goldberg Variations or The Well-Tempered Clavier as “examples of competent construction that are, for me, not interesting pieces of music.” And he sums up: “Bach’s way of exercising the spirit was to exercise his craftsmanship; and some of the results offer more to delight an interest in the skillful use of technique than to delight the spirit.”

As I read this, I was inevitably reminded of Christopher Orr’s recent article in The Atlantic, “The Remarkable Laziness of Woody Allen,” which I discussed here last week. Part of Orr’s case against Allen involves “his frenetic pace of one feature film a year,” which can only be described as monotonous periodicity. This isn’t laziness, of course—it’s the opposite—but Orr implies that the director’s obsession with productivity has led him to cut corners in the films themselves: “Ambition simply isn’t on the agenda.” Yet the funny thing is that this approach to making art, while extreme, is perfectly rational. Allen writes, directs, and releases three movies in the time it would take most directors to finish one, and when you look at his box office and awards history, you see that about one in three breaks through to become a financial success, an Oscar winner, or both. And Orr’s criticism of this process, like Turner’s, could only have been made by a professional critic. If you’re obliged to see every Woody Allen movie or have an opinion on every Bach cantata, it’s easy to feel annoyed by the lesser efforts, and you might even wish that the artist had only released the works in which his inspiration was at its height. For the rest of us, though, this really isn’t an issue. We get to skip Whatever Works or Irrational Man in favor of the occasional Match Point or Midnight in Paris, and most of us are happy if we can even recognize the cantata that has “Jesu, Joy of Man’s Desiring.” If you’re a fan, but not a completist, a skilled craftsman who produces a lot of technically proficient work in hopes that some of it will stick is following a reasonable strategy. As Malcolm Gladwell writes of Bach:

The difference between Bach and his forgotten peers isn’t necessarily that he had a better ratio of hits to misses. The difference is that the mediocre might have a dozen ideas, while Bach, in his lifetime, created more than a thousand full-fledged musical compositions. A genius is a genius, [Dean] Simonton maintains, because he can put together such a staggering number of insights, ideas, theories, random observations, and unexpected connections that he almost inevitably ends up with something great.
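To put rough numbers on that claim (and these are my numbers, not Gladwell’s or Simonton’s), suppose that any given composition has a small, independent chance of turning out to be a masterpiece, say three percent. The probability of producing at least one is then 1 − (1 − 0.03)^N, which works out to about thirty percent for a dozen works and a near certainty for a thousand. Treating each piece as an independent draw is obviously a cartoon of how creative work actually happens, but it makes the logic of sheer volume hard to argue with.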

As Simonton puts it: “Quality is a probabilistic function of quantity.” But if there’s a risk involved, it’s that an artist will become so used to producing technically proficient material on a regular basis that he or she will fall short when the circumstances demand it. Which brings us back to Bach. Turner’s remarks appear in a chapter on the Mass in B minor, which was hardly a throwaway—it’s generally considered to be one of Bach’s major works. For Turner, however, the virtuosity expressed in the cantatas allowed Bach to take refuge in cleverness even when there was more at stake: “I say that the pretty trumpet work in the four-part chorus of the Gloria, for example, is a proof that Bach was being consciously clever and brightening up his stuff, and that he was not at that moment writing with the spontaneity of those really creative moments which are popularly called inspired.” And he writes of the Kyrie, which he calls “monotonous”:

It is still impressive, and no doubt to an academic musician, with the score in his hands and his soul long ago defunct, this charge of monotony would appear incredible, but then his interest is almost entirely if not absolutely technical. It is a source of everlasting amazement to him to contemplate Bach’s prodigious skill and fertility of invention. But what do I care for Bach’s prodigious skill? Even such virtuosity as Bach’s is valueless unless it expresses some ulterior beauty or, to put it more succinctly, unless it is as expressive as it is accomplished.

And I’m not sure that he’s even wrong. It might seem remarkable to make this accusation of Bach, who is our culture’s embodiment of technical skill in the service of spiritual expression, but if the charge is going to have any weight at all, it has to hold at the highest level. William Blake once wrote: “Mechanical excellence is the only vehicle of genius.” He was right. But it can also be a vehicle, by definition, for literally everything else. And sometimes the real genius lies in being able to tell the difference.
