Over the weekend, the New York Times published an opinion piece by the writer Moises Velasquez-Manoff titled “What Biracial People Know.” Velasquez-Manoff, who, like me, is multiracial, makes many of the same points that I once did in a previous post on the subject, as when he writes: “I can attest that being mixed makes it harder to fall back on the tribal identities that have guided so much of human history, and that are now resurgent…You’re also accustomed to the idea of having several selves, and of trying to forge them into something whole.” He also highlights a lot of research of which I wasn’t previously aware, the most interesting being a study of facial recognition in multiracial babies:
By three months of age, biracial infants recognize faces more quickly than their monoracial peers, suggesting that their facial perception abilities are more developed. Kristin Pauker, a psychologist at the University of Hawaii at Manoa and one of the researchers who performed this study, likens this flexibility to bilingualism. Early on, infants who hear only Japanese, say, will lose the ability to distinguish L’s from R’s. But if they also hear English, they’ll continue to hear the sounds as separate. So it is with recognizing faces, Dr. Pauker says. Kids naturally learn to recognize kin from non-kin, in-group from out-group. But because they’re exposed to more human variation, the in-group for multiracial children seems to be larger.
As it happens, I’m terrible at remembering faces, so any advantage I once gained along those lines has long since faded away. But such findings are still intriguing, and they hint temptingly at broader conclusions. As Velasquez-Manoff says of our first biracial president: “His multitudinous self was, I like to think, part of what made him great.”
For obvious reasons, I’m wary of applying generalizations to any ethnic or racial group, including my own. But there’s something intuitively appealing about the notion that multiracial individuals are forced to develop certain advantageous forms of thinking in order to adapt. They don’t have a monopoly on the problem of forging an identity and figuring out the world around them, which, as Velasquez-Manoff notes, is “a defining experience of modernity.” But it isn’t hard to believe that they might have a slight head start. If you’re exposed to greater facial variety as an infant, the reasoning goes, you’ll acquire the skills that allow you to distinguish between individuals just a little bit earlier, and you can easily imagine how that small advantage might grow over time. (Although, by the same logic, babies surrounded by faces with similar racial characteristics might become better at distinguishing between slight variations. I’d be curious to know if this has ever been tested.) If there’s a theme here, it’s that multiracial people are shaped by a more intensive version of an experience common to all human beings. Velasquez-Manoff writes:
In a 2015 study, Sarah Gaither, an assistant professor at Duke, found that when she reminded multiracial participants of their mixed heritage, they scored higher in a series of word association games and other tests that measure creative problem solving. When she reminded monoracial people about their heritage, however, their performance didn’t improve…[But] when Dr. Gaither reminded participants of a single racial background that they, too, had multiple selves, by asking about their various identities in life, their scores also improved. “For biracial people, these racial identities are very salient,” she told me. “That said, we all have multiple social identities.”
In other words, we’re all living with these issues, and multiracial people just have to exercise those skills earlier and more often.
Yet I also need to tread carefully here, precisely because these conclusions are just the ones that somebody like me would like to believe. (When you extend these arguments to social patterns, which is a big leap in itself, you also get tripped up by problems of cause and effect. When Velasquez-Manoff writes that “cities and countries that are more diverse are more prosperous than homogeneous ones,” he doesn’t point out that the causal arrow might well run the other way.) Last week, in my post about the replication crisis in psychology, I noted that experiments that confirm what feels like common sense—or that allow us to score easy points against the Trump administration—are less likely to be scrutinized than others, and many of the studies that Velasquez-Manoff mentions here sound a lot like the kind that have proven hard to duplicate. At Harvard and Tel Aviv University, for instance, subjects “read essays that made an essentialist argument about race, and then [were asked] to solve word-association games and other puzzles.” The study found that participants who were “primed” with stereotypes performed less well on such tests than those who weren’t, and it concluded: “An essentialist mindset is indeed hazardous for creativity.” That seems all too reasonable. But the insidious ways in which race pervades our lives bear little resemblance to reading an essay and solving a word puzzle. Maybe multiracial people do, in fact, score higher on such tests when reminded of their mixed heritage, at least when it takes the form, as it did at Duke, of writing essays about their identities. But on an everyday basis, that “reminder” is more likely to take the form of being miscategorized and mispronounced, filling out forms that only allow one racial box to be checked, feeling defined by otherness, and being asked by well-meaning strangers: “So where are you from?” For all I know, these social cues may be equally conducive to creativity.
But I doubt that there’s ever been a study about it.
I’m not trying to criticize any specific study, and I’d love to embrace these findings—which is exactly why they need to be replicated. The problem of race is so pervasive and resistant to definition that it makes the average psychological experiment, with its clinical settings and word tests, seem all the more removed from reality. And multiracial people need to be conscious of the slippery slope involved in making any kind of claim about the uniqueness of their experience. (There’s also the huge, unstated point that what it means to be multiracial differs dramatically from one combination of races to another. If you look a certain way, that’s how you’re going to be treated, no matter how diverse your genetic background might be.) Velasquez-Manoff sees these studies as an argument in favor of diversity, which is certainly a case worth making. But creativity is just one factor in human life, and you don’t need to look far to sense the equally great advantages in being a member of a homogeneous racial, ethnic, or cultural group, particularly one that has been historically empowered. Tradition is a convenient crystallization of the experiences of the past, and most of us spend our lives falling back on the solutions that people who look like us have provided, whether it’s in politics, society, or religion. Such attitudes wouldn’t persist if they weren’t more than adequate in the vast majority of situations. Creativity is a last resort, a survival mechanism adopted by those who feel excluded from the larger community, unable to rely on the rules that others follow unquestioningly, and forced to improvise tactics in real time. It doesn’t always go well. Creative types are often miserable and frustrated, particularly in a world that runs the most smoothly on monolithic categories. There are times when all your cleverness can’t help you. And that’s what biracial people really know.
Note: I’m taking a few days off, so I’ll be republishing some of my favorite posts from earlier in this blog’s run. This post originally appeared, in a somewhat different form, on November 6, 2012.
I never wanted to be a moderate. Growing up, and especially in college, I believed in coming down strongly on one side or the other of any particular issue, and was drawn to the people around me who embraced similar extremes. I didn’t know much, but I knew that I wanted to be a writer, which to my eyes represented a clear choice between the compromises of an ordinary existence and a willingness to risk everything for the life of art. My favorite classical hero was the Achilles of the Iliad, who might waver or sulk into prolonged inaction, but always saw the world around him in stark terms, with cosmic emotions that refused to be bound by the standards of the society in which he lived. And although I hadn’t read On the Road, I suspect that I might have agreed with Kerouac’s initially inspiring and then increasingly annoying insistence that the only true people were the ones who burn “like fabulous yellow roman candles exploding like spiders across the stars.”
No one has ever compared a moderate to a roman candle, fabulous or otherwise. Yet as time went on, my views began to change. In many ways, this was just part of the process of growing up, which tends to nudge most of us toward the center, on the way to the natural conservatism of old age. But it also had something to do with the realities of becoming a writer. Writing for a living, at least on a daily basis, is less about staking out a bold claim into the unknown than about coming to terms with many small compromises. It’s tactical, not strategic, and encourages a natural pragmatism, at least for those of us who want to write more than a couple of novels. You learn to deal with problems as they occur, and a solution that works in a particular situation may no longer make sense when it comes up again. Above all else, as a writer, you need to figure out a way of life that is mostly free of hard external dislocations, which are murder on any kind of artistic productivity. Hence my favorite writing quote of all time, from Flaubert: “Be well-ordered in your life, and as ordinary as a bourgeois, in order to be violent and original in your work.”
All these things tend to encourage a kind of reasonable moderation, at least on the outside—there’s a reason why most writers have boring biographies. And in my own case, it also shapes the way I see the rest of the world. There aren’t a lot of clear answers in ethics or politics, and as much as we’d all like to be consistent, dealing with reality, like writing fiction, is more likely to impose a series of increasingly messy workarounds. A novel forces you to deal with issues of character, behavior, and society in a laboratory setting, and even when you control the terms of the experiment, the answers that you get are rarely the ones you set out to find. In a defense of moderate thinking in the New York Times, David Brooks once wrote: “This idea—that you base your agenda on your specific situation—may seem obvious, but immoderate people often know what their solutions are before they define the problems.” And this describes bad fiction as well as bad politics.
As a result, my own politics are sort of a hodgepodge, and like my fiction, they’ve been deeply shaped by the particulars of my life story. I’m a multicultural agnostic who has spent much of his life under the spell of various dead white males. Not surprisingly, my strongest political conviction remains that of the power of free speech, but I’ve also got a weird survivalist streak that once left me more neutral on issues like gun control—although I’ve since changed my mind about this. I spent years working in finance, and I mostly believe in the positive power of capitalism and free markets, but I also think that it leads to conditions of inequality that the government needs to address, for the good of the system as a whole. And I could go on. But the bottom line is that I’ve found that a writer, and maybe a citizen, needs to be less like Achilles than Odysseus: adaptable, pragmatic, capable of changing his plans when necessary, but always with an eye to finding his way home, even if it takes far longer than he hoped.
Note: I’m taking a short break this week, so I’ll be republishing a few posts from earlier in this blog’s run. This post originally appeared, in a slightly different form, on July 22, 2015.
The late E.L. Doctorow belonged to a select group of writers, including Toni Morrison, who were editors before they were novelists. When asked how his former vocation had influenced his work, he said:
Editing taught me how to break books down and put them back together. You learn values—the value of tension, of keeping tension on the page and how that’s done, and you learn how to spot self-indulgence, how you don’t need it. You learn how to become very free and easy about moving things around, which a reader would never do. A reader sees a printed book and that’s it. But when you see a manuscript as an editor, you say, Well this is chapter twenty, but it should be chapter three. You’re at ease in the book the way a surgeon is at ease in a human chest, with all the blood and guts and everything. You’re familiar with the material and you can toss it around and say dirty things to the nurse.
Doctorow—who had the word “doctor” right there in his name—wasn’t the first author to draw a comparison between writing and medicine, and in particular to surgery, which has a lot of metaphorical affinities with the art of fiction. It’s half trade school and half priesthood, with a vast body of written and unwritten knowledge, and as Atul Gawande has pointed out, even the most experienced practitioners can benefit from the use of checklists. What draws most artists to the analogy, though, is the surgeon’s perceived detachment and lack of sentimentality, and the idea that it’s a quality that can be acquired with sufficient training and experience. The director Peter Greenaway put it well:
I always think that if you deal with extremely emotional, even melodramatic, subject matter, as I constantly do, the best way to handle those situations is at a sufficient remove. It’s like a doctor and a nurse and a casualty situation. You can’t help the patient and you can’t help yourself by emoting.
And the primary difference, aside from the stakes involved, is that the novelist is constantly asked, like the surgeon in the famous brainteaser, to operate on his or her own child.
Closely allied to the concept of surgical detachment is that of a particular intuition, the kind that comes after craft has been internalized to the point where it no longer needs to be consciously remembered. As Wilfred Trotter wrote: “The second thing to be striven for [by a doctor] is intuition. This sounds an impossibility, for who can control that small quiet monitor? But intuition is only inference from experience stored and not actively recalled.” Intuition is really a way of reaching a conclusion after skipping over the intermediate steps that rational thought requires—or what Robert Graves calls proleptic thinking—and it evolved as a survival response to situations where time is at a premium. Both surgeons and artists are called upon to exercise uncanny precision at moments of the highest tension, and the greater the stress, the greater the exactitude required. As John Ruskin puts it:
There is but one question ultimately to be asked respecting every line you draw: Is it right or wrong? If right, it most assuredly is not a “free” line, but an intensely continent, restrained and considered line; and the action of the hand in laying it is just as decisive, and just as “free” as the hand of a first-rate surgeon in a critical incision.
Surgeons, of course, are as human as anybody else. In an opinion piece published last year in the New York Times, the writer and cardiologist Sandeep Jauhar argued that the widespread use of surgical report cards has had a negative impact on patient care: skilled surgeons who are aggressive about treating risky cases are penalized, or even stripped of their operating privileges, while surgeons who play it safe by avoiding very sick patients maintain high ratings. It isn’t hard to draw a comparison to fiction, where a writer who consistently takes big risks can end up with less of a career than one who sticks to proven material. (As an unnamed surgeon quoted by Jauhar says: “The so-called best surgeons are only doing the most straightforward cases.”) And while it may seem like a stretch to compare a patient of flesh and blood to the fictional men and women on which a writer operates, the stakes are at least analogous. Every project represents a life, or a substantial part of one: it’s an investment of effort drawn from the finite, and nonrenewable, pool of time that we’ve all been granted. When a novelist is faced with saving a manuscript, it’s not just a stack of pages, but a year of one’s existence that might feel like a loss if the operation isn’t successful. Any story is a slice of mortality, distilled to a physical form that runs the risk of disappearing without a trace if we can’t preserve it. And our detachment here is precious, even essential, because the life we’ve been asked to save is our own.
In The Biographical Dictionary of Film, David Thomson says of Tuesday Weld: “If she had been ‘Susan Weld’ she might now be known as one of our great actresses.” The same point might hold true of George Michael, who was born Georgios Kyriacos Panayiotou and chose a nom de mike—with its unfortunate combination of two first names—that made him seem frothy and lightweight. If he had called himself, say, George Parker, he might well have been regarded as one of our great songwriters, which he indisputably was. In the past, I’ve called Tom Cruise a brilliant producer who happened to be born into the body of a movie star, and George Michael had the similar misfortune of being a perversely inventive and resourceful recording artist who was also the most convincing embodiment of a pop superstar that anybody had ever seen. It’s hard to think of another performer of that era who had so complete a package: the look, the voice, the sexuality, the stage presence. The fact that he was gay and unable to acknowledge it for so long was an undeniable burden, but it also led him to transform himself into what would have been almost a caricature of erotic assertiveness if it hadn’t been delivered so earnestly. Like Cary Grant, a figure with whom he might otherwise seem to have little in common, he turned himself into exactly what he thought everyone wanted, and he did it so well that he was never allowed to be anything else.
But consider the songs. Michael was a superb songwriter from the very beginning, and “Everything She Wants,” “Last Christmas,” “Careless Whisper,” and “A Different Corner,” all of which he wrote in his early twenties, should be enough to silence any doubts about his talent. His later songs could be exhausting in their insistence on doubling as statements of purpose. But it’s Faith, and particularly the first side of the album and the coda of “Kissing a Fool,” that never fails to fill me with awe. It was a clear declaration that this was a young man, not yet twenty-five, who was capable of anything, and he wasn’t shy about alerting us to the fact: the back of the compact disc reads “Written, Arranged, and Produced by George Michael.” In those five songs, Michael nimbly tackles so many different styles and tones that it threatens to make the creation of timeless pop music seem as mechanical a process as it really is. A little less sex and a lot more irony, and you’d be looking at as skilled a chameleon as Stephin Merritt—which is another comparison that I didn’t think I’d ever make. But on his best day, Michael was the better writer. “One More Try” has meant a lot to me since the moment I first heard it, while “I Want Your Sex” is one of those songs that would sound revolutionary in any decade. When you listen to the Monogamy Mix, which blends all three sections together into a monster track of thirteen minutes, you start to wonder if we’ve caught up to it even now.
These songs have been part of the background of my life for literally as long as I can remember—the music video for “Careless Whisper” was probably the first one I ever saw, except maybe for “Thriller,” and I can’t have been more than five years old. Yet I never felt like I understood George Michael in the way I thought I knew, say, the Pet Shop Boys, who also took a long time to get the recognition they deserved. (They also settled into their roles as elder statesmen a little too eagerly, while Michael never seemed comfortable with his cultural position at any age.) For an artist who told us what he thought in plenty of songs, he remained essentially unknowable. Part of it was due to that glossy voice, one of the best of its time, especially when it verged on Alison Moyet territory. But it often seemed like just another instrument, rather than a piece of himself. Unlike David Bowie, who assumed countless personas that still allowed the man underneath to peek through, Michael wore his fame, in John Updike’s words, like a mask that ate into the face. His death doesn’t feel like a personal loss to me, in the way that Bowie’s did, but I’ve spent just about as much time listening to his music, even if you don’t count all the times I’ve played “Last Christmas” in an endless loop on Infinite Jukebox.
In the end, it was a career that was bound to seem unfinished no matter when or how it ended. Its back half was a succession of setbacks and missed opportunities, and you could argue that its peak lasted for less than four years. The last album of his that I owned was the oddball Songs from the Last Century, in which he tried on a new role—a lounge singer of old standards—that would have been ludicrous if it hadn’t been so deeply heartfelt. It wasn’t a persuasive gesture, because he didn’t need to sing somebody else’s songs to sound like part of the canon. That was seventeen years ago, or almost half my lifetime. There were long stretches when he dropped out of my personal rotation, but he always found his way back: “Wake Me Up Before You Go-Go” even played at my wedding. “One More Try” will always be my favorite, but the snippet that has been in my head the most is the moment in “Everything She Wants” when Michael just sings: Uh huh huh / Oh, oh / Uh huh huh / Doo doo doo / La la la la… Maybe he’s just marking time, or he wanted to preserve a melodic idea that didn’t lend itself to words, or it was a reflection of the exuberance that Wesley Morris identifies in his excellent tribute in the New York Times: “There aren’t that many pop stars with as many parts of as many songs that are as exciting to sing as George Michael has—bridges, verses, the fillips he adds between the chorus during a fade-out.” But if I were trying to explain what pop music was all about to someone who had never heard it, I might just play this first.
Forty years ago, the cinematographer Garrett Brown invented the Steadicam. It was a stabilizer attached to a harness that allowed a camera operator, walking on foot or riding in a vehicle, to shoot the kind of smooth footage that had previously only been possible using a dolly. Before long, it had revolutionized the way in which both movies and television were shot, and not always in the most obvious ways. When we think of the Steadicam, we’re likely to remember virtuoso extended takes like the Copacabana sequence in Goodfellas, but it can also be a valuable tool even when we aren’t supposed to notice it. As the legendary Robert Elswit said recently to the New York Times:
“To me, it’s not a specialty item,” he said. “It’s usually there all the time.” The results, he added, are sometimes “not even necessarily recognizable as a Steadicam shot. You just use it to get something done in a simple way.”
Like digital video, the Steadicam has had a leveling influence on the movies. Scenes that might have been too expensive, complicated, or time-consuming to set up in the conventional manner can be done on the fly, which has opened up possibilities both for innovative stylists and for filmmakers who are struggling to get their stories made at all.
Not surprisingly, there are skeptics. In On Directing Film, which I think is the best book on storytelling I’ve ever read, David Mamet argues that it’s a mistake to think of a movie as a documentary record of what the protagonist does, and he continues:
The Steadicam (a hand-held camera), like many another technological miracle, has done injury; it has injured American movies, because it makes it so easy to follow the protagonist around, one no longer has to think, “What is the shot?” or “Where should I put the camera?” One thinks, instead, “I can shoot the whole thing in the morning.”
This conflicts with Mamet’s approach to structuring a plot, which hinges on dividing each scene into individual beats that can be expressed in purely visual terms. It’s a method that emerges naturally from the discipline of selecting shots and cutting them together, and it’s the kind of hard work that we’re often tempted to avoid. As Mamet adds in a footnote: “The Steadicam is no more capable of aiding in the creation of a good movie than the computer is in the writing of a good novel—both are labor-saving devices, which simplify and so make more attractive the mindless aspects of creative endeavor.” The casual use of the Steadicam seduces directors into conceiving of the action in terms of “little plays,” rather than in fundamental narrative units, and it removes some of the necessity of disciplined thinking beforehand.
But it isn’t until toward the end of the book that Mamet delivers his most ringing condemnation of what the Steadicam represents:
“Wouldn’t it be nice,” one might say, “if we could get this hall here, really around the corner from that door there; or to get that door here to really be the door that opens on the staircase to that door there? So we could just move the camera from one to the next?”
It took me a great deal of effort and still takes me a great deal and will continue to take me a great deal of effort to answer the question thusly: no, not only is it not important to have those objects literally contiguous; it is important to fight against this desire, because fighting it reinforces an understanding of the essential nature of film, which is that it is made of disparate shots, cut together. It’s a door, it’s a hall, it’s a blah-blah. Put the camera “there” and photograph, as simply as possible, that object. If we don’t understand that we both can and must cut the shots together, we are sneakily falling victim to the mistaken theory of the Steadicam.
This might all sound grumpy and abstract, but it isn’t. Take Birdman. You might well love Birdman—plenty of viewers evidently did—but I think it provides a devastating confirmation of Mamet’s point. By playing as a single, seemingly continuous shot, it robs itself of the ability to tell the story with cuts, and it inadvertently serves as an advertisement of how most good movies come together in the editing room. It’s an audacious experiment that never needs to be tried again. And it wouldn’t exist at all if it weren’t for the Steadicam.
But the Steadicam can also be a thing of beauty. I don’t want to discourage its use by filmmakers for whom it means the difference between making a movie under budget and never making it at all, as long as they don’t forget to think hard about all of the constituent parts of the story. There’s also a place for the bravura long take, especially when it depends on our awareness of the unfaked passage of time, as in the opening of Touch of Evil—a long take, made without benefit of a Steadicam, that runs the risk of looking less astonishing today because technology has made this sort of thing so much easier. And there’s even room for the occasional long take that exists only to wow us. De Palma has a fantastic one in Raising Cain, which I watched again recently, that deserves to be ranked among the greats. At its best, it can make the filmmaker’s audacity inseparable from the emotional core of the scene, as David Thomson observes of Goodfellas: “The terrific, serpentine, Steadicam tracking shot by which Henry Hill and his girl enter the Copacabana by the back exit is not just his attempt to impress her but Scorsese’s urge to stagger us and himself with bravura cinema.” The best example of all is The Shining, with its tracking shots of Danny pedaling his Big Wheel down the deserted corridors of the Overlook. It’s showy, but it also expresses the movie’s basic horror, as Danny is inexorably drawn to the revelation of his father’s true nature. (And it’s worth noting that much of its effectiveness is due to the sound design, with the alternation of the wheels against the carpet and floor, which is one of those artistic insights that never grows dated.) The Steadicam is a tool like any other, which means that it can be misused. It can be wonderful, too. But it requires a steady hand behind the camera.
Note: Details are given below for the solution to today’s New York Times crossword.
A week ago, I subscribed to the New York Times crossword puzzle. I’m still not at a point where I can read the news for more than a few minutes without becoming consumed by rage, so I’ve been looking for something else to fill my spare time. Fortunately, I’ve got plenty of work to do, but there are always gaps, and you can only read The A.V. Club or even The Lisle Letters for so long. The crossword seemed like a pretty good idea, especially when I caught a deal on the price of an annual subscription—it’s just twenty bucks for the entire year. And it felt a bit like coming home. There was a brief period about a decade ago in which I loved doing crosswords: I could reliably finish a Monday puzzle in two to three minutes and a Saturday puzzle in under half an hour, and I even attended the American Crossword Puzzle Tournament in 2008, after they switched venues to a hotel within walking distance of my apartment in Brooklyn. At my peak, I was studying lists of the most common obscure words (ETUI, ASTA, and the rest), venturing into the world of cryptics and acrostics, and even constructing a few puzzles of my own, including a notoriously difficult one that was given out to the guests at my wedding. Eventually, I burned out, and since I don’t have much time for hobbies, I hadn’t gone back to it until a few days ago.
So how did it feel? Picking up a crossword puzzle again after so long is sort of like tuning into a soap opera that you haven’t watched since college: you’re amazed that they’ve kept cranking them out in the meantime, and astonished at how little has changed. All the stock clues and answers greeted me like old friends, and I note that the puzzle still leans heavily on such hoary fallback options as MAITAI, NEHI, and AFLAC. The only difference, really, is me—I’m rusty. I’m lucky if I finish a Monday puzzle in five minutes, let alone three, and I’ll often find an error or two after I’m done. Fortunately, I’m more conscious of my limits than I used to be. When I attended the tournament all those years ago, I arrived, as I’m sure many novices do, with the secret hope that maybe I’d surprise everyone and win the whole thing. It was a dream that lasted roughly halfway through my first puzzle, when I saw that the solvers around me were finishing before I’d even had a chance to read through a third of the clues. It’s a humbling experience. Crosswords, in their oddball way, are an objective test of skill, at least for the community of people who have spent an inordinate amount of time solving and thinking about them. If the same handful of names tend to end up in the winner’s circle, it’s because there’s minimal luck involved, at least when you average it out over seven puzzles.
And while I’m certainly not the first person to note this, a crossword embodies many of the tools, in miniature form, that we use to solve larger problems. When I first tackle a puzzle—like the one in today’s paper by Molly Young—I begin by scanning clues quickly, starting at 1-Across, until I find a way in. Here, it happened to be “Preceder of Barbara or Clara” (SANTA), just because it was the first obvious one I saw. It’s an arbitrary starting point that serves as the seed from which a unique route through the crossword unfolds. (No two solving paths are the same, although it would be interesting to track the processes of expert solvers and see if any patterns emerge.) I already had a hunch, after reading the clue “New push-up bra from Apple?”, that all of the theme answers would begin with the letter “I,” and fortunately, I was right. After getting ILIFT, the northwest corner was a piece of cake. I caught a lucky break with “British P.M. between Churchill and Macmillan,” because I’ve been watching Jeremy Northam play Anthony EDEN on The Crown. The rest unfolded organically, following the path of least resistance, until it finally encountered a few rough spots that didn’t succumb right away. Today, for me, these were the northeast and southeast corners. At that point, you just have to stare at the same few clues, cycling between them until something clicks, and after I realized that “It can help you get a leg up” was OTTOMAN, I was basically done.
In the end, I didn’t make any mistakes, and my solving time was well within my historical average. (I won’t say how long it took me, because sharing your crossword times is like telling somebody how much money you make: anyone who solved it more quickly than you did won’t care, and anyone who took longer will just get annoyed.) And the process is roughly analogous to my approach to tackling any creative problem. I look for the easiest way in, try for one good guess toward the beginning, follow the most intuitive route, seek out catalysts, fall back on experience and old tricks, and rely on luck for the rest. If this were a Friday or Saturday puzzle, it would also include a much larger component of brute force, wrong turns, and frustration. I don’t necessarily think that solving crosswords makes you more creative: it’s such a hermetic, closed universe, with its own rules, that it doesn’t open onto anything more. And the correlation between skill here and meaningful talent elsewhere is unreliable at best. But as short-term, self-contained, single-serving reminders of those basic capabilities, crosswords have real value. They aren’t the only puzzles in life that you should try to solve, and when pursued too far, they can lead to a dead end—like any hobby. Still, at a time when so many dilemmas loom with no obvious solution, it’s consoling, maybe even sustaining, to spend time on puzzles that you know have an answer, in a world defined by black and white.