What It Means To Write A Play In The Age Of AI


Ayad Akhtar’s brilliant new play, McNeal, currently at the Lincoln Center Theater, is transfixing in part because it tracks without flinching the disintegration of a celebrated writer, and in part because Akhtar goes to a place that few writers have visited so effectively—the very near future, in which large language models threaten to undo our self-satisfied understanding of creativity, plagiarism, and originality. And also because Robert Downey Jr., performing onstage for the first time in more than 40 years, perfectly embodies the genius and brokenness of the title character.


I’ve been in conversation for quite some time with Akhtar, whose play Disgraced won the Pulitzer Prize in 2013, about generative artificial intelligence and its impact on cognition and creation. He’s one of the few writers I know whose position on AI can’t be reduced to the (understandable) plea For God’s sake, stop threatening my existence! In McNeal, he not only suggests that LLMs might be nondestructive utilities for human writers; he also deployed them as he wrote (he’s used many of them, ChatGPT, Claude, and Gemini included). To my chagrin and astonishment, they seem to have helped him make an even better play. As you will see in our conversation, he doesn’t believe that this should be controversial.

In early September, Akhtar, Downey, Bartlett Sher—the Tony Award winner who directed McNeal—and I met at Downey’s home in New York for what turned out to be an amusing, occasionally frenetic, and sometimes even borderline profound discussion of the play, its origins, the flummoxing issues it raises, and, yes, Avengers: Age of Ultron. (Oppenheimer, for which Downey won an Academy Award, also came up.) We were joined intermittently by Susan Downey, Robert’s wife (and producing partner), and the person who believed that Akhtar’s play would tempt her husband to return to the stage. The conversation that follows is a condensed and edited version of our sprawling discussion, but I think it captures something about art and AI, and it certainly captures the exceptional qualities of three people, writer, director, and actor, who are operating at the pinnacle of their trade, without fear—perhaps without enough fear—of what is inescapably coming.


Jeffrey Goldberg: Did you write a play about a writer in the age of AI because you’re trying to figure out what your future might be?

Ayad Akhtar: We’ve been living in a regime of automated cognition, digital cognition, for a decade and a half. With AI, we’re now seeing a late downstream effect of that, and we think it’s something new, but it’s not. Technology has been transforming us now for quite some time. It’s transforming our neurochemistry. It’s transforming our societies, you know, and it’s making our emotionality within the social space different as well. It’s making us less capable of being bored, less willing to be bored, more willing to be distracted, less interested in reading.

In the midst of all this, what does it mean to be a writer trying to write in the way that I want to write? What would the new technologies mean for writers like Saul Bellow or Philip Roth, who I adore, and for the richness of their language?

Goldberg: Both of them inform the character of McNeal.

Akhtar: There are many writers inside McNeal—older writers of a certain generation whose work speaks to what is eternal in us as humans, but who maybe don’t speak as much to what is changing around us. I was actually thinking of Wallace Stevens in the age of AI at some point—“The Auroras of Autumn.” That poem is about Stevens eyeing the end of his life by the dazzling, otherworldly light of the northern lights. It’s a poem of extraordinary beauty. In this play, that dazzling display of natural wonder is actually AI. It’s no longer the sublime of nature.

Goldberg: Were you picturing Robert as you wrote this character?

Akhtar: I write to an ideal; it’s not necessarily a person.

Robert Downey Jr.: I feel that me and ideal are synonymous.

Akhtar: Robert’s embodiment of McNeal is in some ways much richer than what I wrote.

Downey: I have a really heavy, heavy allergy to paper. I’m allergic to things written on paper.

Akhtar: As I’ve discovered!

Downey: But the writing was transcendent. The last time that happened, I was reading Oppenheimer.

Goldberg: There’s Oppenheimer in this, but there’s also Age of Ultron, right?

Downey: Actually, I was thinking about that while I was reading this. And I’ll catch you guys up in the aggregate. I’m only ever doing two things: Either I’m trying to avoid threats or I’m seeking opportunities. This one is the latter. And I was thinking, Why would I be reading this? Because, I mean, I’ve been a bit of an oddball, and I was thinking, Why is this happening to me; why is this play with me? And I’m having this reaction, and it took me right back to Paul Bettany.

So that you guys understand what’s going on, this is the second Avengers film, Age of Ultron, and Bettany was playing this AI, my personal butler. The butler had gone through these iterations, and [the writer and director] Joss Whedon decided, “Let’s have you become a sentient being, a sentient being that is created from AI.” So first Bettany is the voice, and then he became this purple creature. And then there was this day when Bettany had to do a kind of soliloquy that Joss had written for him, as we are all introduced to him, wondering, Is he a threat? Can we trust him? Is he going to destroy us? And there comes this moment when we realize that he’s just seeking to understand, and be understood. And this was the moment in the middle of this genre film when we all stopped and thought, Wait, I think we might actually be talking about something important.

Goldberg: Bart, what are you exploring here?

Bartlett Sher: I’m basically exploring the deep tragedy of the life of Jacob McNeal. That’s the central issue. AI and everything around it, these are delivery systems to that exploration.

Akhtar: Robert has this wonderful moment in the play, the way he does it, in which he’s arguing for art in this very complicated conversation with a former lover. And it gets to one of the essences of the play, which is that this is an attempt to defend art even if it’s made by an indefensible person. Because in the end, human creation is still superior, and none of us is perfect. So the larger conversation around who gets to write, the morality of writing, all of that? In a way, it’s kind of emerging from that.

Goldberg: I can’t say for sure, but I think this is the first play that’s simultaneously about AI and #MeToo.

Downey: And identity and intergenerational conflict and cancel culture and misunderstanding and subintentional contempt and unconscious bias.

Goldberg: Are there any third rails you don’t touch?

Akhtar: McNeal is the third rail. He’s a vision of the artist in opposition to society. Not a flatterer of the current values, but someone who questions them: “That’s a lie. That’s not true.”

Goldberg: The timing is excellent.

Downey: In movies, you always miss the moment, or you are preempted by something. With Oppenheimer, we happened to be coming out right around the time of certain other world events, but we couldn’t have known. With this, we are literally first to market. Theater is the shortest distance between two points. You have something urgent to say, and you don’t dawdle, and you have a space like Lincoln Center that is not interested in the bottom line, but interested in the form. And you have Ayad inspiring Bart, and then you get me, the bronze medalist. But I’m super fucking motivated, because I never get this sense of immediacy and emergence happening in real time.

Goldberg: Let’s talk for a minute about the AI creative apocalypse, or if it’s a creative apocalypse at all. I prompted Claude to write a play just like McNeal, with the same plot turns and characters as your play, and I asked it to write it in your style. What emerged was a play called The Plagiarist’s Lament. I went back and forth with Claude for a while, mainly to try to get something less hackish. But in the end, I failed. What came out was something like an Ayad play, except it was bad, not good.

Akhtar: But here’s the thing. You’re just using an off-the-shelf product, not leading-edge story technology that is now becoming increasingly common in certain circles.

Goldberg: So don’t worry about today, but tomorrow?

Akhtar: The technology’s moving quickly, so it’s a reality. And worrying? I’m not trying to predict the future. And I’m also certainly not making a claim about whether it’s good or bad. I just want to understand it, because it’s coming.

Downey: To borrow from recent experience, I think we may be at a post-Trinity, pre-Hiroshima, pre-Nagasaki moment, though some people would say that we’re just at Hiroshima.

Goldberg: Hiroshima being the first real-world use of ChatGPT?

Downey: Trinity showed us that the bomb was purpose-built, and Hiroshima was showing us that the purpose was, possibly, not entirely necessary, but that it also didn’t matter, because, historically, it had already happened.

Goldberg: Right now, I’m assuming that part of the problem I had with the LLM was that I was giving it bad prompts.

Downey: One issue is that LLMs don’t get bored. We’ll be running something and Bart will go, “I’ve seen this before. I’ve done this before.” And then he says, “How can I make this new?”

The people who move culture forward are usually the high-ADD folks that we’ve tended to think either need to be medicated or all go into one line of work. They have a low threshold for boredom. And because they have this low threshold, they say, “I don’t want to do this. Do something different.” And it’s almost just to keep themselves awake. But what a great gift for creativity.

Goldberg: The three of you represent the acting side, and directing, and writing. Who’s in the most existential danger here from AI?

Downey: Anyone but me.

Akhtar: The Screen Actors Guild has dealt with the image-likeness issue in a meaningful way.

Downey: We’ve made the most noise—we, SAG—and we’re the most dramatic about everything. I remember when I was doing Chaplin, the talk was about how significant the end of the silent era was.

Goldberg: Is this the same level of disruption?

Downey: I doubt it, but not because Claude can’t currently find his ass with both hands. There are versions that are going to be significantly more advanced. But technologies that people have argued would impede art and culture have often assisted and enhanced. So is this time different? That’s what we’re always worrying about. I live in California, always wondering, Is that little rumble in the kitchen, is this the big one?

Sher: For me, I think directing is very plastic. It requires integrating a lot of different levels of activity. So actually finding a way to process that into a computer’s thinking, and actually having it work in three dimensions in terms of organizing and developing, seems very difficult to me. And I essentially do the work of the interpreter and synthesizer.

A machine can tell you what to do, but it can’t interact and connect and pull together the different strands.

Akhtar: There’s a leadership dimension to what Bart does. I mean, you wouldn’t want a computer doing that.

Sher: This could sound geeky, but what is the distinguishing quality of making art? It is to participate in something uniquely human, something that can’t be done any other way.

So if the Greeks are gathering on the hillside because they are building a space where they can hear their stories and participate in them, that’s a uniquely human experience.

Akhtar: I do think that there is something irreducibly human about the theater, and that probably over time, it is going to continue to demonstrate its value in a world where virtuality is increasingly the norm. The economic problem for the theater has been that it happens only here and only now. So it’s always been hard to monetize.

Goldberg: But I have two words for you: ABBA Voyage. I mean, it’s an extraordinarily popular show that uses CGI and motion capture to give the experience of liveness without ABBA actually being there. Not precisely theater, but it is scalable, seemingly live technology.

Downey: Strangely, this is the real trifecta: IP, technology, and taste. I think of this brand of music—which, you know, it’s not my bag, but I still really admired that somebody was passionate about that and then purpose-built the venue. And then they said, “We’re not going to go for ‘Oh my God, that looks so real.’ We’re actually going to go for a more two-dimensional effect that is rendered in a way in which the audience can complete it themselves.”

Akhtar: ABBA Voyage is an exception. But it’s still not live theater.

Sher: It’s also not possible without the ABBA experience that preceded it. It’s an augmentation; it’s not original.

Goldberg: In terms of writing, Ayad, I did what you suggested I do and asked Claude to critique its own writing, and it was actually pretty good at that. I felt like I was actually talking with someone. We were in a dialogue about pacing, clarity, word choice.

Sher: But it has no intuition at all, no intuition for Ayad’s mindset in the middle of this activity, and no understanding of how he’s seeing it.

Downey: It does have context, and context is critical. I think it’s going to start quickly modeling all of those things that we hold dear as subtleties that are unassailable. It’s going to see what’s missing in its sequence, and it’s going to focus all of its cloud-bursting energy on that.

Goldberg: It might be the producers or the studios who are in trouble, because the notes are delivered sequentially, logically, and without defensiveness. Do you think that these technologies can give better notes than the average executive?

Akhtar: I know producers in Hollywood who are already using these tools for their writers. And they’re using them empirically, saying, “This is what I think. Let’s see what the AI thinks.” And it turns out that the AI is actually pretty good at understanding certain forms. If you’ve got a corpus of texts—like, say, Law & Order; you’ve got many, many seasons of that, or you’ve got many seasons of a children’s show—those are codified forms. And the AI, if it has all those texts, can understand how words are shaped in that form.

Goldberg: So you could upload a thousand Law & Order scripts and Claude could come up with the thousand and first.

Akhtar: About a year and a half ago, when I started playing with ChatGPT, the first thing that I started to see were processes of language that reminded me of reading Shakespeare. No writer is better at presenting context than Shakespeare. What I mean by that is Shakespeare sets everything quickly in motion. It’s almost like a chess game—you’ve got pieces, and you want to get them out as quickly as possible so you have options. Shakespeare sets the options out quickly and starts creating variations. So there is a series of words or linguistic tropes for every single play, every poem cycle, every sonnet. They all have their universe of linguistic context that is being deployed and redeployed and redeployed. And it is in that play of language that you find an accretion of meaning. It was not quite as thrilling to see the chatbot do it, but it was actually very interesting to recognize the same process.


Goldberg: Shakespeare was his own AI.

Downey: Because he performed as a younger man, it was all uploaded into Shakespeare’s system. So he was so familiar with the template, and he had all this experience. And similarly, all of these LLMs are in this stage where they are just beginning to be taken seriously. It’s like we’re pre–bar mitzvah, but these are sharp kids.

Goldberg: Would you use ChatGPT to write an entire piece?

Sher: Soon we’ll be having conversations about whether Claude is a better artist than ChatGPT. Could you imagine people saying, “Well, I’m not going to see that play, because it was written by this machine; I want to see this one, because it’s written by Gemini instead”?

Goldberg: Unfortunately, I can easily imagine it.

Akhtar: I’m not sure that I would use an LLM to write a play, because they’re just not very good at doing that yet, as you discovered in your own play by Claude. I don’t think they’re good enough to be making the kinds of decisions that go into making a work of art.

Goldberg: But you’re teaching the tool how to get better.

Akhtar: So what? They’ve already gone to school on my body of work.

Goldberg: So what? So what? Six hundred years of Gutenberg, and the printing press never made decisions on its own.

Akhtar: But we’re already within this regime where power and monetized scale exist within the hands of very few. We’re doing it every day with our phones; you’re teaching the machine everything about you and your family and your desires. This is the paradigm for the 21st century. All human activity is passing through the hands of very few people and a lot of machines.

Goldberg: McNeal is about lack of control.

Akhtar: It is. I’m just making the point that we’re not really in a different regime of power with AI. It may be even more concentrated and even more consequential, but at the end of the day, to participate in the public space in the 21st century is to participate in this structure. That’s just what it is. We don’t have an alternative, because our government has not regulated this.

Goldberg: You see the LLM as a collaborator in some ways. Where will the red line be for writers, between collaboration and plagiarism?

Akhtar: From my perspective, there are any number of artists we could look at, but the one that I would probably always spend the most time looking at is Shakespeare, and it’s tough to say that he wasn’t copying. As McNeal explains at one point in the play, King Lear shares 70 percent of its words with a previous play called King Leir, which Shakespeare knew well and used to write Lear. And it’s not just Leir. There’s that great scene in Lear where Gloucester is led to this plain and told it’s a cliff over which he’s going to jump, and that subplot is taken right out of Sir Philip Sidney. It may reflect deeper processes of cognition. It may reflect, as Bart has said, how we imitate in order to learn. All of that is just part of what we do. When that gets married to a corporate-ownership model, that is a separate issue, something that will have to get worked out over time, socially and legally. Or not, if our legislators don’t have the will to do so.

Goldberg: The final soliloquy of the play—no spoilers here—is augmented by AI.

Akhtar: This has really been a fascinating collaboration. Because I wanted some part of the play to actually be meaningfully generated by ChatGPT or some large language model—Gemini, Claude. I tried them all. And I wanted to do it because it was part of what the play was about. But the LLMs had a tough time actually delivering the goods until this week. I’ve finally had some experiences now, after many months of working with them, that are bearing fruit.

I wanted the final speech to have a quality of magic to it that resembles the kind of amazement that I knew you had felt working with the model, and that I have sometimes felt when I see the language being generated. I want the audience to have that experience.

Sher: You know, I think the problem you were facing could have been with any of your collaborators. We just had this new collaborator to help with that moment.

Goldberg: You’re blowing my mind.

Akhtar: It’s not really that controversial.

Goldberg: Yes it is. It’s totally controversial.

Downey: Well, let’s find out!

Goldberg: It’s more of a leap than you guys think.

Akhtar: It’s a play about AI. It stands to reason that I was able, over the course of many months, to finally get the AI to give me something that I could use in the play.

Downey: You know what the leap was like? A colicky little baby finally gave us a big ol’ burp.

Akhtar: That’s exactly right. That’s what happened. A lot of unsatisfying work, and then, unprompted, it finally came up with a brilliant final couplet! And that’s what I’m using for the end of the play’s final speech.

Goldberg: Amazing, and threatening.

Sher: I just can’t imagine a world in which ChatGPT could take all experience and unify it with Ayad’s interest in beauty and meaning and his obsession with classical tragedy and pull all those forces together with emotion and feeling. Because no matter how many times you prompted it, you’re still going to get The Pestilential Plagiarist, or whatever it’s called.

Downey: The reason that we’re all sitting here right now is because this motherfucker, Ayad, is so searingly sophisticated, but also on occasion—more than occasionally—hot under the collar. My new favorite cable channel is called Ayad Has Fucking Had It. He’s like the most collaborative superintelligence you will ever come across, and therefore he’s letting all this slack out to everyone around him, but once in a while, if this intelligence is entirely unappreciated for hours or days at a time, he will flare. He’ll just remind us that he can break the sound barrier if he wants to. And I get chills from that. And that’s why we’re here. It’s the human thing.

Akhtar: It’s not new for humans to use tools.

Sher: Are we going to be required to upload a system of ethics into the machines as they get more and more powerful?

Downey: Too late.

Goldberg: That’s what they promise in Silicon Valley, alignment with human values.

Downey: Two years ago was the time to do something.

Akhtar: You guys are thinking big. But I just don’t know how this is going to play out. I don’t know what it is. I’m just interested in what I’m experiencing now and in working with the technology. What’s the experience I’m having now?

Goldberg: There’s a difference between a human hack and an excellent human writer. The human hack doesn’t know that they’re bad.

Downey: This is a harebrained rabbit hole where we could constantly keep thinking of more and more ramifications. Another issue here is that certain great artists do something that most people would labor an entire life or career to come close to, and the second they’re done with it, they have contempt for it, because they go, “Eh, that’s not my best.”

Akhtar: I recognize someone in that.

Downey: All I’m saying is that I just want the feeling of those sparks flying, that new neural pathway being forced. I want to push the limits. It’s that whole thing of pushing limits. When I feel good, when I can tell Bart is kicking me, when Ayad is just lighting up, and when I’m realizing that I just got a note that revolutionized the way I’m going to try to portray something, you go, “Ooh!” And even if it’s old news to someone else, for me, it’s revolutionary.

Akhtar: Another way of putting this, what Robert is saying, is that what he’s engaged in is not problem-solving, per se. It’s not that there’s an identified problem that he is trying to solve. This is how a computer is often thinking, with a gamification sort of mindset. For Robert, there’s a richness of the present for him as he’s working that is identifying possibilities, not problems.

Sher: I’ve thought a lot about this, trying to understand the issue of GPT and creativity, and I’m a lot less worried now, because I feel that the depth of the artistic process in the theater isn’t replicable.

The amalgam of human experience and emotion and feeling that passes through artists is uniquely human and not capturable. Word orders can be taken from all kinds of sources. They can be imitated; they can be replicated; they can be reproduced in different ways. But the essential activity of what we do here in this way, and what we build, has never been safer.

Downey: And if our job is to hold the mirror up to nature, this is now part of nature. It is now part of the firmament. Nature is now inclusive of this. We’re onstage and we’re reflecting this back to you. What do you see? Do you see yourself within this picture?


This article appears in the November 2024 print edition with the headline “The Playwright in the Age of AI.”


