CGI, Child Actors, and Use Cases for AI
A Conversation with Aimee Walleston
Last winter, I joined artist and dear friend Alexey Yurenev and Dean Emeritus of the International Center of Photography Fred Ritchin on a panel at the school to discuss the future of photography and its uneasy relationship with the newest suite of emerging generative AI tools. In the audience that night was Aimee Walleston. She reached out shortly after, and we began meeting regularly over drinks, at lectures, and at exhibitions to discuss the state of “things.” From the start, I recognized in her what I cherish most in people: a willingness to go deep and be destinationless in conversation. It was at our first meeting at Bar Laika that I brought up the idea of risk. Those early exchanges with Aimee, and with some of the other peers whom I’ll feature here, seeded the vision for this Substack.
Aimee brings her full brain and heart to her work as a critic, essayist, and educator. Her writing has appeared in Art in America, Real Life, and The Brooklyn Rail, and she is a contributing editor at Ocula. She teaches at the International Center of Photography and Sotheby’s Institute.
In our conversation, she reflects on the glut of internet writing, the challenges of teaching in the age of AI, and the value of thinking slow and writing fast.
D: We can talk about this idea of risk in a bunch of different ways, be it the ethical, economic, or political ramifications of vision technologies or AI. But I was thinking about it more in terms of what’s at stake in someone’s practice to make good work, and what that involves. The answer is going to be different for each of us, of course. As a writer, critic, and educator, what’s at stake for you?
A: I find that my take on these things leans toward being curious first. I think people expect me to be horrified that people use ChatGPT for their writing, but not all writing requires a brain. For my creative writing students, however, I don’t know why you would want to cheat on an assignment, why you would think so little of your own creativity or your own intellect. With art, it’s a completely different thing for me. I love how AI can be used in weird, provocative ways. For instance, Brady Corbet, the director of The Brutalist, used AI to augment the Hungarian accents of the American actors in the movie, as well as for some of the shots of architecture. We’re already okay with CGI in movies, so to me this seems like a reasonable use case.
D: Right. I suspect there’s a conflation of issues here, and it’s reasonable that there would be, because it’s a compound problem. The issue of creativity, or whether AI is creative, is wrapped up in this idea of labor and value, which is something I find very fascinating if we’re talking about the more “commercial” arts, like CGI. Ultimately, what a lot of these generative tools will end up doing, once we figure out how to fine-tune them, is replacing older, more time-consuming and costly workflows like those in the visual effects industry. Of course, that’s frustrating for CGI artists, especially if actual humans are cut out of the process in service of the bottom line.
You mentioned approaching new tools with a sense of curiosity, though, and that’s something I come back to as an artist. I feel like it’s actually my responsibility as a creative person to be curious about tools. Since I mostly work with photography, and the photographic apparatus is always changing, it falls to me to keep looking at and engaging with a medium that continues to change. That is not to say that I advocate for those changes, or tout them as being unproblematic.
A: Yes, well, I met you because you were giving that talk on AI at the International Center of Photography with Alexey Yurenev and Fred Ritchin, and I think that Fred in particular, as a longtime expert in documentary photography, takes a hard-line stance that AI is ruinous to documentary practice because the photograph needs to be a document of truth. But as somebody who is in that world as an educator, and also beyond it, because I’m more in the art world, I liked your perspective, I liked your work, and I sought out a friendship with you because your perspective was more interesting to me. I’m more sympathetic to the idea of using these tools as tools versus demonizing them as some kind of moral failing on the part of both the tool itself and the humans who use it or interact with it.
With that said, I think that AI is going to take away some jobs. As a writer, there is a certain kind of rote work that has dried up, and I believe that’s 100% a result of AI, but I don’t think you can make a career out of those jobs anyway. I don’t say this to diminish anyone who does make a career out of that kind of writing, be it copy, grant writing, and so on; I don’t want to diminish the effect that work drying up has. But tools change all the time. People say that we’re in the Fourth Industrial Revolution. Basically, we’re still in an age of industry, and industries change all the time.
AI has no control over us unless we let it have some control over us. That might seem like a very hubristic stance, but I really feel that way. I don’t understand people getting so upset over a human-made tool, and instead of taking the stance of being afraid of this thing, I would personally advocate that people ask: how could I learn to make friends with it? In Finland, I believe, they’re offering AI courses that are open to the entire country as part of a broader literacy initiative.
D: You’re hitting on something interesting regarding people who see value in change and those who are categorically afraid of it. Both stances have the potential to be problematic, but for different reasons. If you’re part of a working class that feels it has no agency within a changing system, that you’re always striving to stay ahead of the curve technologically, that can be exhausting and frustrating, especially in this particular climate in the United States, where there’s very little protection for workers. Generally speaking, the working class must sell their labor to survive, but the economy is designed to devalue what they do, their skills and services, which puts people in a very compromising position long term.
There’s a book I’ve been referencing a lot lately, The Promise of Access: Technology, Inequality, and the Political Economy of Hope, in which Daniel Greene argues that poverty continues to be framed as a technology problem, whereby if the underprivileged just had access to the right tools and skills, that alone could resolve poverty, rather than anyone looking at the superstructures of power and capital.
I think it would be useful to develop a regulatory body that considers how this is going to affect education and industry writ large and develops protections for citizens. I don’t have a lot of hope when it comes to the regulation required to protect people in the U.S. right now, but maybe I’ll be surprised.
Harvard’s AI Pedagogy Project is an initiative that works with scholars and experts to create teaching frameworks for educators to bring into the classroom. So there are resources for us, as individuals, to tackle the literacy issue. Unfortunately, when it comes to public schools, teachers have to rush to implement change, often with little institutional support, so I can imagine their frustration at having to go above and beyond what they already do to prepare their students.
A: I grew up in a working-class mill town that lost its textile industry, and while many people were left behind, my mom managed to pivot into the early digital economy as an account executive at a computer company, despite not having a college degree. Seeing her succeed in that world while others were crushed by the collapse taught me the importance of grit and adaptability. That’s why I believe we need to integrate AI into education, so kids learn in structured, meaningful ways instead of relying on social media or being left to fend for themselves.
D: This reminds me of what Kyla Scanlon, the economic commentator, was saying on The Ezra Klein Show recently. She mentioned how apps like DoorDash and Uber make life frictionless, and how we, as a society, are getting comfortable with a certain level of day-to-day ease. What does this mean for us when we’re accustomed to a frictionless life?
I’m of two minds. One is: yes, new tools are good, and there’s work that I want to offload to my tools. I’m horrible at math, for instance. I don’t do math longhand; I use a calculator (or, let’s be honest, my phone). Similarly, there are things that facilitate my creative shorthand. I use ChatGPT in the idea-generation phase. I need someone, or more precisely something, to conversationally walk through a thought I’m having. I’m not expecting ChatGPT to solve the problem for me. I actually think it’s wrong half the time; right now, it still feels like a blunt instrument for my purposes. But I like arguing with it. I prefer having something to come up against, and it’s the act of disagreeing that I find generative. I’m not asking it to solve my problems. I’m asking it to be present with me, as a kind of resistance by way of how it misses the mark, while I solve them.
To go back to what Kyla was saying, and what I present in many of my artist talks: friction is the point of being an artist. I don’t want someone to take that work away from me. So yes, I use AI tools in a variety of ways, but it’s always about folding them back into the thing that I love doing. The act of struggling through the process is precisely where knowledge and creativity stem from.
A: I love all of that. I think that’s something people crave, because they crave mastery over difficult things. How do you think you got to this place with it? Because, although you recognize the problems, you seem to have come to a very integrated and aligned perspective on this.
D: I’ve been complaining about digital tools for most of my adult life. Having worked in commercial post-production, I’ve always had a fraught relationship with technology. I lean into its limitations and try to break it as a way of forming my own opinions.
The conversation about AI’s influence on creativity is a bit of a red herring: it’s the easiest thing to argue about, and it overshadows more complex aspects of the technology, both problematic and promising, like the breakthroughs happening in medicine and science. The other day I went to a talk about AI being used to decode whale codas; stuff like this feels far more enriching than endless debates about whether AI makes “real” art.
A: Yeah, I agree. I’ve always had a wide lens on the world, wanting not just to learn from it, but to reframe it in new and innovative ways. That’s the real charm of art, and I try to carry that into my writing, editing, and teaching. But the past decade has been deeply polarized, and it feels like if you don’t take a side—on AI, politics, whatever—you’re accused of letting something bad happen. That mentality flattens complexity into binaries.
For example, one of my odd “bugaboos” is child actors. We ban kids from factory work, but it’s somehow fine to exploit them on movie sets. If AI could realistically generate child characters, maybe we wouldn’t need to put actual children in those situations. To me, that’s an interesting ethical use case—one of many. We’re living in a pluralistic, postmodern moment, and the upside is that multiple perspectives should be part of the conversation. When everything collapses into black-and-white thinking, though, it feels less like postmodernism and more like tribalism.
That’s why I emphasize to my students that something can be both good and bad at once. Every year they fixate on a new “problem”; right now it’s AI in my documentary photography classes. They worry it will erase jobs or trick people. And sure, I once got fooled for a second by that AI kangaroo boarding-pass video, but I wasn’t devastated. Instead, I thought: what does it say about how we engage with images, truth, and belief? The deeper issue isn’t whether an image is “real,” but how we read it, how we bring associations to it, and how our literacy needs to evolve.

That’s why I’ve been toying with developing an image literacy program. Because the real opportunity isn’t in panicking about deception, it’s in learning from these moments, asking why we react as we do, and holding space for multiplicity instead of hunting for a single right or wrong answer.
D: For me, holding multiple viewpoints is crucial. It’s overwhelming and uncomfortable, but also necessary. You can be terrified that AI might take your job and still see and acknowledge the problems that it is solving. Grappling with both sides at once is where the deeper understanding happens. That’s why I’m so drawn to teaching visual literacy and even courses on the history of photo manipulation, which has been around since photography’s invention.
Understanding how images deceive us is important because photographs have long been mistaken for pure truth when, in fact, they’re mediated by choices: the photographer’s perspective, the lens, the lighting, the framing. Technology amplifies those manipulations, but it isn’t the root cause. The real question is how humans use images for their own benefit.
A: When people panic that AI will “ruin the world,” I think they miss the point. The real problem is velocity: information now moves so fast that people gobble it up without pausing to ask if it’s true. I’m naturally impatient and work quickly, yet I still notice how little patience most people have to sit with something and actually question it.
D: Right. All the contextual information leads us to understanding, but we need to take the time to sit with it. Yesterday I was reading through your last Substack post. I, personally, appreciate your cadence and the discursive way you work through an issue on the page. How do you, as a writer and a critic, think about the speed at which you put stuff out into the world and how fast you generate your ideas? Is there some benefit to slowness?
A: That really means a lot, because most of the time I feel like I’m in my own little world with my dog, unsure how my writing lands. When something matters to me, I need time to digest it. Yes, I can turn around a thousand-word exhibition review in a week if needed, but when I’m writing on Substack, it’s different—there’s no editor, no paycheck, just me working through my own thoughts.
The Warlord essay, for example, was something I wrote entirely for myself before it was later picked up by Do Not Research. It was long, personal, and not designed to meet a weekly deadline. That’s why the constant push from platforms to churn out content on their schedule feels so off to me. I think it’s led to a glut of writing that, while interesting, often isn’t great—because it’s produced under pressure to “feed” subscribers rather than to really explore an idea.
So I publish only when I feel I have something worth saying, even if that means fewer posts and little chance of financial success. It’s not a recipe for Substack stardom, but it feels truer to me—and maybe that’s its own kind of success.