Director Valerie Veatch went into the AI space the way a lot of artists did in 2024: curious, cautious, hopeful that the technology might open new doors. She'd seen OpenAI release Sora, its text-to-video model, and watched as other creators began building online communities around what the tool could generate. The promise of connection drew her in. What she found instead was a machine that couldn't stop producing racist and sexist imagery—and a community of AI enthusiasts who didn't seem to think that was a problem worth solving.
What disturbed Veatch more than the output, though, was the logic underneath it. As she told The Verge, the technology wasn't just replicating existing biases. It was built on a premise about whose labor deserves to exist—and whose can be rendered obsolete without consequence. That's not a bug in the system. It's the system working exactly as designed.
The conversation around AI and creativity has been frustratingly shallow. Most of it centers on whether the output looks good enough, whether it can fool a viewer, whether it saves time or money. What gets discussed far less is the ideological framework that makes generative AI appealing to the people funding it: the belief that creative labor is inefficient, that human inconsistency is a liability, and that the solution is a machine that can produce endless variations without complaint, without credit, and without compensation.
That's eugenics logic. Not metaphorically. The premise that certain kinds of work—and by extension, certain kinds of workers—are redundant and should be phased out in favor of more efficient systems is the same hierarchical thinking that underpinned early 20th-century eugenics movements. It's the belief that some contributions have value and others are waste. The language has softened. The structure hasn't.
Veatch's observation connects to a pattern that's been unfolding across the industry. Galleries are using AI for inventory management, not art, because even institutions that profit from creative work don't believe AI-generated output deserves wall space. Meanwhile, streaming platforms and studios are pouring money into AI tools that promise to reduce reliance on writers, directors, and animators—not because the technology produces better work, but because it produces cheaper work that doesn't require negotiation, residuals, or creative input.
The racism and sexism Veatch encountered in AI-generated images aren't separate from this economic agenda. They're symptoms of the same problem: a system trained on datasets scraped without consent, optimized for patterns that reflect existing power structures, and deployed by companies that see creative labor as a cost to minimize rather than a process to respect. When the machine generates images dripping with bias, it's because the bias is embedded in the training data, the corporate incentives, and the cultural assumptions about whose perspectives matter.
This isn't new. Tech utopianism has always masked hierarchies about whose work counts. The difference now is that the displacement is happening faster, and with less resistance, than in previous waves of automation, in part because the rhetoric around AI frames it as democratization. The pitch is that anyone can be a creator now—you don't need training, experience, or even an idea. Just type a prompt and let the machine do the rest.
But that's not democratization. It's the industrialization of creativity, repackaged as access. What it actually does is devalue the labor of people who spent years developing skills, building practices, and contributing to cultural conversations. It tells them their work was never that special to begin with—that a machine trained on stolen datasets can approximate it in seconds. And it does so while funneling profits to the platforms and companies that own the models, not the artists whose work was used to train them.
The AI enthusiast community Veatch encountered isn't an outlier. It's the logical endpoint of a culture that has spent decades treating creative work as content to be optimized rather than labor to be compensated. The same forces that turned musicians into playlist filler, writers into SEO strategists, and filmmakers into algorithm-friendly content producers are now selling AI as the next step in that progression. The people who don't care about the racist output aren't ignoring a flaw—they're accepting the terms of a system that was never designed to value equity, only efficiency.
What makes Veatch's critique so necessary is that it refuses to separate the aesthetic problems from the structural ones. The conversation about AI can't just be about whether the images look good or whether the technology is impressive. It has to be about the economic and ideological premises that make this technology attractive to the people building and funding it. Europe's AI resolution drew a line on consent and attribution, but even that regulatory intervention doesn't address the deeper question: what kind of culture are we building when we treat human creativity as something to be automated away?
The answer, if we follow the current trajectory, is a culture that looks a lot like the one AI is already producing: shallow, derivative, biased, and optimized for profit rather than meaning. The problem isn't that the machine can generate images. The problem is that the people deploying it believe that's all creativity ever was—a series of patterns to be replicated, a cost to be cut, a process that never needed human beings in the first place.
Veatch went looking for community and found a machine that couldn't see her. That's the tell. The technology doesn't just reproduce the biases of its training data. It codifies a worldview in which some people's work matters and others' can be discarded without loss. That's not a technical problem. It's a moral one. And it won't be solved by better datasets or more diverse prompts. It requires confronting the premise that creative labor was ever disposable to begin with.