The debate about AI art focuses on output quality. That misses the point entirely.

Most of the debate about AI-generated art is about whether it's any good. Whether the outputs are indistinguishable from human work. Whether a prompt engineer deserves to be called an artist. Whether the training data was ethically sourced.
These are the wrong questions.
They're wrong because they treat art as a product. Something you evaluate by looking at the finished object, comparing it to other finished objects, and deciding whether it passes muster. By that logic, a perfect replica of the Mona Lisa is equivalent to the original. A cover band is the same as the songwriter. A microwave dinner is the same as a home-cooked meal.
We know, intuitively, that this is wrong. But the AI art conversation keeps circling back to outputs, because outputs are easy to compare and process is invisible.
So let's talk about process.
Richard Sennett, in The Craftsman, argues that skill develops through a specific kind of repeated engagement with materials. The woodworker who has spent ten thousand hours at the lathe doesn't just produce better chairs. They have a different relationship with wood. They understand grain, tension, moisture, response. The material speaks to them. This isn't mysticism. It's embodied knowledge, the kind that lives in your hands and your peripheral vision and the part of your brain that fires before conscious thought kicks in.
A painter who spends 200 hours on a canvas is changed by those hours. Every stroke is a decision that feeds back: the paint resists or flows, the color shifts in ways you didn't predict, the composition reveals problems you couldn't see in the sketch. You adjust. You learn. The painting teaches you how to paint it, and in doing so, it teaches you something about seeing.
A prompt that takes 30 seconds teaches you how to write prompts.
That's it. That's the entire pedagogy.
The argument for AI art tools usually goes something like this: "The output looks just as good, and it took a fraction of the time. Why wouldn't you use the more efficient method?"
This logic makes perfect sense if you're running a content farm. If you need 500 product images by Thursday, sure, generate them. If you need a stock illustration for a blog post and your budget is zero, fine.
But applying assembly-line logic to human expression is a category error. Efficiency is a value that belongs to production. Art isn't production. Or rather, the part of art that matters isn't the production part.
Matthew Crawford makes a related argument in Shop Class as Soulcraft. He writes about the mechanic who diagnoses an engine problem by listening. Not by running a diagnostic scan, but by listening. The scan would be faster. The scan might even be more accurate. But the mechanic's way of knowing is fundamentally different. It's a form of attention, of being-in-relation-with the machine, that the scan eliminates entirely.
When we optimize for speed and output quality, we're implicitly saying that this relational dimension doesn't matter. That the only thing worth preserving is the artifact at the end.
Mihaly Csikszentmihalyi spent decades studying flow, the state of complete absorption in a challenging activity. His research, conducted across cultures and disciplines, found that flow states are among the most meaningful experiences humans report. And they share specific conditions: the task must be difficult enough to require full engagement, there must be clear feedback, and there must be a sense of agency.
Making art, when it's working, is a flow state. The hours disappear. You're solving problems in real time, problems that the medium keeps generating. The clay cracks. The chord progression doesn't resolve the way you expected. The paragraph needs a sentence that hasn't been written yet, and you can feel its shape but not its words.
Prompting an AI model is not a flow state. It's a transaction. You describe what you want, you get a result, you refine your description. The feedback loop exists, but it's impoverished; you're negotiating with a black box, not engaging with a material reality that pushes back in ways you can learn from.
This isn't snobbery. It's a description of two fundamentally different cognitive experiences. One builds something in the maker. The other just produces a deliverable.
Here's what I think the real long-term consequence looks like. Not a world without art (there will always be people who paint and sculpt and compose because the process itself sustains them) but a world with far fewer people who can make it.
If you can generate a passable illustration in thirty seconds, why would a young artist spend years learning to draw? If you can produce a film score with a prompt, why would a student grind through music theory and ear training and the humbling experience of playing badly in front of people?
The answer, for some people, will be: they wouldn't. And we'll lose something that isn't visible in any individual image or song but is visible across a culture: a population of people who know what it means to struggle with a medium and come out the other side with a skill they didn't have before.
The skills aren't just instrumental. They're constitutive. Learning to draw changes how you see. Learning music changes how you hear. Learning to write, really write, through the painful process of drafting and revising and confronting your own unclear thinking, changes how you think. These are not interchangeable with having an AI do it for you, any more than having someone else exercise is interchangeable with exercising yourself.
There's a historical parallel worth considering. When photography was invented, painters panicked. The panic was partly justified; portrait painting as a trade collapsed. But painting didn't die. It went somewhere else. Freed from the obligation to represent reality accurately, painters explored abstraction, expressionism, all the movements that made twentieth-century art what it was.
AI art advocates love this analogy. "See? New tools just push art in new directions."
But the analogy breaks down in a specific way. Photography didn't replace the process of making visual art. It replaced one application of that process. Photographers still needed skill, vision, timing, an understanding of light and composition. The camera was a tool that extended human capability without eliminating human agency.
AI image generation replaces the process itself. The human contribution is reduced to description, telling the machine what you want. This is more analogous to being a patron than being an artist. Pope Julius II didn't paint the Sistine Chapel ceiling. He told Michelangelo what he wanted, and Michelangelo figured out how to do it.
Prompt engineers are the new Medici. Which is fine, as far as it goes. Patronage is a legitimate role. But let's not call it artistry.
We're not losing art. The world will be flooded with more images, more music, more text than any previous era. By volume, the output of human culture is about to explode.
We're losing artists. People who have been transformed by the slow, frustrating, irreducibly human process of wrestling something into existence. People who can see things the rest of us can't see, because they've spent thousands of hours training their perception through the act of making.
That loss won't show up in any output metric. You can't measure it by comparing generated images to painted ones. It lives in the gap between a person who has made things and a person who has described things to a machine.
The artifact is not the art. The process is the art. And the process is what's dying.