After the launch of ChatGPT sparked the generative AI boom in Silicon Valley in late 2022, it was mere months before OpenAI turned to selling the software as an automation product for businesses. (It was first called Enterprise, then Team.) And it wasn’t long before it became clear that the jobs managers were likeliest to automate successfully weren’t the dull, dirty, and dangerous ones futurists might have hoped for: It was, largely, creative work that companies set their sights on. After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story.
As a result, some of the workers who have been most impacted by clients and bosses embracing AI have been in creative fields like art, graphic design, and illustration. Since the LLMs trained and sold by Silicon Valley companies have ingested countless illustrations, photos, and works of art (without the artists’ permission), AI products offered by Midjourney, OpenAI, and Anthropic can recreate images and designs tailored to a client’s needs—at rates much cheaper than hiring a human artist. The work will not be original, and as of now AI-generated art can’t be copyrighted, but in many contexts a corporate client will deem it passable—especially for its non-public-facing needs.
This is why you’ll hear artists talk about the “good enough” principle. Creative workers aren’t typically worried that AI systems are so good they’ll be rendered obsolete as artists, or that AI-generated work will be better than theirs, but that clients, managers, and even consumers will deem AI art “good enough” as the companies that produce it push down their wages and corrode their ability to earn a living. (There is a clear parallel to the Luddites here, who were skilled technicians and clothmakers who weren’t worried about technology surpassing them, but about the way factory owners used it to make cheaper, lower-quality goods that drove down prices.)
Sadly, this seems to be exactly what’s been happening, at least according to the available anecdata. I’ve received so many stories from artists about declining work offers, disappearing clients, and gigs drying up altogether that it’s clear a change is afoot—and that many artists, illustrators, and graphic designers have seen their livelihoods impacted for the worse. And it’s not just wages. Corporate AI products are inflicting an assault on visual arts workers’ sense of identity and self-worth, as well as their material stability.
Not just that, but as with translators, the subject of the last installment of AI Killed My Job, there’s a widespread sense that AI companies are undermining a crucial pillar of what makes us human: our capacity to create and share art. Some of these stories, I will warn you, are very hard to read—consider this a content warning for descriptions of suicidal ideation—while others are absurd and darkly funny. All, I think, help us better understand how AI is impacting the arts and the visual arts industry. A sincere thanks to everyone who wrote in and shared their stories.
“I want AI to do my laundry and dishes so that I can do art and writing,” as the SF author Joanna Maciejewska memorably put it, “not for AI to do my art and writing so that I can do my laundry and dishes.” These stories show what happens when it’s the other way around.
If AI can replace your ‘art,’ you weren’t making art; you were making a commodified product. And your boss would have replaced you anyway as soon as another, cheaper opportunity presented itself.
Not to be flippant, but I was a copy editor and page designer for most of my career, and we already went through this a decade ago without AI, because we were filed under “doesn’t generate content.”
And frankly, I hate the term “content.” We were committing journalism, not posting to OnlyFans (at least, none of the people I worked with).
But my point is, I got all the “it can’t be that bad” and “bootstraps” bullshit that now other creatives are getting hit with. Accuracy was deemed too expensive more than 10 years ago. And trust me, editing is an art. You won’t get the same final copy and heds and layouts from two different copyeds at the same pub. It’s as much intuition as knowing the rules.
We were mocked (not necessarily by those finding themselves in the crosshairs now, but there’s a Venn diagram there that isn’t separate circles) for thinking we brought value to the table alongside the institutional gravitas.
Well, let’s see how trust in the media has gone over the past decade. Look, I’m not saying the desk disappearing is the sole cause of declining trust, as that would be absurd, but it sure as fuck didn’t help.
So, welcome to the club of “why pay you if we don’t have to?” It’s a fun ride. I was a graphic artist before things completely fell apart in print journalism and we became rectangle wranglers, a pair of hands implementing someone else’s decisions.
Y’all got an extra decade, having seen the decimation of print design, and were like, “Well, that won’t happen to me.” And here we are, shocked Pikachu face and all.
First they came for …
After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story.
I think that the current crop of systems is often good enough for a header illustration in a journal or something, but there are also a lot of things that it just can’t reasonably do well. Maintaining character cohesion across multiple images, for example, or rendering different perspectives — try doing a graphic novel with diffusion models trained on 2D images, and it just doesn’t work. The whole system would need to have a 3D model of the world, be able to do computer vision to get from 2D images to 3D, and have a knowledge of 3D stuff rather than 2D stuff. That’s something that humans, with a much deeper understanding of the world, find far easier.
Diffusion models have their own strong points where they’re a lot better than humans, like easily mimicking an artist’s style. I expect that as people bang away on things, it’ll become increasingly visible what the low-hanging fruit is, and what is far harder.
(There is a clear parallel to the Luddites here, who were skilled technicians and clothmakers who weren’t worried about technology surpassing them, but about the way factory owners used it to make cheaper, lower-quality goods that drove down prices.)
I think we need to start putting this on billboards.
Look, I’m not here to bust down any looms, but I do seriously question the true capabilities of this service.
I remain hopeful that interest in AI-generated products will drop because the quality just isn’t there. You can tell the voices aren’t right, the pictures are soulless, the prose is stilted and often self-contradictory. I think people will respond negatively to that.
But how long it will take for that to be clear to CEOs and CFOs, I don’t know, and lives are being destroyed in the meanwhile. I think AI is a good tool, but I don’t know how to keep it as a tool but prevent amateurs from thinking the output is professional level when any professional will tell you it isn’t.
I’ve generated a lot of text — mostly code and fiction. I’ve seen AI write some really good phrases I’d never have thought of. But I’ve never seen it generate so much as half a page before it writes something that requires editing. If you don’t write like 90% of a thing, its voice takes over and everything sounds terrible and flat even if you keep it from making factual errors.
And even the great bits require context or it won’t have any impact. AI is god awful at artistry. Sometimes I’ll ask it to analyze something I’ve written and it always wants to rewrite the bits that have style or panache and replace them with the most generic crap. I’m a terrible visual artist but I’m going to assume it does the same thing with image generation.
But I’ve never seen it generate so much as half a page before it writes something that requires editing.
Most human writers require editing well before five column inches. Not trying to give a pass to LLMs, but humans don’t produce perfect output, either.
And there’s an old saw: “Everyone needs an editor … especially editors.” That’s why creative work is collaborative.
And why sometimes when a writer becomes immensely successful the quality of their output suffers: they become “too big to edit.”
The Star Wars prequel trilogy is a case in point, IMO. Back on the original trilogy, George Lucas had people who could tell him “no, that’s a bad idea.”
Yeah the problem is pretty much everything AI does requires collaboration with an actual human expert. But we’ve got people who think it can be a therapist without an actual therapist, artist without collaborating with an artist, coder, author, marketing strategist, lawyer, doctor…
This isn’t me belittling AI, I think it can do some really incredible things, but the way I see it, everything it does requires actual cognitive ability and domain knowledge.
I’ve used ChatGPT for work, just asking it to paraphrase original sources so I’m blinded from the original wording ahead of doing my rewrite. One paragraph at a time, it works great (I check against the source); feed it a full story at once, and holy shit, do you not have anything reliable – my favourite has to be that the Manhattan Project was created for Three Mile Island. At that point, you’re spending more time checking and verifying than you saved by using it in the first place.
Just a whole bunch of people gotta suffer until it crashes and burns.
You can tell the voices aren’t right, the pictures are soulless, the prose is stilted and often self-contradictory.
And you can’t tell when the voices do turn out right, the pictures are fine, and the prose works well.
This all reminds me a lot of how people railed against CGI in movies, claiming that CGI scenes or actors would always look “uncanny valley” and that they’d always be able to tell. Many people continue to claim that to this day, unaware of just how much CGI is in each frame that they don’t recognize as CGI. Or worse, they look really hard for things to complain are bad CGI and end up accusing non-CGI shots of being CGI.
Not sure who you’re used to dealing with, but I use AI all the time — damn near every day — and have done for six years. I’ve written a Discord AI dungeon master. I’ve written hundreds, perhaps over a thousand, short stories, often starting from a scenario I’ve written, and watched them all play out time and again. I know LLMs inside and out. I’ve jailbroken them to see how far they can be pushed in terms of violence, evil, and intimacy.
I’m no professional author, to be sure, but that’s because I lack the knack for storytelling, not because I don’t understand the craft. So I understand the tools pretty well, and I can tell when they are poorly employed.
And I’m irritated because I 100% can tell, and I wish you were right.
“I want AI to do my laundry and dishes so that I can do art and writing,”
And screw those people who make a living washing dishes in restaurants or doing maid service in hotels, their jobs aren’t special like mine are.
This headline could so easily be flipped on its head: “Clients rejoice as custom art becomes cheaper and more accessible for their projects.” But we’ve put artists on a pedestal for so long that such views are incredibly unpopular, so those headlines don’t get the clicks and views; they get crushed out on social media.