After the launch of ChatGPT sparked the generative AI boom in Silicon Valley in late 2022, it was mere months before OpenAI turned to selling the software as an automation product for businesses. (First as ChatGPT Enterprise, then as ChatGPT Team.) And it wasn’t long after that before it became clear that the jobs managers were likeliest to automate successfully weren’t the dull, dirty, and dangerous ones that futurists might have hoped for: it was, largely, creative work that companies set their sights on. After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story.

As a result, some of the workers who have been most affected by clients and bosses embracing AI have been in creative fields like art, graphic design, and illustration. Because the generative models trained and sold by Silicon Valley companies have ingested countless illustrations, photos, and works of art (without the artists’ permission), AI products offered by companies like Midjourney and OpenAI can generate images and designs tailored to a client’s needs, at rates much cheaper than hiring a human artist. The work won’t be original, and as of now AI-generated art can’t be copyrighted, but in many contexts a corporate client will deem it passable—especially for its non-public-facing needs.

This is why you’ll hear artists talk about the “good enough” principle. Creative workers aren’t typically worried that AI systems are so good they’ll be rendered obsolete as artists, or that AI-generated work will be better than theirs, but that clients, managers, and even consumers will deem AI art “good enough” as the companies that produce it push down their wages and corrode their ability to earn a living. (There is a clear parallel to the Luddites here, who were skilled technicians and clothmakers and weren’t worried about technology surpassing them, but about the way factory owners used it to make cheaper, lower-quality goods that drove down prices.)

Sadly, this seems to be exactly what’s been happening, at least according to the available anecdata. I’ve received so many stories from artists about declining work offers, disappearing clients, and gigs drying up altogether that it’s clear a change is afoot, and that many artists, illustrators, and graphic designers have seen their livelihoods take a turn for the worse. And it’s not just wages: corporate AI products are assaulting visual arts workers’ sense of identity and self-worth, as well as their material stability.

Not just that, but as with translators, the subject of the last installment of AI Killed My Job, there’s a widespread sense that AI companies are undermining a crucial pillar of what makes us human: our capacity to create and share art. Some of these stories, I will warn you, are very hard to read (consider this a content warning for descriptions of suicidal ideation), while others are absurd and darkly funny. All, I think, help us better understand how AI is impacting the arts and the visual arts industry. A sincere thanks to everyone who wrote in and shared their stories.

“I want AI to do my laundry and dishes so that I can do art and writing,” as the SF author Joanna Maciejewska memorably put it, “not for AI to do my art and writing so that I can do my laundry and dishes.” These stories show what happens when it’s the other way around.

  • tal@olio.cafe

    After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story.

    I think that the current crop of systems is often good enough for a header illustration in a journal or something, but there are also a lot of things that it just can’t reasonably do well. Maintaining character cohesion across multiple images and from different perspectives, for example: try doing a graphic novel with diffusion models trained on 2D images, and it just doesn’t work. The whole system would need to have a 3D model of the world, be able to do computer vision to get from 2D images to 3D, and have knowledge of 3D structure rather than just 2D appearance. That’s something that humans, with a much deeper understanding of the world, find far easier.

    Diffusion models have their own strong points where they’re a lot better than humans, like easily mimicking an artist’s style. I expect that as people bang away on things, it’ll become increasingly visible what the low-hanging fruit is, and what is far harder.