Before we start, let’s just get the basics out of the way - yes, stealing the work of hundreds of thousands if not millions of private artists without their knowledge or consent and using it to drive them out of business is wrong. Capitalism, as it turns out, is bad. Shocking news to all of you liberals, I’m sure, but it’s easy to call foul now because everything is wrong at once - the artists are losing their jobs, the slop being used to muscle them out is soulless and ugly, and the money is going to lazy, talentless hacks instead. With the recent implosion of the NFT space, we’re still actively witnessing the swan song of the previous art-adjacent grift, so it’s easy to go looking for problems (and there are many problems). But what if things were different?

Just to put my cards on the table, I’ve been pretty firmly against generative AI for a while, but I’m certainly not opposed to using AI or Machine Learning on any fundamental level. For many menial tasks like Optical Character Recognition and audio transcription, AI algorithms have become indispensable! Tasks like these are grunt work, and by no means is humanity worse off for finding ways to automate them. We can talk about the economic consequences or the quality of the results, sure, but there’s no fundamental reason this kind of work can’t be performed with Machine Learning.

AI art feels… different. Even ignoring where companies like OpenAI get their training data, there are a lot of reasons AI art makes people like me uneasy. Some of them are admittedly superficial, like the strange proportions or extra fingers, but there’s more to it than that.

The problem for me is baked into the very premise - making an AI to do our art only makes sense if art is just another task, just work that needs to be done. If sourcing images is just a matter of finding more grist for the mill, AI is a dream come true! That may sound a little harsh, and it is, but it’s true. Generative AI output isn’t really art - art is supposed to express something, or mean something, or do something, and generative AI is fundamentally incapable of functioning on this wavelength. All the AI works with is images - there’s no understanding of ideas like time, culture, or emotion. The entirety of human experience is inaccessible to generative AI simply because experience itself is inaccessible to it. An AI model can never go on a walk, or mow a lawn, or taste an apple; it’s just an image generator. Nothing it draws for us can ever really mean anything to us, because it isn’t one of us.

Oftentimes, I hear people talk about this kind of stuff almost like it’s just a technical issue, as if once they’re done rooting out the racial bias or blocking off the deepfake porn, then they’ll finally have some time to patch in a soul. When artist Jens Haaning sent two blank canvases titled “Take the Money and Run” to the Kunsten Museum of Modern Art, it was a divisive commentary on human greed, the nature of labor, and the non sequitur pricing endemic to modern art. The knowledge that a real person at that museum opened the box, saw a big blank sheet, and had to stick it up on the wall, the fact that there was a real person on the other side of that transaction who did what they did and got away with it, the story around its creation - that is the art. If Stable Diffusion gave someone a blank output, it’d be reported as a bug and patched within the week.

All that said, is AI image generation fundamentally wrong? Sure, the people trying to make money off of it are definitely skeevy, but is there some moral problem with creating a bunch of dumb, meaningless junk images for fun? Do we get to cancel Neil Cicierega because he wanted to know how Talking Heads frontman David Byrne might look directing traffic in his oversized suit?

Maybe just a teensy bit, at least under the current circumstances.

I’ll probably end up writing a part 2 about my thoughts on things like data harvesting, but I’m not sure yet. I feel especially strongly about the whole “AI is just another tool” discourse when people are talking about using these big models, so don’t even get me started on that.

  • yoink [she/her]@hexbear.net

    i think the thing about AI art is that it really lays bare how alienated we are from the means of production - people are so unwilling to inject any amount of human effort, and would rather have something created for them by a proprietary piece of software than ever suffer the embarrassment of even trying to create something, which only further feeds back into that alienation. It’s telling that one of the very first things to come out of AI art as a ‘movement’ is stuff like NFTs - the most low grade, mass-producible ‘art’ possible, solely aimed at trying to extract money from other people.

    I’m also reminded constantly about the late 2000s/early 2010s discourse around ‘video games are art’ - in that a lot of this discussion is less about wanting to take a medium and genuinely bring it to a place where you can engage with it through artistic critique, and more wanting to steal a label and the perceived ‘respect’ of that label as a means to justify a consumer product

    • JohnBrownNote [comrade/them, des/pair]@hexbear.net

      that people are so unwilling to inject any amount of human effort

      nah i’m tired and don’t have thousands of hours to learn a skill just to make something*. you could say the same thing about word processors taking away from writing, stand mixers taking away from baking, or CAD and 3d printers from other kinds of making.

      *i don’t actually have a use right now for generated art but maybe someday there’ll be a blender plugin and i can dodge learning how baking textures actually works or something applicable to stuff i do work on.

      • yoink [she/her]@hexbear.net

        i don’t necessarily agree it’s the same thing as those examples, as the difference here is that the AI has to pull the art from somewhere i.e. directly from work someone else has done - if we’re talking writing, then you’d have to compare it to AI-written articles, or to plagiarising someone else’s novel for your own, both of which I feel the same way about

        the closest ‘real world’ example i’d say is maybe something like Collage, and sure that’s pulling from all sort of sources and I don’t think it’s any less valid for it, but at least there is something to be said about the person having to make decisions about what to use, where to place things that I just don’t see as comparable to AI art. But again, just my opinion.