• Prunebutt@slrpnk.net
    7 months ago

    Because AI art, as it is commonly used nowadays, lacks intentionality (the thing that makes a urinal art).

    If I read a book, I used to know that every word was put there by the author with intent. If I read AI-generated text, it doesn’t convey anything a human has put out there for me to experience. I’m looking at the formatted output of stochastic models.

    • Monument@lemmy.sdf.org
      7 months ago

      That’s a good point.

      I’m thinking of art in the visual sense, and of the creator being a person who is prompting the image generator - which I think meets the intentionality standard.
      But, there are a lot of ways folks can use AI tools that aren’t intentional, and I haven’t been considering that.

      My stance isn’t 100% changed, but I will start considering intentionality.


      Related, but maybe not.

      Some years ago, I was a slightly older student with a deep well of photography experience entering a newer graphic design program, and some of it seemed amateurish to the point of being a joke to me. My “Digital Art” class was like that, where the average assignment was to cut and paste things together and apply x number of Photoshop filters to them. It was an easy A, so whatever.

      For one of those assignments, I took it as an opportunity to digitize some prints I’d made. I had taken some black-and-white shots at night of a local train station, which is pretty scenic and considered a landmark. They were moody and foreboding, also slightly soft because I don’t have great darkroom technique. I pumped up the brightness, threw on something like a papercut/rough-edges filter, and layered the whole thing with a mostly opaque blue gradient that made for a sort of cyanotype effect.

      Later that year, we were told to submit something to a student art show, and I printed that assignment out on the student printer. I might have been first, because the printer hadn’t run in a while and the blue print head was partly clogged, so the thing came out a shade of green instead: the cyan didn’t print heavily. (But it didn’t band, either, so…) I submitted that because I didn’t want to pay to reprint it, and that was that.

      At the art show, someone asked me about it, and I told them that I had initially done it this way for a project. I liked the blue for some reason I now forget, but then it printed incorrectly, and I liked that too, so I didn’t reprint it. I may have even said something cute about not being able to intentionally reproduce that print failure (they cleaned the machine right after my ‘failed’ print), so it’s sort of bespoke.

      A peer later asked why I didn’t just say it was intentional and make up an excuse, and I sort of lost respect for him, because that wasn’t my intent.

      Which is to say, I guess I respect even unintentional screw-ups, so long as their presentation isn’t wrapped in falsehoods.

      A book that is AI-generated, minimally edited, not really written by the person on the byline, and then passed off as human work is not art; it’s just fraud. An AI-generated book created with prompts from someone who knows how to write, edited well to eliminate the AI weirdness, and clearly labeled as largely written by LLMs? Well, I guess I think that’s art.
      AI art passed off as traditional art, or unintentional AI art passed off as intentional, is fraud.

      I guess that’s how your very good point fits in my conceptual framework. If it’s not offered in good faith as art, and explained as such, then it’s fraud. But AI art offered in good faith is art.

      Edit:
      I’m sorry some folks are downvoting you. You’ve been respectful and open-minded throughout our whole interaction.

    • Asafum@feddit.nl
      7 months ago

      But how would you know an AI generated the text? Sure, current technology isn’t 100% perfect, but these models are trained to recreate human linguistic patterns from actual human inputs. If we had a model trained only on the “great works” of history, I wonder how difficult it would be to determine whether an AI wrote it or a human.

      • Prunebutt@slrpnk.net
        7 months ago

        Why would you want that, except maybe to put authors out of a job while still making money from regurgitated drivel?

        I don’t want to read AI-generated text, because it doesn’t put me into a state where an author is communicating with me.

        • Asafum@feddit.nl
          7 months ago

          I never said I wanted it; I was more curious about how you would know whether there was intent if you couldn’t tell an AI made it.

          If you enjoyed what you read, you might believe there was intent when there really was none, and I don’t know if that really matters. Your interpretation of the media could still be important to you if it had an impact on you.