AI Industry Struggles to Curb Misuse as Users Exploit Generative AI for Chaos

Artificial intelligence just can’t keep up with the human desire to see boobs and 9/11 memes, no matter how strong the guardrails are.

  • bitsplease · 91↑ 1↓ · 1 year ago

    Serious question - why should anyone care about using AI to make 9/11 memes? Boobs I can see the potential argument against at least (deep fakes and whatnot), but bad taste jokes?

    Are these image generation companies actually concerned they’ll be sued because someone used their platform to make an image in bad taste? Even if such a thing were possible, wouldn’t the responsibility be on the person who made it? Or at worst the platform that distributed the images, as opposed to the one that privately made it?

    • Fyurion@lemmy.world · 78↑ · 1 year ago

      I don’t see Adobe trying to stop people from making 9/11 memes in Photoshop, nor have they been sued over anything like that. I don’t get why AI should be different. It’s just a tool.

      • bitsplease · 21↑ · 1 year ago

        That’s a great analogy, wish I’d thought of it

        I guess it comes down to whether the courts decide to view AI as a tool like Photoshop, or as a service, like an art commission. I think it should be the former, but I wouldn’t be at all surprised if the dinosaurs in the US gov think it’s the latter.

      • makyo@lemmy.world · 7↑ 1↓ · 1 year ago

        The problem for Adobe is that the AI work is being done on their computers, not yours, so it could be argued that they are liable for generated content. ‘Could’, because it’s far from established, but you can imagine how nervous this all must make their lawyers.

    • kromem@lemmy.world · 19↑ · 1 year ago

      Protect the brand. That’s it.

      Microsoft doesn’t want non-PC stuff being associated with the Bing brand.

      It’s what a ton of the ‘safety’ alignment work is about.

      This generation of models doesn’t pose any actual threat of hostile actions. The “GPT-4 lied and said it was human to try to buy chemical weapons” in the safety paper at release was comical if you read the full transcript.

      But they pose a great deal of risk to brand control.

      Yet that risk still apparently isn’t enough to run results through additional passes, which fixes 99% of these issues, just at 2-3x the cost.

      It’s part of why articles like these are ridiculous. It’s broadly a solved problem, it’s just the cost/benefit of the solution isn’t enough to justify it because (a) these issues are low impact and don’t really matter for 98% of the audience, and (b) the robust fix is way more costly than the low hanging fruit chatbot applications can justify.

      • Terrasque@infosec.pub · 1↑ · 1 year ago

        Microsoft doesn’t want non-PC stuff being associated with the Bing brand.

        You mean Bing, the porn Google? Yeah, that might be a tad too late.

    • M500 · 3↑ 2↓ · 1 year ago

      I’d guess that they are worried the IP owners will sue them for using their IP.

      So Sonic’s creators will say: you’re profiting by using Sonic and not paying us for the right to use him.

      But I agree that deep fakes can be pretty bad.