Nearly a year after AI-generated nude images of high school girls upended a community in southern Spain, a juvenile court this summer sentenced 15 of their classmates to a year of probation.

But the artificial intelligence tool used to create the harmful deepfakes is still easily accessible on the internet, promising to “undress any photo” uploaded to the website within seconds.

Now a new effort to shut down the app and others like it is being pursued in California, where San Francisco this week filed a first-of-its-kind lawsuit that experts say could set a precedent but will also face many hurdles.

  • popcap200 · 25 days ago

    I can’t imagine the difficulty of resolving this, especially since most of the AI models are available for free use.

    • teejay@lemmy.world · 25 days ago

      Yeah, it’s a hydra: shut one down and another pops up, just like illegal movie streaming sites. Unless they solve it at the AI engine level, they’re just chasing ghosts.

      • conciselyverbose@sh.itjust.works · 25 days ago

        You won’t prevent it without (or even with) unacceptable restrictions on free speech. Those models have a right to exist.

        But you can raise the barrier to entry so that people have to run their own service to do it. You’ll put a serious dent in middle school kids spreading fake nudes of their classmates if they can’t just use a managed online service.