Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is all about many different servers interacting with each other.
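
For reference, that per-domain block is just a couple of lines in the instance's robots.txt. A minimal sketch (the domain is made up; GPTBot is the user agent OpenAI documents for its crawler, and the block only works if the crawler voluntarily honors the file):

```
# robots.txt served at the root of the instance's domain,
# e.g. https://example-instance.social/robots.txt
# Asks OpenAI's crawler (user agent "GPTBot") to skip the whole site.
User-agent: GPTBot
Disallow: /
```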

So if my kbin/Lemmy or Mastodon server blocks OpenAI’s crawler via robots.txt, what does that even mean when people on other servers that don’t block this crawler boost my posts on Mastodon, or when I reply to their posts? I suspect that unless every server I interact with blocks the same AI crawlers, I cannot prevent my posts from being used as AI training data?
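
To illustrate how uneven this gets across a federation, here is a rough Python sketch (the instance domains are hypothetical) that checks each instance's own robots.txt for a given crawler. Each domain only answers for itself, so a block on my home server says nothing about the other servers that carry federated copies of my posts:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical instance domains that might all hold federated copies of the same post.
INSTANCES = ["my-home-instance.example", "other-instance.example", "third-instance.example"]
CRAWLER_UA = "GPTBot"  # OpenAI's documented crawler user agent

for domain in INSTANCES:
    parser = RobotFileParser()
    parser.set_url(f"https://{domain}/robots.txt")
    try:
        parser.read()  # fetch and parse that domain's own robots.txt
    except OSError:
        print(f"{domain}: robots.txt could not be fetched")
        continue
    allowed = parser.can_fetch(CRAWLER_UA, f"https://{domain}/")
    print(f"{domain}: {'allows' if allowed else 'blocks'} {CRAWLER_UA}")
```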

  • pop · 10 months ago

    “We’re sick of closed walled-garden monoliths like Reddit! Let’s move to an open federated protocol where anyone can participate and the APIs can’t be locked down!”

    Can you point to where the fediverse collectively said that? Speak for yourself and don’t act like the fediverse was designed to suit your definition of freedom. The fediverse is open and federated in the sense that there are multiple instances and owners without a centralized administration, and the owners who host those instances decide what to lock down.

    • FaceDeer@kbin.social · 10 months ago

      And some of those hosts can decide to serve up their content to AI trainers. Some of those hosts can be run by AI trainers, specifically to gather data for training. If one were to try to prevent that, one would be attacking the open nature of the fediverse.

      There have been many people raging about their content being used to train AIs without permission or compensation. I’m speaking to those people, not the “fediverse collectively”. As you suggest, the fediverse can’t say anything collectively.