Wanting to profit from AI companies' hunt for training data (over and above the community that created that data) is a big part of what created the context for the recent migration away from Reddit. How will the fediverse approach this problem?

  • key@lemmy.keychat.org · 14 points · 1 year ago

    By not spitting into the wind. It's infeasible to try to prevent all web scraping from every possible IP, which is what you would need to do. Reddit just used the media attention around the topic as a justification; they're not doing anything real.

    • sachasage@lemmy.world (OP) · 2 points · 1 year ago

      Fair, but then there’s a line between scraping through ordinary traffic and using API access to gather large data sets.

      • key@lemmy.keychat.org · 3 points (1 downvote) · 1 year ago

        Is there? The effect is the same: use machine learning to parse HTML generically and throw hardware and a pool of IPs at it (roughly the approach sketched below). That's a lot more efficient than coding an API client for every service out there, and it's the same approach search engines use.

        I don’t see anything being done effectively without legal protections.
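
        As a rough illustration of that approach, here's a minimal sketch assuming Python with the requests and beautifulsoup4 libraries; the proxy URLs and target page are hypothetical placeholders, the "machine learning" step is reduced to dumb text extraction, and a real crawler would add politeness, retries, and storage:

        ```python
        # Sketch of a generic, API-free scraper: rotate through a pool of proxy IPs
        # and pull the visible text out of whatever public HTML comes back.
        # Requires: pip install requests beautifulsoup4
        import itertools

        import requests
        from bs4 import BeautifulSoup

        # Hypothetical proxy endpoints standing in for "a pool of IPs".
        PROXY_POOL = itertools.cycle([
            "http://proxy-1.example:8080",
            "http://proxy-2.example:8080",
        ])

        def fetch_text(url: str) -> str:
            proxy = next(PROXY_POOL)
            resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
            resp.raise_for_status()
            soup = BeautifulSoup(resp.text, "html.parser")
            # Drop script/style tags, then collapse the remaining visible text.
            for tag in soup(["script", "style"]):
                tag.decompose()
            return " ".join(soup.get_text(separator=" ").split())

        if __name__ == "__main__":
            print(fetch_text("https://example.com/")[:500])  # placeholder target
        ```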

  • fubo@lemmy.world · 9 points · 1 year ago (edited)

    Is your web site indexable by search engines?

    The way that works is they make a complete copy of all the public content on the site — anything that a non-logged-in user can see — and then use that for indexing. Googlebot, BaiduSpider, Bingbot, DuckDuckBot, etc. simply copy the public data from your site onto those companies' own servers (a minimal sketch of such a fetch follows this comment).

    Once they’ve done that, they can do anything with that data, without further interaction with your site.

    That includes using it for ML/AI training.

    You cannot technologically prevent that without becoming invisible to search engine indexing. That means not being public on the web.

    Your choice. You can’t both be public and not public. You can’t be both indexable and not indexable.

    Public federation requires being public. Which thereby requires being indexable, which thereby means everything written here can be ingested into training pipelines.

    That’s simply true. It’s not good or bad; it’s just true. Your alternative is to not post your words on the public web.
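
    To make that concrete, a crawler's fetch is just an ordinary HTTP request for the logged-out view of a page. A minimal sketch in Python using the requests library; the URL and bot identifier are placeholders, not any real crawler's:

    ```python
    # A search-engine-style fetch is an HTTP GET for the public page,
    # distinguishable from a browser mainly by its User-Agent header.
    import requests

    resp = requests.get(
        "https://example-instance.social/post/1",  # placeholder URL
        headers={"User-Agent": "ExampleBot/1.0 (+https://example.com/bot-info)"},
        timeout=10,
    )
    resp.raise_for_status()
    public_html = resp.text  # the same HTML any logged-out visitor receives
    # From here, indexing, archiving, or ML training needs no further
    # interaction with the site.
    ```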

  • bulwark@lemmy.world · 3 points · 1 year ago

    I think it’s inevitable. They’re going to scrape every shred of data they can access. I would prefer to interact with real people tho.

      • Rikudou_Sage@lemmings.world · 2 points · 1 year ago

        Iptables is a very powerful (and complicated) firewall for Linux (and possibly other systems?). You can block on a crazy number of rules (and even more with some optional modules). Blocking specific sets of IPs is one possible solution, yes. For example:
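
        A minimal sketch of that kind of blocking, assuming a Linux host with iptables installed and root access; the CIDR ranges below are documentation placeholders, not real crawler addresses:

        ```python
        # Insert DROP rules for a list of source IP ranges into the INPUT chain.
        import subprocess

        BLOCKED_RANGES = ["203.0.113.0/24", "198.51.100.0/24"]  # placeholder ranges

        def block_range(cidr: str) -> None:
            # -I INPUT puts the rule at the top of the chain, -s matches the source
            # address range, and -j DROP silently discards matching packets.
            subprocess.run(
                ["iptables", "-I", "INPUT", "-s", cidr, "-j", "DROP"],
                check=True,
            )

        if __name__ == "__main__":
            for cidr in BLOCKED_RANGES:
                block_range(cidr)
        ```

        For large or frequently changing lists, putting the ranges in an ipset and matching that set with a single iptables rule scales better than one rule per range.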

        • sachasage@lemmy.world (OP) · 3 points · 1 year ago

          Seems like it would quickly become a bit of an arms race of measures and countermeasures unless some legislation went into effect.

          • rolaulten@lemmy.world · 2 points · 1 year ago

            The complexity of iptables should not be understated. IMO it should be treated as a firewall of last resort, because there is a very high chance the next person to maintain the system after you will not understand all of your implementation.