• noodle@feddit.uk · ↑30 ↓2 · 1 year ago

    Almost certainly this has nothing to do with scraping. As with Reddit, those with a stake in Twitter stand to benefit from AI, and, as far as I know, there’s no mass reposting (retweeting?) effort to something like Mastodon.

    That would be trivial to block anyway, since it would be easy to identify the service accounts and source IPs behind the requests. There’d be no need to impact average users.

    What’s more likely is he hasn’t paid the bill for his cloud infrastructure and no longer has the capacity to serve so many users.

    IMO, that’s what you get when you fire half of your staff.

    • oatscoop@midwest.social · ↑8 · 1 year ago

      IMO, that’s what you get when you fire half of your staff.

      And pander to extremists, drive advertisers away, refuse to pay your bills, etc.

    • Stallone@lemmy.world · ↑7 ↓3 · 1 year ago

      I’m not so sure. There are a lot of businesses and people training their AI models right now, and sites like Reddit or Twitter are very attractive as huge collections of user-generated content. It’s not the most outrageous assumption that they’ll try to get that data for free by scraping instead of paying for API access.

      • Veddit@lemmy.world · ↑12 · 1 year ago

        But also, hasn’t that boat already sailed for several AI companies? They’ve already trained their models, so there’s no need to scrape again; they can keep using what they got last time for their core training, and it’s only the last couple of years or months they’re missing.

      • sergih123@eslemmy.es · ↑10 · 1 year ago

        I don’t think, however, that it’s that hard to differentiate an AI scraper from an actual user, since AI scrapers would be pulling huge amounts of data, which the average user doesn’t do. Correct me if I’m wrong. What do you think?

        • noodle@feddit.uk · ↑6 · edited · 1 year ago

          No, you’re correct. Service accounts can consume data far faster than a human user ever could. A smart business always implements rate limits, or you could bankrupt them with a simple curl command. They could bankrupt themselves in testing with a simple loop!
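
          Just to illustrate the idea, here’s a minimal sketch of a per-credential rate limit (a token bucket). Everything in it, names and limits included, is made up for the example; it’s not anything Twitter actually runs.

          ```python
          import time

          class TokenBucket:
              """Token-bucket rate limiter, one bucket per API credential."""

              def __init__(self, rate_per_sec: float, burst: int) -> None:
                  self.rate = rate_per_sec   # tokens refilled per second
                  self.capacity = burst      # maximum burst size
                  self.tokens = float(burst)
                  self.last = time.monotonic()

              def allow(self) -> bool:
                  now = time.monotonic()
                  # Refill for the elapsed time, capped at the bucket capacity.
                  self.tokens = min(self.capacity,
                                    self.tokens + (now - self.last) * self.rate)
                  self.last = now
                  if self.tokens >= 1:
                      self.tokens -= 1
                      return True
                  return False               # over the limit: reject, don't serve

          # One bucket per credential: a runaway curl loop burns through its
          # burst allowance and then gets refused instead of running up a bill.
          buckets: dict[str, TokenBucket] = {}

          def check_request(api_key: str) -> bool:
              bucket = buckets.setdefault(api_key, TokenBucket(rate_per_sec=5, burst=100))
              return bucket.allow()
          ```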

          This can be fixed in many ways, not just by putting limits on credentials but also on source addresses. If a certain address or range of addresses seems to be running multiple service accounts and pulling huge amounts of data, you can deny requests from those IPs.
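
          And purely as a sketch of the source-address side, assuming a plain deny list of CIDR ranges (the example ranges below are made up, taken from the reserved documentation blocks):

          ```python
          import ipaddress

          # Hypothetical ranges flagged for running many service accounts at once.
          DENIED_RANGES = [
              ipaddress.ip_network("192.0.2.0/24"),
              ipaddress.ip_network("198.51.100.0/24"),
          ]

          def is_blocked(source_ip: str) -> bool:
              """True if the request's source address falls inside a denied range."""
              addr = ipaddress.ip_address(source_ip)
              return any(addr in net for net in DENIED_RANGES)

          # e.g. refuse the request before doing any real work:
          # if is_blocked(client_ip):
          #     reject_with_403()
          ```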

          In short, this AI angle smells like BS to save face. Musk effectively fired the SRE team who looked after critical infrastructure. It was their job to ensure service reliability, so it should not be a surprise that Twitter now has issues with service reliability.

          • Billiam@lemmy.world · ↑3 · 1 year ago

            They could bankrupt themselves in testing with a simple loop!

            You mean exactly like what Twitter did this past weekend?

    • Bilbo@vlemmy.net · ↑3 ↓1 · 1 year ago

      That would be trivial to block anyway

      …is just ridiculously false. If you think it’s true, build a service that does this trivial thing for people and become a millionaire overnight.

      • noodle@feddit.uk · ↑2 · 1 year ago

        Funnily enough, I do. I’m an SRE myself.

        Services like Akamai have tools that are literally designed to block requests from known bad locations and IP ranges.