We have paused all crawling as of Feb 6th, 2025, until we implement robots.txt support. Stats will not update during this period.

  • corsicanguppy@lemmy.ca · 20 hours ago

    stoped

    Well, they needed to stope. Stope, I said. Lest thy carriage spede into the crosseth-rhodes.

  • Semi-Hemi-Lemmygod@lemmy.world · 1 day ago

    Robots.txt is a lot like email in that it was built for a far simpler time.

    It would be better if the server could detect bots and send them down a rabbit hole rather than trusting randos to abide by the rules.
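
    The "rabbit hole" idea is a known pattern: list a trap path in robots.txt, then treat any client that fetches it anyway as a bot. A minimal sketch, assuming a hypothetical Flask app (the route names and ban logic are invented for illustration):

    ```python
    # Hypothetical sketch: advertise a trap path in robots.txt, then treat
    # any client that fetches it anyway as a bot and stop serving it.
    from flask import Flask, Response, request

    app = Flask(__name__)
    flagged = set()  # IPs that ignored robots.txt

    @app.route("/robots.txt")
    def robots():
        # Well-behaved crawlers read this and never touch /trap/.
        return Response("User-agent: *\nDisallow: /trap/\n", mimetype="text/plain")

    @app.route("/trap/<path:rest>")
    def trap(rest):
        # Only clients ignoring robots.txt land here; remember them and
        # hand back another link leading deeper into the maze.
        flagged.add(request.remote_addr)
        return f'<a href="/trap/{rest}/deeper">next</a>'

    @app.before_request
    def refuse_flagged():
        # Flagged clients get nothing but the trap from now on.
        if request.remote_addr in flagged and not request.path.startswith("/trap/"):
            return Response("Forbidden", status=403)
    ```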

    • jagged_circle@feddit.nl · 6 hours ago

      It is not possible to reliably detect bots. Attempting to do so will invariably lead to false positives, denying access to your content for what is usually the most at-risk and marginalized folks.

      Just implement a cache and forget about it. If read-only content is causing you too much load, you’re doing something terribly wrong.
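
      The "just cache it" suggestion can be as small as this, a minimal sketch with made-up names (`build_stats_page` is hypothetical):

      ```python
      # Minimal TTL cache for expensive read-only responses: repeated hits,
      # bot or human, are served from memory and recomputed at most once per TTL.
      import time

      _cache = {}          # key -> (expires_at, value)
      TTL_SECONDS = 300    # serve cached copies for five minutes

      def cached(key, compute):
          """Return the cached value for `key`, recomputing it when stale."""
          now = time.monotonic()
          entry = _cache.get(key)
          if entry and entry[0] > now:
              return entry[1]
          value = compute()
          _cache[key] = (now + TTL_SECONDS, value)
          return value

      # Usage, with a hypothetical page builder:
      # page = cached("stats", build_stats_page)
      ```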

    • poVoq@slrpnk.net · 1 day ago

      Because of AI bots ignoring robots.txt (especially when you don’t explicitly mention their user-agent and instead use a * wildcard), more and more people are implementing exactly that, and I wouldn’t be surprised if that is what triggered the need to implement robots.txt support for FediDB.
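
      For reference, honouring robots.txt (wildcard rules included) takes very little code on the crawler’s side. A sketch using Python’s standard-library robotparser; the instance URL and user-agent string are placeholders:

      ```python
      # A well-behaved crawler checks robots.txt before fetching anything.
      from urllib.robotparser import RobotFileParser

      rp = RobotFileParser()
      rp.set_url("https://example-instance.social/robots.txt")
      rp.read()

      # A "User-agent: *" rule applies to any agent not listed explicitly,
      # so a wildcard Disallow covers crawlers the admin never named.
      if rp.can_fetch("ExampleCrawler", "https://example-instance.social/nodeinfo/2.0"):
          print("allowed to fetch")
      else:
          print("disallowed by robots.txt")
      ```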

    • mesamune@lemmy.world (OP) · 1 day ago

      This looks more accurate than FediDB TBH. The initial surge from Reddit back in 2023. The slow fall of active members. I personally think the reason the number of users drops so much is that certain instances turn off the ability for outside crawlers to get their user info.

    • mesamune@lemmy.world (OP) · 1 day ago

      No idea honestly. If anyone knows, let us know! I don’t think it’s necessarily a bad thing. If their crawler was being too aggressive, it could accidentally DDoS smaller servers. I’m hoping that’s what they’re doing: respecting the robots.txt that some sites have.

      • Ada@lemmy.blahaj.zone · 1 day ago

        GoToSocial has a setting in development that is designed to baffle bots that don’t respect robots.txt. FediDB didn’t know about that feature and thought GoToSocial was trying to inflate its stats.

        In the arguments that went back and forth between the devs of the apps involved, it turned out that FediDB was ignoring robots.txt, i.e. it was badly behaved.

          • Pika@sh.itjust.works · 6 hours ago

            Might be related to the issue linked here.

            It was a good read. Personally speaking, I think it would have been better to just block GoToSocial (if that’s possible, since it seems stuff gets blocked when you check it) until proper robots.txt support was provided. I found it weird that they paused the entire system.

            That being said, if I understand the issue correctly, I take the stance that it is GoToSocial that is misbehaving. They are poisoning data sets that are required for any type of federation to occur (nodeinfo, v1 and v2 statistics), on the grounds that the program in question is not respecting the robots file, arguing that this prevents crawlers when it’s clear that more than just crawlers are being hit.

            IMO this looks bad; it defo puts a bad taste in my mouth regarding the project. I’m not saying an operator shouldn’t have to listen to a robots.txt, but when you implement a system that negatively hits third parties, the response shouldn’t be the equivalent of “sucks to suck, that’s a you problem”. Your implementation should respond with either zero or null; any other value and you are just being abusive and hostile as a program.

            • mesamune@lemmy.world (OP) · 5 hours ago

              Thank you for providing the link. I’m actually on GoToSocial’s side on this particular one, but I wish both sides had communicated a bit more before this got rolled out.

      • hendrik@palaver.p3x.de · 1 day ago

        I think it’s just one HTTP request to the nodeinfo API endpoint once a day or so. Can’t really be an issue regarding load on the instances.
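
        For the curious, roughly what that single request looks like, sketched with the standard library only (the host below is a placeholder). NodeInfo is discovered via /.well-known/nodeinfo, which links to the actual stats document:

        ```python
        # Fetch a Fediverse instance's NodeInfo: one discovery request,
        # one request for the stats document it points at.
        import json
        import urllib.request

        def fetch_nodeinfo(host):
            with urllib.request.urlopen(f"https://{host}/.well-known/nodeinfo") as r:
                links = json.load(r)["links"]
            # Follow the last advertised link (usually the newest schema).
            with urllib.request.urlopen(links[-1]["href"]) as r:
                return json.load(r)

        info = fetch_nodeinfo("example-instance.social")
        print(info.get("software"), info.get("usage", {}).get("users"))
        ```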

          • hendrik@palaver.p3x.de · 1 day ago

            True. Question here is: if you run a federated service… Is that enough to assume you consent to federation? I’d say yes. And those Mastodon crawlers and statistics pages are part of the broader ecosystem of the Fediverse. But yeah, we can disagree here. It’s now going to get solved technically.

            I still wonder what these mentioned scrapers and crawlers do. And the reasoning of people who want to be part of the Fediverse, but at the same time not be a public part of the Fediverse in another sense… But I guess they do other things on GoToSocial than I do here on Lemmy.

            • JustAnotherKay@lemmy.world · 16 hours ago

              if you run a federated service… Is that enough to assume you consent

              If she says yes to the marriage that doesn’t mean she permanently says yes to sex. I can run a fully air gapped “federated” instance if I want to

              • hendrik@palaver.p3x.de · 12 hours ago

                Hmmh, I don’t think we’ll come to an agreement here. I think marriage is a good example, since it comes with lots of implicit consent. First of all, you expect to move in together after you get engaged. You do small things like expect to eat dinner together. It’s not a question anymore whether everyone cooks their own meal each day. And it extends to big things. Most people expect one party to care for the other once they’re old. And stuff like that. And yeah, intimacy isn’t granted. There is a protocol to it. But I’m way more comfortable making a move on my partner than, for example, placing my hands on a stranger on the bus and seeing if they take my invitation…

                Isn’t that how it works? I mean going with your analogy… Sure, you can marry someone and never touch each other or move in together. But that’s kind of a weird one, in my opinion. Of course you should be able to do that. But it might require some more explicit agreement than going the default route. And I think that’s what happened here. Assumptions have been made, those turned out to be wrong and now people need to find a way to deal with it so everyone’s needs are met…

                I just can’t relate. Doesn’t being in a relationship change things? It sure did for me. And I surely act differently around my partner, than I do around strangers. And I’m pretty sure that’s how most people handle it. And I don’t even think this is the main problem in this case.

                • JustAnotherKay@lemmy.world · 10 hours ago

                  Going by your example

                  Air gapping my service is the agreement you’re talking about in this analogy, but otherwise I do actually agree with you. There is a lot of implied consent, but I think we have a near-miss misunderstanding on one part.

                  In this scenario (analogies are nice but let’s get to reality) crawling the website to check the MAU, as harmless as it is, is still adding load to the server. A tiny amount, sure, but if you’re going to increase my workload by even 1% I wanna know beforehand. Thus, I put things on my website that say “don’t increase my workload” like robots.txt and whatnot.

                  Other people aren’t this concerned with their workload, in which case it might be fine to go with implied consent. However, it’s always best to follow best practices and just check with the owner of a server that it’s okay to do anything to their server, IMO.

            • WhoLooksHere@lemmy.world · 1 day ago

              Why invent implied consent when explicit consent has been the standard in robots.txt for ages now?

              Legally speaking there’s nothing they can do. But this is about consent, not legality. So why use implied?

              • hendrik@palaver.p3x.de · 1 day ago

                I guess because it’s in the specification? Or absent from it? But I’m not sure. Reading the ActivityPub specification is complicated, because you also need to read ActivityStreams and lots of other references. And I frequently miss stuff that is somehow in there.

                But generally we aren’t Reddit, where someone just says: no, we prohibit third-party use and everyone needs to use our app by our standards. The whole point of the Fediverse and ActivityPub is to interconnect. And to connect people across platforms. And it doesn’t even make lots of assumptions. The developers aren’t forced to implement a Facebook clone. Or do something like Mastodon or GoToSocial does or likes. They’re relatively free to come up with new ideas and adapt things to their liking and use cases. That’s what makes us great and diverse.

                I -personally- see a public API endpoint as an invitation to use it. And that’s kind of opposed to the consent thing. But I mean, why publish something in the first place, unless it comes with consent?

                But with that said… We need some consensus in some areas. There are use cases where things aren’t obvious from the start. I’m just sad that everyone is so agitated and seems to just escalate. I’m not sure if they tried talking to each other nicely. I suppose it’s not a big deal to just implement the robots.txt and everyone can be happy, without it needing some drama to get there.

                • WhoLooksHere@lemmy.world · 1 day ago

                  Robots.txt started in 1994.

                  It’s been a consensus for decades.

                  Why throw it out and replace it with implied consent to scrape?

                  That’s why I said legally there’s nothing they can do. If people want to scrape it they can and will.

                  This is strictly about consent. Just because you can doesn’t mean you should, yes?

                  I guess I haven’t read a convincing argument yet why robots.txt should be ignored.

                • jmcs@discuss.tchncs.de · 1 day ago

                  You can consent to a federation interface without consenting to having a bot crawl all your endpoints.

                  Just because something is available on the internet it doesn’t mean all uses are legitimate - this is effectively the same problem as AI training with stolen content.

      • Rimu@piefed.social · 1 day ago

        Maybe the definition of the term “crawler” has changed but crawling used to mean downloading a web page, parsing the links and then downloading all those links, parsing those pages, etc etc until the whole site has been downloaded. If there were links going to other sites found in that corpus then the same process repeats for those. Obviously this could cause heavy load, hence robots.txt.
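
        Sketched in code, crawling in that traditional sense looks something like this (deliberately naive, with no politeness delay and no robots.txt check, which is exactly why robots.txt exists):

        ```python
        # Classic crawling: fetch a page, collect its links, repeat until
        # the frontier is empty or a limit is reached.
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        import urllib.request

        class LinkParser(HTMLParser):
            """Collects the href of every <a> tag on a page."""
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self.links.extend(v for k, v in attrs if k == "href" and v)

        def crawl(start, limit=50):
            seen, frontier = set(), [start]
            while frontier and len(seen) < limit:
                url = frontier.pop()
                if url in seen:
                    continue
                seen.add(url)
                try:
                    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
                except (OSError, ValueError):
                    continue  # unreachable page or non-HTTP link
                parser = LinkParser()
                parser.feed(html)
                frontier.extend(urljoin(url, link) for link in parser.links)
            return seen
        ```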

        FediDB isn’t doing anything like that, so I’m a bit bemused by this whole thing.