Is anyone surprised Google doesn’t even try to fix issues that are damaging its users?

  • @ndarwincorn
    link
    4
    2 years ago

    Google refuses to respect a robots.txt here; it seems awfully naive to assume they will respect a 503 response or a Retry-After header.

    Similarly naive to assume there’s no DDoS mitigation in front of SourceHut, given that Drew explained why he allows the proxy traffic through unabated.

    To then take that naive assumption and leap from it to conclusions about the production readiness of alpha software is some wild FUD.

    • @X_Cli
      link
      2
      edit-2
      2 years ago

      I don’t think that a robots.txt file is the appropriate tool here.

      First off, robots.txt files are just hints for respectful crawlers. Go proxies are not crawlers; they are just that: caching proxies for Go modules. If all Go developers were to use direct mode, I think SourceHut’s traffic would be higher, not lower.

      Second, let’s assume Go devs were willing to implement something to be mindful of robots.txt or Retry-After indications. Would attackers do the same? Of course not.

      If legitimate, albeit quite aggressive, traffic is DDoSing SourceHut, that is primarily a SourceHut issue. Returning a 503 does not have to be respected by the client, because the client has nothing to respect: the server simply chose to say “I don’t want to answer that request. Goodbye.” That is certainly not a costly response to generate. Now, if the server tries to honor all requests and is poorly optimized, then the fault is on the server, not the client.
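      To make the “a 503 is cheap” point concrete, here is a minimal Go sketch of load shedding: a hypothetical middleware (the `shed` wrapper, the in-flight counter, and the 60-second Retry-After value are all illustrative assumptions, not SourceHut’s actual setup) that answers 503 with a Retry-After hint instead of doing any real work once the server is saturated.

      ```go
      package main

      import (
      	"fmt"
      	"net/http"
      	"net/http/httptest"
      	"sync/atomic"
      )

      // shed is a hypothetical load-shedding wrapper: once more than
      // maxInFlight requests are being handled, further requests get an
      // immediate, cheap 503 with a Retry-After hint instead of real work.
      func shed(next http.Handler, maxInFlight int64) http.Handler {
      	var inFlight int64
      	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      		if atomic.AddInt64(&inFlight, 1) > maxInFlight {
      			atomic.AddInt64(&inFlight, -1)
      			w.Header().Set("Retry-After", "60") // seconds; illustrative value
      			http.Error(w, "try again later", http.StatusServiceUnavailable)
      			return
      		}
      		defer atomic.AddInt64(&inFlight, -1)
      		next.ServeHTTP(w, r)
      	})
      }

      func main() {
      	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      		w.Write([]byte("module data"))
      	})
      	// Capacity 0 simulates a fully saturated server: every request is shed.
      	h := shed(backend, 0)
      	rec := httptest.NewRecorder()
      	h.ServeHTTP(rec, httptest.NewRequest("GET", "/", nil))
      	fmt.Println(rec.Code, rec.Header().Get("Retry-After")) // prints "503 60"
      }
      ```

      Note that the shed path does no parsing, no database work, and no Git operations, which is the whole argument: generating the refusal costs almost nothing compared with honoring the request.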

      To be truthful, I have not read the Go proxy implementation in detail. I don’t know how it would react if SourceHut answered with a 503 status code every now and then, when the fetching strategy is too aggressive. I would simply guess that the proxy would retry later and serve Go developers a stale version of the module.
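      That guessed behavior could be sketched like this in Go. To be clear, this is not the real Go proxy’s code: `fetchModule`, the plain map used as a cache, and the simulated origin are all assumptions made up for illustration; only the HTTP semantics (503, Retry-After in seconds) come from the discussion above.

      ```go
      package main

      import (
      	"fmt"
      	"io"
      	"net/http"
      	"net/http/httptest"
      	"strconv"
      	"time"
      )

      // fetchModule is a hypothetical caching-proxy fetch: on a 503 from the
      // origin it honors a Retry-After hint (seconds form) and falls back to a
      // stale cached copy instead of hammering the origin again.
      func fetchModule(client *http.Client, url string, cache map[string][]byte) ([]byte, error) {
      	resp, err := client.Get(url)
      	if err != nil {
      		return nil, err
      	}
      	defer resp.Body.Close()

      	if resp.StatusCode == http.StatusServiceUnavailable {
      		// Honor the Retry-After delay before any future origin fetch.
      		if secs, err := strconv.Atoi(resp.Header.Get("Retry-After")); err == nil {
      			fmt.Printf("backing off %v before the next origin fetch\n", time.Duration(secs)*time.Second)
      		}
      		if stale, ok := cache[url]; ok {
      			return stale, nil // serve the stale version rather than retrying now
      		}
      		return nil, fmt.Errorf("origin overloaded and nothing cached")
      	}

      	body, err := io.ReadAll(resp.Body)
      	if err != nil {
      		return nil, err
      	}
      	cache[url] = body // refresh the cache on success
      	return body, nil
      }

      func main() {
      	// Simulated overloaded origin: always answers 503 with Retry-After.
      	origin := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      		w.Header().Set("Retry-After", "30")
      		w.WriteHeader(http.StatusServiceUnavailable)
      	}))
      	defer origin.Close()

      	cache := map[string][]byte{origin.URL: []byte("v1.2.3 (stale)")}
      	data, err := fetchModule(origin.Client(), origin.URL, cache)
      	fmt.Println(string(data), err)
      }
      ```

      Under this sketch the Go developer still gets a usable (if stale) module, and the origin sees no retry until the hinted delay has passed, which is the graceful outcome the comment speculates about.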