I host a few small low-traffic websites for local interests. I do this for free - and some of them are for a friend who died last year but didn't want all his work to vanish. They don't get many views, so I was surprised when I happened to glance at munin and saw my bandwidth usage had gone up a lot.

I spent a couple of hours working to solve this and did everything wrong. But it was a useful learning experience, and I thought it might be worth sharing in case anyone else runs into something similar.

My setup is:

Cloudflare DNS -> Cloudflare Tunnel (because my residential ISP uses CGNAT) -> Haproxy (I like Haproxy; amongst other things, it alerts me when a site is down) -> separate Docker containers for each website, all on a Debian server living in my garage.
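To make the haproxy hop concrete, it's really just host-based routing to each container - a minimal sketch of the idea (the names and ports here are made up for illustration, not my real config):

    frontend fe_web
        mode http
        bind 127.0.0.1:8080                     # cloudflared points its tunnel here
        acl is_forum hdr(host) -i forum.example.org
        use_backend be_forum if is_forum

    backend be_forum
        mode http
        server forum 127.0.0.1:8081 check       # the phpBB container's published port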

From Haproxy's stats page, I was able to see which website was getting the attention. It's one running phpBB for a little forum. Tailing Apache's logs in that container quickly identified the pattern and made it easy to see what was happening.

It was seeing a lot of 404 errors for URLs, all coming from the same user agent, "claudebot". I know what you're thinking - it's an exploit-scanning bot - but a closer look showed it was trying to fetch normal forum posts, some of which had been deleted months previously, and also robots.txt. That site didn't have a robots.txt, so those requests were failing too. What was weird is that it was requesting at a rate of up to 20 URLs a second, from multiple AWS IPs - and every other request was for robots.txt. You'd think it would take the hint after a million times of asking.

Googling that UA shows that other phpBB users have encountered this quite recently - it seems to be fascinated by web forums and absolutely hammers them with the same behaviour I found.

So - clearly a broken and stupid bot rather than a specifically malicious one, right? I think so, but I host these sites on a rural consumer line, and it was affecting both system load and bandwidth.

What I did wrong:

  1. In Docker, I tried quite a few things to block the user agent, the country (US-based AWS, and this is a UK regional site) and various IPs. It took me far too long to realise why my changes to .htaccess were failing - the phpBB Docker image I use serves the site's root directory from inside the image, ignoring my mounted volume. (My own fault - it had been too long since I set it up for me to remember that only certain sub-directories were mounted in.)

  2. Figuring that out, I shelled into the container and edited that .htaccess directly, but that wouldn't have survived restarting or rebuilding the container, so it wasn't a real solution.

Whilst I was in there, I created a robots.txt file. Not surprisingly, claudebot doesn't actually honour what's in it, and still continues to request it ten times a second.
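For reference, this is the sort of thing I mean - a user-agent block in .htaccess, plus the robots.txt. Both are rough sketches rather than my exact files, and the .htaccess one assumes the image has mod_rewrite enabled (phpBB's usually does):

    # .htaccess - refuse anything identifying itself as claudebot
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} claudebot [NC]
    RewriteRule .* - [F,L]

    # robots.txt - politely asks it to go away (it didn't listen)
    User-agent: ClaudeBot
    Disallow: /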

  3. Thinking there must be another way, I switched to Haproxy. This was much easier - the documentation is very good. And it actually worked - blocking by user agent (and yep, I'm lucky this wasn't changing) worked perfectly, with roughly the rule shown below.

I then had to leave for a while and the graphs show it’s working. (Yellow above the line is requests coming into haproxy, below the line are responses).
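(For the record, the haproxy block is only a line or two of config - roughly this, with the frontend name being illustrative:)

    frontend fe_web
        # reject anything whose User-Agent contains "claudebot", case-insensitively
        http-request deny deny_status 403 if { req.hdr(User-Agent) -m sub -i claudebot }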

Great - except the blocked requests still come down the tunnel to be rejected, so I'm still seeing half of the traffic (the inbound half), and that's affecting my latency. (Some of you might doubt this, and I can tell you that you're spoiled by an excess of bandwidth…)

  4. That's when the penny dropped and the obvious occurred to me. I use Cloudflare, so use their firewall, right? No excuses - I should have gone there first. In fact, I did, but I got distracted by the many options and focused on their bot-fighting tools, which didn't work for me. (This bot somehow gets through the captcha challenge even when bot fight mode is enabled.)

But their firewall can also match on user agent. The actual fix was simply to add a rule blocking that user agent in the WAF for that domain.
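(In Cloudflare's terms that's a WAF custom rule with the action set to Block. The expression ends up as something like the line below - lower() just makes the match case-insensitive; adjust the string to whatever UA you're seeing:)

    (lower(http.user_agent) contains "claudebot")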

And voila - no more traffic through the tunnel for this very rude and stupid bot.

After 24 hours, Cloudflare has blocked almost a quarter of a million requests by claudebot to my little phpBB forum, which barely gets a single post every three months.

Moral for myself: stand back and think for a minute before rushing in and trying to fix something the wrong way. I've also taken this as an opportunity to improve haproxy's internal rate limiting (sketched below). Like most website hosts, most of my traffic is outbound, and slowing things down when it gets busy really does help.
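(For the curious, the rate limiting is just a stick-table tracking per-client request rates - a minimal sketch, with thresholds plucked out of the air; behind the tunnel the client has to be identified from Cloudflare's CF-Connecting-IP header rather than the socket's source address:)

    frontend fe_web
        stick-table type ip size 100k expire 10m store http_req_rate(10s)
        # track the real client IP that Cloudflare passes through the tunnel
        http-request track-sc0 req.hdr(CF-Connecting-IP)
        # anyone making more than 50 requests in 10 seconds gets a 429
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 50 }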

This obviously isn't a perfect solution - all claudebot has to do is change its UA, and since it comes from AWS it's pretty hard to block otherwise. One hopes it isn't truly malicious. It would be quite a lot more work to integrate Fail2ban for more bots, but it might yet come to that.

Also, if you write any kind of web bot, please consider that not everyone who hosts a website has a lot of bandwidth, and at least have enough pride to write software good enough not to keep doing the same thing every second. And, y'know, keep an eye on what your stuff is doing out on the internet - not least for your own benefit. Hopefully AWS really shafts claudebot's owners with some big bandwidth charges…

EDIT: It came back the next day with a new UA, and an email address linking it to anthropic.com - the Claude 3 AI bot - so it looks like a particularly badly written scraper gathering AI training data.

    • Deebster · 15 points · 14 days ago

      I was kinda hoping for another story about some clever compression bomb or similar to slow up the bot - after all, if it’s hammering this little site it’s surely doing the same to others, even if they haven’t noticed yet. After the robots.txt was ignored I was sure, but I guess this mature, restrained response is probably the correct one *discontentedly kicks can down sepia street*

    • @digdilemOP · 11 points · 14 days ago

      Some nice evil ideas there!

  • Daniel Quinn · 24 points · 14 days ago

    Not throwing any shade, just some advice for the future: try to always consider the problem in the context of the OSI model. Specifically, “Layer 3” (network) is always a better strategy for routing/blocking than “Layer 7” (application) if you can do it.

    Blocking traffic at the application layer means that the traffic has to be routed through (bandwidth consumption) and assembled and processed (CPU cost) before a decision can be made. You should always try to limit the stuff that makes it to layer 7 if you’re sure you won’t want it.

    The trouble with layer 3 routing of course is that you don’t have application data there. No host name, no HTTP headers, etc., just packets with a few bits of information:

    • source IP and port
    • destination IP and port
    • a few other firewall-visible bits, such as TCP flags (SYN etc.) and whether the packet is part of an established connection

    In your case though, you already knew what you didn’t want: traffic from a particular IP, and you have that at the network layer.

    At that point, you know you can block at layer 3, so the next question is how far up the chain can you block it?

    Most self-hosters will just have their machines on the open internet, so their personal firewall is all they’ve got to work with. It’s still better than letting the packets all the way through to your application, but you still have to suffer the cost of dropping each packet. Still, it’s good enough™ for most.
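    To make that concrete: dropping an unwanted source range at layer 3 on a typical Linux box is a single firewall rule, something like this (the range here is just a documentation example):

        # drop everything from this (illustrative) range before the web stack ever sees it
        iptables -I INPUT -s 203.0.113.0/24 -j DROP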

    In your case though, you had the added benefit of Cloudflare standing between you and your server, so you could move that decision-making step even further away from you, which is pretty great.

    • @xthexder@l.sw0.com · 8 points · 14 days ago

      I learned this in high school when I discovered that sending ping floods from a 1Gbit VPS to a slow residential Internet connection can take down your Internet even if the router doesn’t respond to pings. The bandwidth still all needs to make it to the router in your house to be dropped.

      • Possibly linux · 3 points · 14 days ago

        Now that’s interesting. I know that i2p can crash some cheap routers because they run out of ram. I wonder if you could do that from the outside.

    • @digdilemOP · 2 points · 13 days ago

      Yep - agree with all of that. It’s a fault of mine that I don’t always step back and look at the bigger picture first.

    • Deebster · 13 points · 14 days ago

      I feel a company that big would write a more competent bot, but I also wouldn’t be too astonished.

    • @digdilemOP · 7 points · 14 days ago

      Maybe? It feels like the kind of stupid that you really need a human to half-ass it to achieve this thoroughly though.

    • @digdilemOP · 6 points · 13 days ago

      It’s back today with a new user-agent, this time containing an email address at anthropic.com - so it looks like it’s Claude3, a scraper for an AI.

  • @Nibodhika@lemmy.world · 15 points · 13 days ago

    I know you couldn’t do that because you have data limits in the US, but my first instinct would have been to serve an Ubuntu ISO as the robots.txt (or better yet, point it at /dev/urandom) and let that bot download GBs of data to fuck with its connection/disk.

    Probably shouldn’t do that though, and blocking it on Cloudflare is the correct approach.

    • @toastal · 10 points · 13 days ago

      Letting Cloudflare centralize the internet isn’t always the solution. I’m sick of hCAPTCHAs just for living in a non-Western country.

      • @greywolf0x1 · 5 points · 13 days ago

        same here, i can’t seem to do anything on cuckflared sites because i’m on a vpn…

  • @just_another_person@lemmy.world · 14 points · 14 days ago

    Good tips for beginners who don’t stare at this stuff all day.

    One extra tip for you: just script blocking these things after they act up, but before they cost real money. You know your expected traffic patterns, so setting thresholds should be easy.

    Fail2ban is tried and true, and dead simple, or you could use something a bit fancier like crowdsec to set up chains that send IPs to block directly at the WAF. Getting some of the more popular blacklists at the edge would be a good idea as well.

    • @digdilemOP · 7 points · 14 days ago

      Fail2ban is something I’ve used for years - in fact it was working on these very sites before I decided to dockerise them - but I find it a lot less simple in this setup, for a couple of reasons:

      The logs are in the docker containers. Yes, I could get them squirting to a central logging server, but that’s a chunk of overhead for a home system. (I’ve done that before, so it is possible, just extra time.)

      And getting the real IP through from Cloudflare. Yes, CF passes headers with it in, and haproxy can forward that on as well with a bit of tweaking. But not every docker container serving webpages (notably the phpBB one) will correctly log the source IP even when it’s passed through from Haproxy as the forwarded IP - they show the IP of the proxy instead. I’ve other containers that do display it, and it can obviously be done, but I’m not clear yet why it’s inconsistent. Without that, there’s no blocking.
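      (For what it’s worth, the haproxy side of that is roughly the single line below - copying Cloudflare’s CF-Connecting-IP header into X-Forwarded-For; it’s whether the container’s Apache then logs it that’s the inconsistent bit:)

          # in the frontend: trust Cloudflare's real-client header when it's present
          http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if { req.hdr(CF-Connecting-IP) -m found }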

      And… You can use the Cloudflare API to block IPs, but there’s a fixed limit on the free accounts. When I set this up before with native webservers, using the API to block malicious URL-scanning bots, I reached that limit within a couple of days. I don’t think there’s automatic expiry, so I’d need to find or build a tool that manages the blocklist remotely. (Or use haproxy to block and accept the overhead.)
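      (Blocking via the API is just a POST to the IP Access Rules endpoint - something like the sketch below, with placeholder zone ID, token and address; the free-tier cap and the lack of automatic expiry are the catch:)

          curl -X POST "https://api.cloudflare.com/client/v4/zones/<zone_id>/firewall/access_rules/rules" \
               -H "Authorization: Bearer <api_token>" \
               -H "Content-Type: application/json" \
               --data '{"mode":"block","configuration":{"target":"ip","value":"203.0.113.4"},"notes":"abusive bot"}'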

      It’s probably where I should go next.

      And yes - you’re right about scripting. Automation is absolutely how I like to do things. But so many problems only become clear retrospectively.

      • @digdilemOP · 7 points · 14 days ago

        Doh - another example of my muddled thinking.

        Fail2ban will work directly on haproxy’s log, no need to read the web logs from containers at all. Much simpler and better.
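        (A hypothetical sketch of what that could look like - untested, and the failregex assumes haproxy’s default HTTP log format, where the client appears as ip:port right after the process name, so it would need checking against the real log:)

            # /etc/fail2ban/jail.d/haproxy-bots.local
            # ban anything making more than 100 requests in 60 seconds, for a day
            [haproxy-bots]
            enabled  = true
            port     = http,https
            logpath  = /var/log/haproxy.log
            findtime = 60
            maxretry = 100
            bantime  = 86400
            filter   = haproxy-bots

            # /etc/fail2ban/filter.d/haproxy-bots.conf
            [Definition]
            failregex = haproxy\[\d+\]: <HOST>:\d+ \[
            ignoreregex =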

        • Para_lyzed · 4 points · 13 days ago

          Yeah, I’ve always used fail2ban on the main server with my dockerized services. Works great, and requires very little work.

  • StarDreamer · 10 points · 13 days ago

    I’ve recently moved from fail2ban to crowdsec. It’s nice and modular and seems to fit your use case: set up an HTTP 404/rate-limit filter and a Cloudflare bouncer to ban the IP address at the Cloudflare level (instead of iptables). Though I’m not sure if the Cloudflare tunnel would complicate things.

    Another good thing about it is it has a crowd sourced IP reputation list. Too many blocks from other users = preemptive ban.
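    A rough idea of the setup, assuming the standard hub names (worth double-checking against the current crowdsec hub):

        # parsers/scenarios for haproxy logs plus the generic HTTP abuse scenarios
        cscli collections install crowdsecurity/haproxy
        cscli collections install crowdsecurity/base-http-scenarios
        # then install and register the Cloudflare bouncer with an API token,
        # so decisions get pushed to Cloudflare instead of local iptables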

    • @digdilemOP · 3 points · 13 days ago

      Thanks, I’ve not heard of that, it sounds like it’s worth a look.

      I don’t think the tunnel would complicate blocking via the cloudflare api, but there is a limit on the number of IPs you can ban that way, so some expiry rules are necessary.

      • StarDreamer · 3 points · 13 days ago

        Pretty sure expiry is handled by the local crowdsec daemon, so it should automatically revoke rules once a set time is reached.

        At least that’s the case with the iptables and nginx bouncers (4 hour ban for probing). I would assume that it’s the same for the cloudflare one.

        Alternatively, maybe look into running two bouncers (1 local, 1 CF)? The CF one filters out most bot traffic, and if some still get through then you block them locally?

        • @digdilemOP · 3 points · 13 days ago

          I’ve just installed crowdsec and its haproxy plugin. Documentation is pretty good. I need to look into getting it to ban the IP at Cloudflare - that would be neat.

          Annoyingly, the claudebot spammer is back again today with a new UA. I’ve emailed the address within it, politely asking them to desist - it’ll be interesting to see if there’s a reply. And yes, it is ClaudeBot - Claude 3’s AI scraper.

          UA:like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)

          • StarDreamer · 3 points · 13 days ago

            iirc the bad UA filter is bundled with either base-http-scenarios or nginx. That might help assuming they aren’t trying to mask that UA.

  • Deebster · 10 points · 14 days ago

    “Thinking there must be another way, I switched to Haproxy.”

    Hang on, weren’t you on Haproxy already? Or do you mean you switched your attention to Haproxy? (If not, what were you on before?)

    As others have said, blocking incoming stuff as high up as possible is definitely the right way, and Cloudflare is the right place for you. It’s interesting that this bot wasn’t caught by Cloudflare, I wonder who runs it.

    • @digdilemOP · 6 points · 14 days ago

      I mean - I switched my attention to Haproxy. And yes, no argument there.

    • Atemu · 2 points · 13 days ago

      While I wouldn’t put it past tech bros to use such unethical measures for their latest grift, it’s not a given that it’s actually claudebot. Anyone can claim to be claudebot, googlebot, boredsquirrelbot or anything else. In fact it could very well be a competitor aiming to harm Claude’s reputation.