I have an obnoxious problem with crawlers on my personal web site: not just the usual bandwidth they eat, but a behaviour that is absolutely next-level.
-
The exact same thing happened to me recently with tags. I now require users to log in to filter by multiple tags, and I've blocked the subnets the bots come from.
If I wanted to allow guest users to search by multiple tags, I'd probably try the following options: (1) changing it to a POST request, (2) requiring JavaScript, (3) using Anubis, or (4) looking into IP-masked rate limiting, i.e. one rate limit shared across all the IP addresses in the same block.
I wrote a blog post about my situation here https://blog.rubenwardy.com/2026/04/16/contentdb-ddos/
For your particular case, you should return a 404 if the URL contains both 2025 and 2026. This would stop them from getting into invalid combinations. You can make it so the UI never links to these combinations by *replacing* rather than appending years if one already exists.
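Roughly, as a sketch in PHP, assuming tags arrive as the comma-separated ?tag= parameter shown later in the thread and that year tags are plain four-digit numbers (tag_link is a made-up helper name):

<?php
// Sketch: refuse any page that selects two or more year tags,
// since no image can be from two years at once.
$tags = isset($_GET['tag']) ? explode(',', $_GET['tag']) : [];

$years = array_filter($tags, function ($t) {
    return preg_match('/^\d{4}$/', $t) === 1;
});

if (count($years) > 1) {
    http_response_code(404);
    die('Not found');
}

// When building tag links, replace any existing year instead of
// appending a second one, so the UI never emits the bad URLs.
function tag_link(array $current, string $new): string {
    if (preg_match('/^\d{4}$/', $new)) {
        $current = array_filter($current, function ($t) {
            return preg_match('/^\d{4}$/', $t) !== 1;
        });
    }
    $current[] = $new;
    return '?tag=' . urlencode(implode(',', array_unique($current)));
}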
-
So it loads my gallery page, and sees the list of tags: maybe 50 different links, all of which load the gallery page with a new filter applied. So it loads one, like "?tag=2026".
On the resulting page, there are still 50-odd tag links available. So it loads another one, and the URL now includes "?tag=2026%2C2025". Which is nonsense, but the page still loads.
Well, there are 0 images to show on that page, but still more tags to open! So next the bot opens "?tag=2026%2C2025%2C2024"...
🧵5/?
How many permutations of tags are there? A butttonne, and the bot will diligently check out ALL OF THEM. Thousands and thousands of page loads! And even though all of them have 0 images to display, there will still be a tag list to choose from, and it will always visually update to indicate which tags are currently selected. So the page can't just be saved in a static HTML file, and the bot isn't going to load anything from its own cache.
🧵6/?
-
@jsstaedtler an easy way to catch this is that these scrapers generally don't send Referer headers, so you can kill these by checking that a valid Referer header is present in tag search. This will have false positives for humans that try to be too smart though.
@jsstaedtler (talking from experience with my self-hosted gitweb for this, BTW)
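Something like this as a sketch in PHP, with example.com standing in for the real hostname:

<?php
// Sketch: only serve tag searches that arrive with a Referer
// from our own site; scrapers that send none get an empty 403.
// "example.com" is a placeholder for the real hostname.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

if (isset($_GET['tag']) && strpos($referer, 'https://example.com/') !== 0) {
    http_response_code(403);
    exit;
}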
-
@oblomov @jsstaedtler the referer header only exists for tracking, so many privacy-conscious people configure their browsers not to send it
the referer header should not exist in the first place
-
I'm not fundamentally opposed to web crawlers; I would actually love it if my work were more discoverable. But this is such an obnoxious situation that I'm forced to either accommodate it or protect against it.
I'm starting to think I need to test for mutually exclusive tags, and if two or more are selected, the resulting page will have no links at all except one to go back. That will deny the bots any more links to dive into.
But maybe there are better options? I'd wager this is not a novel issue...
🧵7/7
-
@jsstaedtler a dumb solution would be to tell robots not to index the page (robots meta tag) if there are any tag queries, which i assume you can do via PHP.
edit: or if you want individual tags indexed, at least reject robots for queries of more than one tag?
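as a sketch in PHP, emitted inside the page's <head>, assuming the comma-separated ?tag= parameter from earlier in the thread:

<?php
// Sketch: ask robots not to index or follow links whenever more
// than one tag is selected; single-tag pages stay indexable.
$tags = isset($_GET['tag']) ? explode(',', $_GET['tag']) : [];

if (count($tags) > 1) {
    echo '<meta name="robots" content="noindex, nofollow">';
}

nofollow is the part that matters here: noindex only keeps the page out of search results, while nofollow tells compliant crawlers not to chase the links on it, which is what actually stops the combinatorial crawl.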
-
Many crawlers ignore this in my experience, especially the AI ones
-
@jsstaedtler I can't remember - are you self-hosting or using a paid host?
@vga256 I'm sharing a paid host with a friend. Thanks to relatively low combined popularity, we can get away with a cheap plan, but I really don't want random bots to ruin that
-
To block the abusive subnets, I used this tool to look up the IP ranges from example IP addresses. You can see all the IP ranges for a particular host: https://www.whatismyip.com/asn/AS150436/
I then blocked using ipset/iptables but other options exist depending on your setup
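For reference, the ipset/iptables part looks roughly like this; 203.0.113.0/24 is a documentation placeholder, substitute the ranges you actually looked up:

# create a set of networks, fill it, and drop everything they send
ipset create badbots hash:net
ipset add badbots 203.0.113.0/24
iptables -A INPUT -m set --match-set badbots src -j DROP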
-
@rubenwardy @jsstaedtler it would at least help with the legitimate ones!
-
Ah yes, worth doing as it also improves your SEO by not having thousands of similar pages
-
@redstrate Ah, this sounds promising! I don't want to make my site invisible on the greater Web by blocking all bot crawlers, but I'd be fine with them only loading URLs with no queries/parameters (anything after a ?). I'll look into that meta tag, though I acknowledge the other reply here that bots can happily ignore that.
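For the crawlers that do behave, the same "no query strings" rule can apparently also be stated in robots.txt; the * wildcard is a de facto extension that the major search crawlers honour, not part of the original standard:

User-agent: *
Disallow: /*?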
-
@jsstaedtler This, I think, is why so many people have moved to having Cloudflare in front of their sites. To block/limit badly behaved bots.
-
@jsstaedtler ah okay. i imagine that probably limits you from making any apache/nginx configuration changes (e.g. IP blocklists)
i'm not familiar with your site generation code - but if you wrote it yourself, i *think* the trick would be to have it 404 when an incorrect tag has been used
(linked: "How to create an error 404 page using PHP?" on Stack Overflow)
at least then the script can die() instead of yielding output. it's anyone's guess if the crawler will still continue to try generating tags when it has encountered a 404, but i *assume* they're built to avoid 404s
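in PHP terms, something like this sketch (the page list is a placeholder):

<?php
// Sketch of that pattern: .htaccess rewrites /word_here to
// /page.php?name=word_here, and the script 404s and dies for
// anything not in its list of known pages.
$pages = array('gallery', 'about', 'contact');
$name  = isset($_GET['name']) ? $_GET['name'] : '';

if (!in_array($name, $pages, true)) {
    http_response_code(404);
    die();
}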
-
@jsstaedtler I've been using Iocaine, which is specifically intended to mess with AI bots, but it can also help with "normal" bots too
https://iocaine.madhouse-project.org/
of course that still eats up some of your server's power. I work for a web hosting company and frequently we'll just make a list of "bad bots" in an .htaccess file to block them. The server still has to reply to their requests but doesn't have to serve them any real data
-
@cb I also use the .htaccess method to "block" specific agents, so they simply get thousands of 0 byte responses. Whenever it's a known LLM/AI scraper, I'm happy with that solution (and IP blocking ones that don't present a unique user agent).
I've heard of Iocaine and similar tools but never looked into them, and I guess now is the time!
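A typical .htaccess block of that kind looks something like this; the user agent names are common examples, not a vetted list:

# refuse known scraper user agents with an empty 403 response
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (GPTBot|CCBot|Bytespider|ClaudeBot) [NC]
RewriteRule .* - [F]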
-
The problem is that lots of crawlers do not respect robots.txt (especially those run by "AI" companies).
Thus people go for other solutions that make crawling too expensive on the crawler's side, like iocaine - https://firesphere.dev/articles/iocaine-the-deadliest-poison-known-to-ai - or anubis - https://anubis.techaro.lol