Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:
-
Ye gads, it's gone absolutely silly.
I spent a good part of my morning trying to work out whether it was a veiled DoS or actual harvesting, all while keeping the thing up. Status codes are good: 96.5% are real page reads from the usual AI crawler suspects.
A big network in Singapore with a "www.google.com" (but not GoogleBot) User-Agent string is responsible for some of it. But the rest is just frantic feeding.
The server is running hot. To keep it up I'm having to further tune rate limiting, bursts, etc.
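The rate-limiting and burst tuning mentioned above follows the usual token-bucket idea: requests draw tokens from a bucket that refills at a steady rate, with a burst allowance banked on top. A minimal illustrative sketch in Python — the class and parameters here are mine, not the project's actual config:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens/sec refill,
    up to `burst` tokens banked for short spikes."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Tuning "bursts" then means raising `burst` so legitimate spikes pass, while `rate` caps the sustained feeding.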
I've added these kindly donated new domains to the ridiculous landing page at https://scienceispoetry.net/
- poesie.kornshell.xyz
- whatthefuckisgoingonwithmyhorroscope.today
- poetry.danielarmengol.com
- poetry.usolab.com
- poetry.pinchito.com
- poetry.interactionphilia.com
-
@malte I'm using `rhit` from Dystroy. The stock amd64 binary doesn't come with a SHA sum, so you may want to build from their repo, or download and inspect it somewhere safe first.
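Inspecting a downloaded binary before trusting it boils down to computing a checksum and comparing it against one obtained from a trusted channel. A small sketch of the checksum side in Python — the helper name is mine, not part of rhit:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks
    so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

You'd compare the result against a digest published (or computed) from a source you trust.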
@JulianOliver ah yes! dystroy's tools are so cool!!!

(i forgot about rhit because it's not in the debian repos, yet)
-
I've done the log analysis, and the two biggest contributors that brought the AI crawler hits up to 2 million in a day, a 4x increase on a week prior, are ByteSpider (Singapore networks) and especially AppleBot (used for Siri and other Apple products).
The parasites.txt is now >4500 lines long:
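A breakdown like the one above can be pulled from a standard access log by tallying user-agent strings. A minimal Python sketch, assuming the nginx/Apache "combined" format where the user agent is the last double-quoted field on each line — this is illustrative, not the project's actual tooling:

```python
import re
from collections import Counter

# In the "combined" log format, the user agent is the last
# double-quoted field on each line.
UA_RE = re.compile(r'"([^"]*)"\s*$')

def tally_user_agents(lines):
    """Count requests per user-agent string across an access log."""
    counts = Counter()
    for line in lines:
        m = UA_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Sorting `counts.most_common()` then surfaces the ByteSpiders and AppleBots at the top.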
-
@JulianOliver For me, this text suggests the informational equivalent of "window"—fluttering strips of reflective chaff, intended to attract attention and confuse.
-
@JulianOliver Interesting. I did not expect Apple to start showing up in this rogues gallery.
-
@gregsted They are throwing a lot of cycles at it, a swarm of ~2000 individual endpoints. Nearly 2 days of furious feeding now. I'm very surprised. I don't know why so much is being spent on this content, or why there is no human oversight to notice and just pull the plug on their end.
-
@JulianOliver Why are they feeding so aggressively? There isn't that much for them to feed on, is there?
-
@Feral_3D It's endless and randomly generated, so they just keep going as long as there is an unread page, which is technically forever.
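One way to get that "technically forever" property is to derive each page deterministically from its URL, so a page is stable on revisit yet always links out to fresh, never-before-requested paths. A hypothetical sketch of such a generator — not the project's actual one, and the word list is invented:

```python
import hashlib
import random

WORDS = ["quartz", "entropy", "moth", "tide", "lattice",
         "ember", "sonnet", "drift", "glass", "meridian"]

def tarpit_page(path: str, n_lines: int = 8, n_links: int = 5) -> str:
    """Deterministically generate a nonsense page for `path`, with links
    to further randomly named pages, so every page leads to unread ones."""
    # Seed the RNG from the path: same URL -> same page, forever.
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    lines = [" ".join(rng.choice(WORDS) for _ in range(6)) for _ in range(n_lines)]
    links = ["/%s/%d" % (rng.choice(WORDS), rng.getrandbits(32)) for _ in range(n_links)]
    body = "\n".join("<p>%s</p>" % ln for ln in lines)
    nav = "\n".join('<a href="%s">%s</a>' % (h, h) for h in links)
    return "<html><body>%s\n%s</body></html>" % (body, nav)
```

Because every page spawns several unseen links, the frontier of "unread pages" only ever grows from the crawler's point of view.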
-
Do you have an unused domain that you would be happy to donate to a counter-offensive against unchecked & unregulated AI crawlers that scrape human-made content to simulate & deceive for profit?
If so, pls reply to this post. Your domain would become an entrypoint to the AI tarpit & Poison-as-a-Service project below, allowing the concerned public to choose to use it on their sites and helping make the project more resilient to blacklisting.
@JulianOliver can offer subdomains on aidirtylist.info
-
The page may grow a bit. Just wanted to get it out the door.
@JulianOliver Oh, that's nice!
-
@JulianOliver Wait, they are still this dumb? Don't get me wrong, I like the idea of your project. But I'd expect it to be detected and ignored –* at least by the bigger players. Especially with other projects like this (e.g. Nepenthes) being out for a while already.
Or maybe the detection happens once the content has been parsed? Can you see how many pages deep an individual crawler goes?
* yes, a handmade emdash.
@bastelwombat @JulianOliver They still flock to the iocaine instance on my server, they've been at it for some months...
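One rough way to answer the "how many pages deep" question from logs alone is to treat path-segment count as depth and track the maximum per client. A small illustrative sketch, assuming you've already parsed `(client, path)` pairs out of the access log:

```python
from urllib.parse import urlparse

def max_depth_per_client(records):
    """Given (client, path) pairs from an access log, return the deepest
    path (by segment count) each client requested -- a rough proxy for
    how far into the link maze a crawler went."""
    depths = {}
    for client, path in records:
        # Count non-empty path segments: "/a/b/c" -> 3, "/" -> 0.
        depth = len([s for s in urlparse(path).path.split("/") if s])
        depths[client] = max(depth, depths.get(client, 0))
    return depths
```

Keying on user agent (or ASN) instead of IP would group a swarm of endpoints into one "crawler" for this measure.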
-
@JulianOliver oh so it's like some kind of nonsense gravity well. Very cool.
-
@Numerfolt @bastelwombat That's great. Iocaine should have better retention than Nepenthes which drip feeds. Claude and gptbot soon tapered down & dropped off. Overall I had poor retention with the Markov pattern.
Do you have any stats as to hits, crawlers, endpoints? I'm eager to learn of other experiences.
I personally was not prepared for how obscenely hungry these things are. Today alone with my tarpit it surpassed 2.5M hits, 83GB of traffic, across these endpoints: https://scienceispoetry.net/files/parasites.txt
-
@JulianOliver @bastelwombat Unfortunately I have no clue about how iocaine actually works under the hood, nor how many bots I actually trap.
I have a dashboard in grafana that uses some stats, but I don't know how to properly read out the logfiles for endpoints, agents or retention. Would love to be able to do that tho...
I only have one endpoint and iirc it's only some megabytes of traffic per week. But I can look that up at least

-
@Numerfolt @bastelwombat Well, either way, you are clearly doing good work!
-
@JulianOliver "parasites" is a great name for this
@netopwibby @JulianOliver just joking/inspired here, but "parasitoids" would be more telling, if applied to the AI training companies:
while a parasite has a vested interest in the survival of the host, parasitoids just use the host/prey for one of their life phases, killing the host and moving out.
As AI is being embedded (with little or no possibility to opt out) in all digital interactions, the open web can be bot-swarm-scraped to death to move to the next stage of exploitation, with direct feed from apps, wearables, and appliances.
P.S. THANKS for fighting back, and THANKS for involving others in the fight!
-
@JulianOliver hey, i've got a few domains. i see the info in other replies, so i'm happy to just hook them up.
let's start with:
science.commune.tel
scicene.emenel.ca
-
Even faster now.
Again, these pages are randomly generated, and each line is a page request from a crawler.
To think of the energy expended at a global scale, the waste. All the money, water & minerals thrown at this. These AI companies are near DoS'ing the human web as they deep-sea trawl our content.
Computationally, infrastructurally, & culturally, it's an obscenity.
@JulianOliver this has been going on for at least a year now and I have never seen any reporting of it in the press.
Depressing to see that the Web as we knew it, and as used by billions, is effectively disenfranchised; it doesn't exist as a valid interest.
Indeed, like nature itself, a free "asset" to harvest to extinction.
-