
Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:

Uncategorized · bigtech · 180 Posts, 56 Posters, 81 Views
julianoliver@mastodon.social wrote:

    Nearly a month later you would've thought that the crawlers would've given up by now, dropped off, blacklisted the IPs, or perhaps even the domains themselves.

    And yet no. As I tentatively guessed, thanks to your donated domains (and the people linking them in their sites) it has only grown.

    I don't expect it to run this hot for the long term, but yesterday's hit count (these are almost 100% reads of randomly generated pages by AI crawlers) was near 1M.

retech@corteximplant.com wrote (#141):
    @JulianOliver Damn, the bandwidth...

thgie@post.lurk.org wrote:

      I honestly bought the domain on a whim, because I'm kind of fascinated by slime molds. I'm super happy it finds such useful application. Thanks for all your work, @JulianOliver!

julianoliver@mastodon.social wrote (#142):

      @thgie Thanks for the kind words! I'm fascinated by slime molds too. The only kind I don't like comes from Silicon Valley.

thgie@post.lurk.org wrote (#143):

Exactly, the dirty ones!

@JulianOliver

julianoliver@mastodon.social wrote (#144):

          For any naysayers out there as to how effective all this is, or could be, some recent research shows you can do a lot with a little:

Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples (arXiv:2510.07192, arxiv.org)

Researchers found that a very small corpus of poison content has largely the same impact regardless of the size of the model's training data:

          "We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data."

feral_3d@mastodon.social wrote (#145):

@JulianOliver oh dang! I kinda love that this is so effective, whereas other methods are completely appropriate. Training data is a monopoly; were we to engender and respect alternatives, industry leaders would find a meaningful new paradigm.

julianoliver@mastodon.social wrote (#146):

              @perhammer Thank you for yours! I will add your domain tomorrow at UTC midnight.

              If you are up for offering other domains to the cause, that is very kind and good. I'll surely take them. And yes, exactly the same records.

              I may spin up servers under other IPs in future, and spread the donated domains across them. For now, given the insane volume of traffic, there's evidently no need.

julianoliver@mastodon.social wrote (#147):

                @perhammer Ah such great domains, thank you! I'll report back once done, for you to liberally link.


liebach@mastodon.art wrote (#148):

                  @JulianOliver Heartwarming, inspiring.

julianoliver@mastodon.social wrote:

It's approaching DoS at this point. This is just one of the VMs, and just OpenAI's parasite.

Threading's holding up, but it needs some more tuning of rate limits and bursts. I'm trying 429s now, to ask them to play nice.

To think the www was built for people.

And here we are
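Sending 429s with a Retry-After hint is the standard way to ask well-behaved clients to back off. The rate-limit-plus-burst tuning described here can be sketched as a token bucket (the rate and burst numbers below are illustrative, not the project's actual settings):

```python
import time

class TokenBucket:
    """Minimal token bucket: roughly `rate` requests/sec with `burst` headroom."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens regained per second
        self.burst = burst          # maximum stored tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def check(self):
        """Return (allowed, retry_after_seconds) for one incoming request."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True, 0.0
        # Out of credit: deny, and say when to retry (HTTP 429 + Retry-After)
        return False, (1 - self.tokens) / self.rate

bucket = TokenBucket(rate=2, burst=5)
results = [bucket.check()[0] for _ in range(8)]  # 8 back-to-back requests
# burst credit admits the first 5; the remaining 3 would get a 429
```

In a real deployment this per-client accounting usually lives in the reverse proxy (e.g. nginx's limit_req) rather than in application code.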

bastelwombat@chaos.social wrote (#149):

@JulianOliver Wait, they are still this dumb? Don't get me wrong, I like the idea of your project. But I'd expect it to be detected and ignored –* at least by the bigger players. Especially with other projects like this (e.g. Nepenthes) having been out for a while already.

                    Or maybe the detection happens once the content has been parsed? Can you see how many pages deep an individual crawler goes?

                    * yes, a handmade emdash.


julianoliver@mastodon.social wrote (#150):

@bastelwombat

Yesterday's hit count for this project was nearly 1M unique page reads, with only a tiny proportion (<1%) from humans.

I trialed the great Nepenthes quite extensively and it was good at hooking but not holding crawlers, not in 2026, as I explain on the project page. Today the big AI crawlers seemingly lose interest in Markov output, tire of drip-fed content, & prefer a non-dictionary corpus, as they seek content akin to how we humans communicate (typos, made-up words, ad hoc emphasis, etc.).
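If crawlers now prefer messy, human-looking text over clean Markov output, one approach is to deliberately inject typos and coinages into generated filler. A minimal illustrative sketch (the word list and mutation rules are invented here; this is not the project's actual generator):

```python
import random

random.seed(7)  # deterministic for the example

# Invented seed vocabulary; any word list would do.
WORDS = ["signal", "poetry", "crawler", "entropy", "orbit", "lattice", "murmur"]

def mutate(word: str, p: float = 0.4) -> str:
    """With probability p, garble a word: swap two adjacent letters or add a fake suffix."""
    if len(word) < 3 or random.random() > p:
        return word
    if random.random() < 0.5:
        i = random.randrange(len(word) - 1)
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]  # transpose a pair
    return word + random.choice(["ish", "oid", "ling"])         # ad hoc coinage

sentence = " ".join(mutate(random.choice(WORDS)) for _ in range(8))
```

The output reads like hurried human typing rather than a dictionary-clean Markov chain, which is the property the post says keeps crawlers interested.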


mgiraldo@mstdn.social wrote (#151):

                        @JulianOliver is random data sufficiently poisonous?


julianoliver@mastodon.social wrote (#152):

@mgiraldo Answering that in earnest would require knowing more than I do about the unique model training approaches of each LLM. As a guess, it may not be as poisonous as Markov content from well-known corpora like popular books or famous papers. However, some of the bigger bots seem good at detecting this, and so drop off anyway. I had poor retention results this way.

There may be references, faux terms, & partials in randomly produced sentences that could sneak into training datasets.

smn@l3ib.org wrote:

@JulianOliver done. whatthefuckisgoingonwithmyhorroscope.today now has those records, at least until the domain expires on April 27, 2027.

julianoliver@mastodon.social wrote (#153):

                            @smn You're live!


mgiraldo@mstdn.social wrote (#154):

                              @JulianOliver however many poison pills you can introduce are a service to humanity 🫡


julianoliver@mastodon.social wrote (#155):

Ye gads, it's gone absolutely silly.

I spent a good part of my morning trying to work out if it was a veiled DoS or actual harvesting, while keeping the thing up. Status codes look good: 96.5% are real page reads from the usual AI crawler suspects.

A big network in Singapore with a "www.google.com" (but not Googlebot) User-Agent string is responsible for some of it. But the rest is just frantic feeding.

Server is running hot. To keep it up I'm having to further tune rate limiting, bursts, etc.
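A User-Agent of literally "www.google.com" is easy to flag, since the real Googlebot announces itself with a proper Googlebot UA string (and can be verified via reverse DNS). A rough sketch of that kind of log check (log format, paths, and addresses below are invented for illustration):

```python
import re

# Hypothetical access-log lines in a common format where the last quoted
# field is the User-Agent.
LOG = [
    '203.0.113.9 - - "GET /p/42 HTTP/1.1" 200 "www.google.com"',
    '198.51.100.3 - - "GET /p/7 HTTP/1.1" 200 "Mozilla/5.0 (compatible; Googlebot/2.1)"',
]

def suspicious(line: str) -> bool:
    """Flag requests whose UA is the bare hostname rather than a real bot string."""
    ua = re.findall(r'"([^"]*)"', line)[-1]  # last quoted field = User-Agent
    return ua == "www.google.com"

flagged = [line for line in LOG if suspicious(line)]
# flagged contains only the first line
```

For genuine Googlebot verification you would confirm the client IP resolves back to a googlebot.com or google.com hostname, not just inspect the UA string.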

alex27@infosec.exchange wrote (#156):

@JulianOliver hey, is it ok to leave a link to Science is Poetry from some of my pages?


julianoliver@mastodon.social wrote (#157):

                                    @alex27 Please do, that's what it's there for!


alex27@infosec.exchange wrote (#158):

@JulianOliver thanks! Asking since it's not clear to what extent the system is operational, or whether there have been performance problems so far. Didn't want to add the last straw.


julianoliver@mastodon.social wrote (#159):

@alex27 Fully operational, yes, thanks for asking. The system is under a lot of load but still has some room. I'll tune it so it can serve even more if it needs to.

malte@anticapitalist.party wrote (#160):

@JulianOliver you probably wrote it somewhere, but I can't find it: what's the tool for visualizing the log output?
