CIRCLE WITH A DOT


Incredible.

Uncategorized · 30 Posts · 21 Posters
This topic has been deleted.
  • mhoye@cosocial.ca:

    Incredible. Every second paragraph in this article is lunatic nonsense.

    One of the things I've long said about hiring is that you can always tell when you're talking to a junior dev who's going to be senior-staff or better someday. You can always tell when somebody was paying attention in the theory classes.

    But good god you can also tell when people missed that day in grade school when somebody slowly went over "So, what is a computer, really."

    archive.ph

    wordshaper@weatherishappening.network wrote (#4):

    @mhoye ...wow. That article was a whole thing and I admit I couldn't get past the halfway point before the stupid burned too much to tolerate. It's almost like someone was tasked with building a hypothetical "what is the most dumbass end-to-end company situation possible" scenario and then decided in the edits that they could actually make it worse.

  • mavnn@bonfire.mavnn.eu wrote (#5), replying to mhoye@cosocial.ca:

      @mhoye@cosocial.ca Did... did he write that entire article without taking any responsibility at all for what happened? Not even a "I thought I'd put something in place but I was wrong"?

      I... would probably sleep much better at night with that level of self awareness. And hurt the people around me a lot more.

  • scmbradley@mathstodon.xyz wrote (#6), replying to mhoye@cosocial.ca:

        @mhoye my god what a wild ride. The industry really is cooked isn't it

  • mhoye@cosocial.ca:

          "The agent then, when asked to explain itself, produced a written confession..." um what

          "To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on" went looking, found, what in the what

          "the same token had blanket authority across the entire Railway GraphQL API, including destructive operations" look, rookie what are you

          "That 1000% shouldn't be possible. We have evals for this" you have whaaaaaaaaaaaaa

    mhoye@cosocial.ca wrote (#7):

          "Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it" WHAT IN THE WHAT, your full stack jenga provider does WHAT with BACKUPS WHAT my sweet summer child I know that legal jargon can be perplexing and counterintuitive at times but I feel like we all sort of understand that the word "due" in "due diligence" means "more than none."

  • tito_swineflu@sfba.social wrote (#8), replying to mhoye@cosocial.ca:

            @mhoye I love that the first line in "What needs to change" isn't, "We should not let non-deterministic programs have free range across our systems"

  • petko@social.petko.me wrote (#9), replying to mhoye@cosocial.ca:

              @mhoye who the f publishes articles on that site...

              It was rhetorical... AI bros do... Of course AI bros do...

  • mhoye@cosocial.ca wrote (#10):

                "The agent itself enumerates the safety rules it was given and admits to violating every one. This is not me speculating about agent failure modes. This is the agent on the record, in writing.

                The "system rules" the agent is referring to are consistent with Cursor's documented system-prompt language and our project rules for this codebase. Both safeguards failed simultaneously."

                What do you think is happening here? You know it's called a "language model", right? Did you ever wonder... why?

  • dalias@hachyderm.io wrote (#11), replying to tito_swineflu@sfba.social:

                  @tito_swineflu @mhoye It's clowns all the way down.

  • adamshostack@infosec.exchange wrote (#12), replying to mhoye@cosocial.ca:

                    @mhoye I'm so glad that the "written confession" can't itself be hallucinated. That's a nice feature!

  • adamshostack@infosec.exchange wrote (#13), replying to mhoye@cosocial.ca:

    @mhoye If only someone could invent some sort of, I dunno, approach or something where giving a single process all the power? authority? capabilities? privilege? was understood to be a bad thing, and we went for less, not more.

  • sempf@infosec.exchange wrote (#14), replying to mhoye@cosocial.ca:

                        @mhoye There's a whole lotta YOLO in that story.

  • phred@weirder.earth wrote (#15), replying to mhoye@cosocial.ca:

                          @mhoye kek, I don't even need an LLM to accidentally all my Rails data. Many cycles ago, I ran wget --recursive against my cool little dev site, and didn't realize that it would also follow the "delete" links for all of the products I just entered. Bye bye data 🙃
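[Editor's note: the mechanism behind that old footgun is worth spelling out. If destructive actions are reachable via plain GET links, any link-following crawler, wget --recursive included, triggers them just by visiting. Below is a minimal, hypothetical sketch; the tiny in-memory "site" and its routes are invented for illustration, not the poster's actual app.]

```python
# A tiny in-memory "site": each path serves a page body containing links,
# and some paths have destructive side effects when fetched.
products = {"1": "widget", "2": "gadget"}

def get(path):
    """Serve a page; note the 'delete' links mutate state on a plain GET."""
    if path == "/products":
        # The index page links to each product's delete action.
        return " ".join(f"/products/{pid}/delete" for pid in products)
    if path.endswith("/delete"):
        pid = path.split("/")[2]
        products.pop(pid, None)   # destructive side effect on a GET!
        return ""                 # crawler sees an empty page and moves on
    return ""

def crawl(start):
    """A naive recursive crawler: fetch each page, follow every link found."""
    seen, queue = set(), [start]
    while queue:
        path = queue.pop()
        if path in seen:
            continue
        seen.add(path)
        queue.extend(get(path).split())

crawl("/products")
print(products)  # every delete link was followed: {}
```

The standard remedy is that GET should be safe (no side effects): destructive operations belong behind POST or DELETE with CSRF protection, which a plain link-following crawler will not issue.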

  • slothrop@chaos.social wrote (#16), replying to mhoye@cosocial.ca:

    @mhoye I’m so glad I didn’t study computer science, since that sort of knowledge clearly is no longer needed to run a software business

  • darkling@mstdn.social wrote (#17), replying to mhoye@cosocial.ca:

                              @mhoye That first paragraph: "This is the agent on record, in writing."

                              and herein lies the root of the failure: they actually believe that this is some sort of diagnostic, rather than just filling in a plausible response based on the question.

  • henryk@chaos.social wrote (#18), replying to adamshostack@infosec.exchange:

                                @adamshostack @mhoye I'm confused. I had to check the date. I am *very* sure I read the "the LLM deleted my prod and when confronted, it confessed!" story before. Roughly 6 months ago, maybe a year.

                                Ahh, here it is: https://www.theregister.com/2025/07/21/replit_saastr_vibe_coding_incident/

  • mhoye@cosocial.ca wrote (#19):

                                  But my favourite part of this, bar none, is how it's everyone else's fault.

                                  It's Cursor's fault, Railway's fault, maybe even Anthropic's fault, someone's gonna hear from my lawyer.

                                  The CEO of a company running a stochastic stack without access control, data hygiene or backups is blameless and powerless. That's AI's real selling point, after all: It's Not My Fault As A Service.

                                  "This isn't a story about one bad agent or one bad API. It's about an entire industry ..."

                                  Or, maybe it's you.

  • curtosis@lingo.lol wrote (#20), replying to mhoye@cosocial.ca:

                                    @mhoye I fear that the big enterprise takeaway from this story will be “our controls and guardrails are much better than that”.

  • henryk@chaos.social wrote (#21), replying to mhoye@cosocial.ca:

    @mhoye Don't worry, I'm pretty sure the text is extruded, too. I've never seen a "The pattern is clear." in a context like this in human-written text, but I encounter it unreasonably often in LLM-generated text.

  • mhoye@cosocial.ca wrote (#22):

                                        I wrote the words "I confess, I did it, I take full responsibility" on a piece of paper. I was ready to turn myself in, to atone for my crimes. But then I put that piece of paper in a photocopier, and when I pressed the green button I learned something amazing. And what a weight off my conscience! The only question was, how did the photocopier manage to poison the Widow Bentley, drive over Baron Grimald, push the Duchess of Lockley out the balcony window and still manage to frame the butler?

  • damonwakes@mastodon.sdf.org wrote (#23), replying to henryk@chaos.social:

                                          @henryk @mhoye It's not opening on my device, but the "This isn't a story about one bad agent or one bad API. It's about an entire industry ..." quoted above already had my slop sense tingling.
