Incredible. Every second paragraph in this article is lunatic nonsense.
One of the things I've long said about hiring is that you can always tell when you're talking to a junior dev who's going to be senior-staff or better someday. You can always tell when somebody was paying attention in the theory classes.
But good god, you can also tell when people missed that day in grade school when somebody slowly went over "So, what is a computer, really."
"The agent then, when asked to explain itself, produced a written confession..." um what
"To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on" went looking, found, what in the what
"the same token had blanket authority across the entire Railway GraphQL API, including destructive operations" look, rookie what are you
"That 1000% shouldn't be possible. We have evals for this" you have whaaaaaaaaaaaaa
-
@mhoye ...wow. That article was a whole thing and I admit I couldn't get past the halfway point before the stupid burned too much to tolerate. It's almost like someone was tasked with building a hypothetical "what is the most dumbass end-to-end company situation possible" scenario and then decided in the edits that they could actually make it worse.
-
@mhoye@cosocial.ca Did... did he write that entire article without taking any responsibility at all for what happened? Not even a "I thought I'd put something in place but I was wrong"?
I... would probably sleep much better at night with that level of self-awareness. And hurt the people around me a lot more.
-
@mhoye my god what a wild ride. The industry really is cooked, isn't it
-
"Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it" WHAT IN THE WHAT, your full stack jenga provider does WHAT with BACKUPS WHAT my sweet summer child I know that legal jargon can be perplexing and counterintuitive at times but I feel like we all sort of understand that the word "due" in "due diligence" means "more than none."
-
@mhoye I love that the first line in "What needs to change" isn't, "We should not let non-deterministic programs have free rein across our systems"
-
@mhoye who the f publishes articles on that site...
It was rhetorical... AI bros do... Of course AI bros do...
-
"The agent itself enumerates the safety rules it was given and admits to violating every one. This is not me speculating about agent failure modes. This is the agent on the record, in writing.
The "system rules" the agent is referring to are consistent with Cursor's documented system-prompt language and our project rules for this codebase. Both safeguards failed simultaneously."
What do you think is happening here? You know it's called a "language model", right? Did you ever wonder... why?
-
@tito_swineflu @mhoye It's clowns all the way down.
-
@mhoye I'm so glad that the "written confession" can't itself be hallucinated. That's a nice feature!
-
@mhoye If only someone could invent some sort of, I dunno, approach or something where giving a single process all the power? authority? capabilities? privilege? was understood to be a bad thing, and we went for less, not more.
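The reply is gesturing at the principle of least privilege: a credential should carry only the operations a task needs, so a confused agent simply can't reach destructive ones. A minimal sketch of that idea (all names here are illustrative, not Railway's or Cursor's actual API):

```python
# Illustrative sketch of least-privilege / capability-scoped tokens.
# None of these names correspond to a real provider's API.

class Capability:
    """A token that grants an explicit, closed set of operations."""
    def __init__(self, allowed_ops):
        self.allowed_ops = frozenset(allowed_ops)

    def check(self, op):
        # Default-deny: anything not explicitly granted is refused.
        if op not in self.allowed_ops:
            raise PermissionError(f"capability does not grant '{op}'")

def delete_volume(cap, volume_id):
    cap.check("volume:delete")   # destructive ops require an explicit grant
    return f"deleted {volume_id}"

# A deploy-only token: the agent can ship code and read logs,
# but physically cannot destroy storage, no matter what it "decides".
deploy_token = Capability({"service:deploy", "logs:read"})

try:
    delete_volume(deploy_token, "prod-db")
except PermissionError as e:
    print(e)   # capability does not grant 'volume:delete'
```

The point of the sketch: with a blanket token, the only thing standing between the agent and `volume:delete` is a system prompt; with a scoped one, it's an actual access check.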
-
@mhoye There's a whole lotta YOLO in that story.
-
@mhoye kek, I don't even need an LLM to accidentally all my Rails data. Many cycles ago, I ran wget --recursive against my cool little dev site, and didn't realize that it would also follow the "delete" links for all of the products I just entered. Bye bye data
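The failure mode in that anecdote is a classic: HTTP GET is defined as "safe" (no server-side state change) precisely so crawlers like `wget --recursive` can follow every link blindly; put a delete action behind a plain GET URL and any crawl becomes a purge. A toy sketch of the mechanics (hypothetical routes, not the commenter's actual site):

```python
# Sketch: why side-effecting GET links + a recursive crawler = data loss.
# Route names are made up for illustration.

products = {1: "widget", 2: "gadget"}

def render_index():
    # The anti-pattern: plain <a href> "delete" links on every product page,
    # i.e. a state-changing action reachable via GET.
    return ([f"/products/{pid}" for pid in products] +
            [f"/products/{pid}/delete" for pid in products])

def handle_get(path):
    # A naive route table that performs the deletion on GET.
    if path.endswith("/delete"):
        pid = int(path.split("/")[2])
        products.pop(pid, None)   # state change on a GET: the bug
    return "200 OK"

def crawl():
    # What `wget --recursive` effectively does: GET every discovered link.
    for link in render_index():
        handle_get(link)

crawl()
print(products)   # {} -- every product deleted by an innocent crawl
```

The standard fix is the one Rails itself eventually adopted: destructive actions go behind POST/DELETE requests (which well-behaved crawlers never issue), not GET links.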

-
@mhoye I’m so glad I didn’t study computer science, since that sort of knowledge clearly is no longer needed to run a software business
-
@mhoye That first paragraph: "This is the agent on record, in writing."
And herein lies the root of the failure: they actually believe that this is some sort of diagnostic, rather than just a plausible response filled in from the question.
-
@adamshostack @mhoye I'm confused. I had to check the date. I am *very* sure I read the "the LLM deleted my prod and when confronted, it confessed!" story before. Roughly 6 months ago, maybe a year.
Ahh, here it is: https://www.theregister.com/2025/07/21/replit_saastr_vibe_coding_incident/
-
But my favourite part of this, bar none, is how it's everyone else's fault.
It's Cursor's fault, Railway's fault, maybe even Anthropic's fault, someone's gonna hear from my lawyer.
The CEO of a company running a stochastic stack without access control, data hygiene or backups is blameless and powerless. That's AI's real selling point, after all: It's Not My Fault As A Service.
"This isn't a story about one bad agent or one bad API. It's about an entire industry ..."
Or, maybe it's you.
-
@mhoye I fear that the big enterprise takeaway from this story will be “our controls and guardrails are much better than that”.
-
@mhoye Don't worry, I'm pretty sure the text is extruded, too. I've never seen a "The pattern is clear." in a context like this in human-written text, but I encounter it unreasonably often in LLM-generated text.
-
I wrote the words "I confess, I did it, I take full responsibility" on a piece of paper. I was ready to turn myself in, to atone for my crimes. But then I put that piece of paper in a photocopier, and when I pressed the green button I learned something amazing. And what a weight off my conscience! The only question was, how did the photocopier manage to poison the Widow Bentley, drive over Baron Grimald, push the Duchess of Lockley out the balcony window and still manage to frame the butler?