RE: https://mstdn.social/@hkrn/116284264915152671
lol oh my god i feel **so fucking smug** right now, it's incredible. my whole body is tingling.
i was using this package in one of my projects. i found it had a bug, and when i went to maybe try to make a contribution to the open source repository, i found it to be a huge shitpile of vibe-coded mess. methods that were thousands of lines long with **hundreds** of arguments, it was impossible, and **very** alarming. it was clear to me that no one was watching the shop, so i immediately set about removing it from my project. and now, this.

-
@peter Who could have seen this coming?
-
there are **tons** of AI-related projects that use LiteLLM. it is a key part of the basic infrastructure of LLM-based development. if you use an LLM-based project, there is a good chance it uses LiteLLM.
-
(if you're curious, it does this very useful thing of standardizing LLM APIs into a single format. makes it easy for your app to switch between Anthropic, OpenAI, Google, z.ai, etc.)
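(to make the "single format" idea concrete, here's a toy sketch of the pattern — these provider functions and response shapes are made-up stand-ins for illustration, not LiteLLM's actual code, though its real entry point has roughly this "provider/model" string shape:)

```python
# toy sketch of the "one interface, many providers" pattern.
# the provider calls below are fakes that mimic different response shapes;
# a real adapter like LiteLLM wraps the actual SDKs instead.

def fake_openai_call(prompt):
    # OpenAI-style response shape (nested under "choices")
    return {"choices": [{"message": {"content": f"openai says: {prompt}"}}]}

def fake_anthropic_call(prompt):
    # Anthropic-style response shape (list of content blocks)
    return {"content": [{"type": "text", "text": f"anthropic says: {prompt}"}]}

# map each provider to (how to call it, how to pull the text back out)
PROVIDERS = {
    "openai": (fake_openai_call, lambda r: r["choices"][0]["message"]["content"]),
    "anthropic": (fake_anthropic_call, lambda r: r["content"][0]["text"]),
}

def completion(model, prompt):
    """Dispatch on a 'provider/model' string and normalize the response to plain text."""
    provider, _, _model_name = model.partition("/")
    call, extract = PROVIDERS[provider]
    return extract(call(prompt))
```

switching providers is then just changing the model string, which is the whole appeal — and why so many projects end up depending on one adapter package.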
-
this is actually a huge reason i have decided not to jump into LLM and AI agent-related development. the ecosystem is (as you would expect) run and maintained by people who are all-in on vibe coding, so a package you might like and include in your project could easily become a dangerous, unmaintainable mess within months. i don't know if people understand how brittle the whole thing is. everything is constantly, **constantly** changing.
-
like, it's moving **way** too fast for anyone to be able to tell if things are going to break or get injected with some malware. the whole thing is a house of cards built on top of a bomb.
-
oh my fucking god.
-
let's see, who can i tag about this... @davidgerard will definitely want to know. @tante maybe. idk, tag your favorite cyber-security person. this might be the mother of all LLM supply chain attacks lol. @briankrebs
-
plenty of good chatter on Hacker News about it. https://news.ycombinator.com/item?id=47501729
looks grim!!
-
me right now

-
@gfitzp oh yeah, it's those guys!
-
@gfitzp oh nooooo

-
based on some commits in the repo, seems like it was these guys: https://arstechnica.com/security/2026/03/self-propagating-malware-poisons-open-source-software-and-wipes-iran-based-machines/
-
@peter Yup, I was like "didn't I just read about these guys like an hour ago??"
-
@peter and @dangoodin sometimes hangs out here
-
picking through the various bits and pieces of this story, i kind of think what really happened is the dev accounts got pwned, and then the attackers were able to push a bad version to PyPI and people pip installed it from there. so as far as the "supply chain" attack goes, LiteLLM is the part of the supply chain that got attacked, it's not like they accidentally vibe-coded something malicious into their project.
-
@peter it isn’t even necessary to compromise repos. If a malicious actor posts enough malicious code that gets mingled with the LLM training data, some poor souls will start vibe-coding malicious code directly into their own products.
-
but this still goes back to what i was saying: this AI ecosystem is developing **way** too fast and without the kind of maturity that is naturally required when you have lots of people working on a thing. so with berri.ai, you had ~2 guys in their 20s building this thing at break-neck speed that became the linchpin to waaaaay too much of the "AI" ecosystem and now look what's happened.
-
@jongary 100%!!