
Meta paused work with a $10B AI data vendor after hackers poisoned an open-source Python library called LiteLLM and walked out with four terabytes of data.

  • brian_greenberg@infosec.exchange wrote:
    #1

    Meta paused work with a $10B AI data vendor after hackers poisoned an open-source Python library called LiteLLM and walked out with four terabytes of data. So, that's bad. And the worst part? The stolen data might include the actual training methodologies that Meta, OpenAI, Anthropic, and Google paid billions to develop. Think about what that means. You can't protect your crown jewels if they're sitting inside a vendor who's connected to your three biggest competitors, all sharing the same open-source tools, all exposed by the same 40-minute window on PyPI before anyone noticed.

    🎯 The attack chain here is worth understanding: hackers compromised a security scanner called Trivy, used that access to get credentials for a LiteLLM maintainer, then published two malicious package versions that lasted less than an hour before removal. Forty minutes. That's all it took.
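    The defensive counterpart to that attack chain is simple to sketch: scan an environment for known-bad package versions. A minimal example, noting that the version numbers below are placeholders since the actual malicious LiteLLM releases aren't named here:

    ```python
    # Scan installed distributions for known-compromised versions.
    # NOTE: the versions in COMPROMISED are illustrative placeholders,
    # not the real malicious LiteLLM releases.
    from importlib import metadata

    COMPROMISED = {"litellm": {"9.9.9", "9.9.10"}}  # placeholder versions

    def find_compromised(known_bad):
        """Return (name, version) pairs for installed packages on the bad list."""
        hits = []
        for dist in metadata.distributions():
            name = (dist.metadata["Name"] or "").lower()
            if dist.version in known_bad.get(name, set()):
                hits.append((name, dist.version))
        return hits

    print(find_compromised(COMPROMISED))
    ```

    The same check belongs in CI, not just on workstations: a poisoned version that only lived forty minutes on PyPI can still be cached in a build image for weeks.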

    πŸ’Ό Mercor is not some sloppy startup. It's 22-year-old founders, $500M annualized revenue, and clients at the very top of the AI industry. Sophistication doesn't protect you from a poisoned dependency you never thought to audit.

    πŸ” The question I'd be asking right now if I were a CISO at any of these labs isn't "were we breached." It's "how many vendors in our training pipeline are running LiteLLM, and did we even know?"

    Most companies audit their own software. Almost nobody audits the software their vendors use to build the data they're buying.
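    One concrete control for this gap (an assumption about mitigation, not something the post prescribes) is hash-pinning: refuse to use any artifact whose digest doesn't match a value you pinned in advance, which is the principle behind pip's `--require-hashes` mode. The core check is a few lines:

    ```python
    # Verify a downloaded artifact against a pinned SHA-256 digest.
    # A re-released or tampered file under the same version number
    # will produce a different digest and fail this check.
    import hashlib

    def verify_sha256(path, expected_hex):
        """Return True only if the file at `path` hashes to `expected_hex`."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hex
    ```

    Pinning hashes wouldn't have stopped the maintainer-credential theft, but it would have stopped the poisoned versions from installing anywhere that only trusted previously pinned digests.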

    https://thenextweb.com/news/meta-mercor-breach-ai-training-secrets-risk
    #Cybersecurity #AIRisk #SupplyChainSecurity #security #privacy #cloud #infosec #ThirdPartyRisk

  • threatchain@infosec.exchange wrote:
    #2

      @brian_greenberg That last point hits hard - we obsess over our own dependency trees but are completely blind to what's running in vendor environments. The scariest part isn't even the 40-minute window, it's that these AI labs probably had zero visibility into Mercor's software stack. Makes you wonder how many other critical vendors are one compromised Python package away from exposing everyone's crown jewels.
