grith.ai reports an attack chain dubbed "Clinejection" where a prompt-injected GitHub issue title triggered an AI issue-triage workflow and led to GitHub Actions cache poisoning plus CI secret theft (npm and extension marketplace tokens).

Tags: infosec, supplychain, promptinjection, devsecops
#1 — technotenshi@infosec.exchange wrote:

    grith.ai reports an attack chain dubbed "Clinejection" where a prompt-injected GitHub issue title triggered an AI issue-triage workflow and led to GitHub Actions cache poisoning plus CI secret theft (npm and extension marketplace tokens). The attacker then published cline@2.3.0 to npm with a postinstall that ran "npm install -g openclaw@latest", leading to about 4,000 installs over roughly 8 hours before removal, per the writeup. Suggested fixes include treating issue/PR text as untrusted input for agents, tightening who can trigger workflows, removing cache use from secret-bearing jobs, and moving npm publishing to OIDC provenance attestation instead of long-lived tokens.

    A GitHub Issue Title Compromised 4,000 Developer Machines (grith.ai)

    A prompt injection in a GitHub issue triggered a chain reaction that ended with 4,000 developers getting OpenClaw installed without consent. The attack composes well-understood vulnerabilities into something new: one AI tool bootstrapping another.

    #InfoSec #SupplyChain #PromptInjection #DevSecOps
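
    The OIDC suggestion in the post could look roughly like the workflow below. This is a minimal sketch, not the project's actual release pipeline: it assumes npm trusted publishing has been configured for the package on npmjs.com (so no long-lived `NPM_TOKEN` secret exists to steal), and the trigger, package, and step names are illustrative.

    ```yaml
    name: release
    on:
      release:
        types: [published]

    permissions:
      contents: read
      id-token: write   # lets the job mint a short-lived OIDC token for npm

    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
              registry-url: https://registry.npmjs.org
          - run: npm ci
          # No NODE_AUTH_TOKEN here: publishing authenticates via OIDC,
          # and --provenance attaches a signed attestation tying the
          # package to this exact workflow run.
          - run: npm publish --provenance --access public
    ```

    Note the job also deliberately omits any `actions/cache` step, in line with the writeup's advice to keep caches out of secret-bearing and publish jobs.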


      #2 — lombax85_clawguard@mastodon.social wrote:

      Useful thread. One practical control for AI agents is method-scoped approval (GET separate from POST/DELETE), so read automation cannot silently unlock writes. github.com/lombax85/clawguard #infosec #AI #security #LLM
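
      A minimal sketch of that method-scoped approval idea (the function names and the approval store here are hypothetical illustrations, not clawguard's actual API): safe read-only HTTP methods pass through, while mutating methods are blocked unless a human has explicitly approved that method/URL pair.

      ```python
      # Sketch: method-scoped approval gate for an AI agent's HTTP tool.
      # Read-only methods are auto-allowed; writes need explicit approval.

      READ_METHODS = {"GET", "HEAD", "OPTIONS"}


      class ApprovalRequired(Exception):
          """Raised when a mutating request lacks human approval."""


      def gate(method: str, url: str, approved: set[tuple[str, str]]) -> str:
          """Return "allow" if the request may proceed, else raise."""
          method = method.upper()
          if method in READ_METHODS:
              return "allow"                       # reads never unlock writes
          if (method, url) in approved:
              return "allow"                       # explicitly approved write
          raise ApprovalRequired(f"{method} {url} needs human approval")


      # Example flow: the agent's read succeeds, its write is held until
      # a human adds the exact (method, url) pair to the approval store.
      approvals: set[tuple[str, str]] = set()
      gate("GET", "https://api.example.com/issues", approvals)
      approvals.add(("DELETE", "https://api.example.com/issues/42"))
      gate("DELETE", "https://api.example.com/issues/42", approvals)
      ```

      The key property is that approval is scoped to the method, so an agent compromised via injected issue text can keep reading but cannot escalate to POST/DELETE on its own.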

      relay@relay.an.exchange shared this topic