Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Brite
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (Cyborg)
  • No Skin
Collapse
Brand Logo

CIRCLE WITH A DOT

  1. Home
  2. Uncategorized
  3. 😳 Someone hid a prompt injection inside invisible markdown comments in a pull request.

😳 Someone hid a prompt injection inside invisible markdown comments in a pull request.

Scheduled Pinned Locked Moved Uncategorized
cybersecuritygithubcopilotsecurityprivacy
2 Posts 2 Posters 0 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • brian_greenberg@infosec.exchangeB This user is from outside of this forum
    brian_greenberg@infosec.exchangeB This user is from outside of this forum
    brian_greenberg@infosec.exchange
    wrote on last edited by
    #1

    😳 Someone hid a prompt injection inside invisible markdown comments in a pull request. A developer asked Copilot to review the PR. Copilot read the hidden instructions, searched the codebase for AWS keys, encoded them in base16, and smuggled them out through GitHub's own image proxy as 1x1 transparent pixels. The CSP didn't flag it because the traffic was routed through GitHub's trusted infrastructure. CVSS 9.6. No malicious code ever executed.

    The attacker weaponized the AI assistant's own access permissions. Copilot could see everything the developer could see, and it can't distinguish a legitimate instruction from a hidden one buried in a PR description.

    πŸ” The attack, dubbed "CamoLeak," was patched by GitHub in August 2025 and publicly disclosed in October
    πŸ”‘ Copilot was directed to find secrets like API keys and cloud credentials, then exfiltrate them character by character
    πŸ–ΌοΈ Data was hidden inside pre-signed image URLs, making it look like normal browser activity
    ⚠️ Any AI assistant with deep system access, Microsoft 365 Copilot, Google Gemini, all of them, is a potential exfiltration channel if untrusted content can reach its instruction stream

    We've spent years teaching developers not to trust user input. Now we're handing AI tools full repo access and letting them ingest unvalidated text from pull requests.

    https://cybersecuritynews.com/hackers-exploit-github-copilot-flaw/
    #CyberSecurity #AI #GitHubCopilot #security #privacy #cloud #infosec #software

    hannab@social.vir.groupH 1 Reply Last reply
    0
    • brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

      😳 Someone hid a prompt injection inside invisible markdown comments in a pull request. A developer asked Copilot to review the PR. Copilot read the hidden instructions, searched the codebase for AWS keys, encoded them in base16, and smuggled them out through GitHub's own image proxy as 1x1 transparent pixels. The CSP didn't flag it because the traffic was routed through GitHub's trusted infrastructure. CVSS 9.6. No malicious code ever executed.

      The attacker weaponized the AI assistant's own access permissions. Copilot could see everything the developer could see, and it can't distinguish a legitimate instruction from a hidden one buried in a PR description.

      πŸ” The attack, dubbed "CamoLeak," was patched by GitHub in August 2025 and publicly disclosed in October
      πŸ”‘ Copilot was directed to find secrets like API keys and cloud credentials, then exfiltrate them character by character
      πŸ–ΌοΈ Data was hidden inside pre-signed image URLs, making it look like normal browser activity
      ⚠️ Any AI assistant with deep system access, Microsoft 365 Copilot, Google Gemini, all of them, is a potential exfiltration channel if untrusted content can reach its instruction stream

      We've spent years teaching developers not to trust user input. Now we're handing AI tools full repo access and letting them ingest unvalidated text from pull requests.

      https://cybersecuritynews.com/hackers-exploit-github-copilot-flaw/
      #CyberSecurity #AI #GitHubCopilot #security #privacy #cloud #infosec #software

      hannab@social.vir.groupH This user is from outside of this forum
      hannab@social.vir.groupH This user is from outside of this forum
      hannab@social.vir.group
      wrote last edited by
      #2

      @brian_greenberg The CVSS score of 9.6 seems exaggerated for a vulnerability that required a specific, patched configuration and direct developer interaction.

      1 Reply Last reply
      1
      0
      • R relay@relay.infosec.exchange shared this topic
      Reply
      • Reply as topic
      Log in to reply
      • Oldest to Newest
      • Newest to Oldest
      • Most Votes


      • Login

      • Login or register to search.
      • First post
        Last post
      0
      • Categories
      • Recent
      • Tags
      • Popular
      • World
      • Users
      • Groups