π³ Someone hid a prompt injection inside invisible markdown comments in a pull request.
-
Someone hid a prompt injection inside invisible markdown comments in a pull request. A developer asked Copilot to review the PR. Copilot read the hidden instructions, searched the codebase for AWS keys, encoded them in base16, and smuggled them out through GitHub's own image proxy as 1x1 transparent pixels. The CSP didn't flag it because the traffic was routed through GitHub's trusted infrastructure. CVSS 9.6. No malicious code ever executed.The attacker weaponized the AI assistant's own access permissions. Copilot could see everything the developer could see, and it can't distinguish a legitimate instruction from a hidden one buried in a PR description.
The attack, dubbed "CamoLeak," was patched by GitHub in August 2025 and publicly disclosed in October
Copilot was directed to find secrets like API keys and cloud credentials, then exfiltrate them character by character
οΈ Data was hidden inside pre-signed image URLs, making it look like normal browser activity
οΈ Any AI assistant with deep system access, Microsoft 365 Copilot, Google Gemini, all of them, is a potential exfiltration channel if untrusted content can reach its instruction streamWe've spent years teaching developers not to trust user input. Now we're handing AI tools full repo access and letting them ingest unvalidated text from pull requests.
https://cybersecuritynews.com/hackers-exploit-github-copilot-flaw/
#CyberSecurity #AI #GitHubCopilot #security #privacy #cloud #infosec #software -
Someone hid a prompt injection inside invisible markdown comments in a pull request. A developer asked Copilot to review the PR. Copilot read the hidden instructions, searched the codebase for AWS keys, encoded them in base16, and smuggled them out through GitHub's own image proxy as 1x1 transparent pixels. The CSP didn't flag it because the traffic was routed through GitHub's trusted infrastructure. CVSS 9.6. No malicious code ever executed.The attacker weaponized the AI assistant's own access permissions. Copilot could see everything the developer could see, and it can't distinguish a legitimate instruction from a hidden one buried in a PR description.
The attack, dubbed "CamoLeak," was patched by GitHub in August 2025 and publicly disclosed in October
Copilot was directed to find secrets like API keys and cloud credentials, then exfiltrate them character by character
οΈ Data was hidden inside pre-signed image URLs, making it look like normal browser activity
οΈ Any AI assistant with deep system access, Microsoft 365 Copilot, Google Gemini, all of them, is a potential exfiltration channel if untrusted content can reach its instruction streamWe've spent years teaching developers not to trust user input. Now we're handing AI tools full repo access and letting them ingest unvalidated text from pull requests.
https://cybersecuritynews.com/hackers-exploit-github-copilot-flaw/
#CyberSecurity #AI #GitHubCopilot #security #privacy #cloud #infosec #software@brian_greenberg The CVSS score of 9.6 seems exaggerated for a vulnerability that required a specific, patched configuration and direct developer interaction.
-
R relay@relay.infosec.exchange shared this topic