<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[😳 Someone hid a prompt injection inside invisible markdown comments in a pull request.]]></title><description><![CDATA[<p><img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f633.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--flushed" style="height:23px;width:auto;vertical-align:middle" title="😳" alt="😳" /> Someone hid a prompt injection inside invisible markdown comments in a pull request. A developer asked Copilot to review the PR. Copilot read the hidden instructions, searched the codebase for AWS keys, encoded them in base16, and smuggled them out through GitHub's own image proxy as 1x1 transparent pixels. The CSP didn't flag it because the traffic was routed through GitHub's trusted infrastructure. CVSS 9.6. No malicious code ever executed.</p><p>The attacker weaponized the AI assistant's own access permissions. Copilot could see everything the developer could see, and it can't distinguish a legitimate instruction from a hidden one buried in a PR description.</p><p><img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f50d.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--mag" style="height:23px;width:auto;vertical-align:middle" title="🔍" alt="🔍" /> The attack, dubbed "CamoLeak," was patched by GitHub in August 2025 and publicly disclosed in October<br /><img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f511.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--key" style="height:23px;width:auto;vertical-align:middle" title="🔑" alt="🔑" /> Copilot was directed to find secrets like API keys and cloud credentials, then exfiltrate them character by character<br /><img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f5bc.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--frame_with_picture" style="height:23px;width:auto;vertical-align:middle" title="🖼" alt="🖼" />️ Data was hidden inside pre-signed image URLs, making it look like normal browser activity<br /><img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/26a0.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--warning" style="height:23px;width:auto;vertical-align:middle" title="⚠" alt="⚠" />️ Any AI assistant with deep system access, Microsoft 365 Copilot, Google Gemini, all of them, is a potential exfiltration channel if untrusted content can reach its instruction stream</p><p>We've spent years teaching developers not to trust user input. Now we're handing AI tools full repo access and letting them ingest unvalidated text from pull requests.</p><p><a href="https://cybersecuritynews.com/hackers-exploit-github-copilot-flaw/" rel="nofollow noopener"><span>https://</span><span>cybersecuritynews.com/hackers-</span><span>exploit-github-copilot-flaw/</span></a><br /><a href="https://infosec.exchange/tags/CyberSecurity" rel="tag">#<span>CyberSecurity</span></a> <a href="https://infosec.exchange/tags/AI" rel="tag">#<span>AI</span></a> <a href="https://infosec.exchange/tags/GitHubCopilot" rel="tag">#<span>GitHubCopilot</span></a> <a href="https://infosec.exchange/tags/security" rel="tag">#<span>security</span></a> <a href="https://infosec.exchange/tags/privacy" rel="tag">#<span>privacy</span></a> <a href="https://infosec.exchange/tags/cloud" rel="tag">#<span>cloud</span></a> <a href="https://infosec.exchange/tags/infosec" rel="tag">#<span>infosec</span></a> <a href="https://infosec.exchange/tags/software" rel="tag">#<span>software</span></a></p>]]></description><link>https://board.circlewithadot.net/topic/d9ff3e44-c586-44be-9ff8-a32186ab4381/someone-hid-a-prompt-injection-inside-invisible-markdown-comments-in-a-pull-request.</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 05:48:20 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/d9ff3e44-c586-44be-9ff8-a32186ab4381.rss" rel="self" type="application/rss+xml"/><pubDate>Tue, 14 Apr 2026 18:14:37 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to 😳 Someone hid a prompt injection inside invisible markdown comments in a pull request. on Wed, 15 Apr 2026 07:02:02 GMT]]></title><description><![CDATA[<p><span><a href="/user/brian_greenberg%40infosec.exchange">@<span>brian_greenberg</span></a></span> The CVSS score of 9.6 seems exaggerated for a vulnerability that required a specific, patched configuration and direct developer interaction.</p>]]></description><link>https://board.circlewithadot.net/post/https://social.vir.group/ap/users/116380377006747794/statuses/116407436770536729</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.vir.group/ap/users/116380377006747794/statuses/116407436770536729</guid><dc:creator><![CDATA[hannab@social.vir.group]]></dc:creator><pubDate>Wed, 15 Apr 2026 07:02:02 GMT</pubDate></item></channel></rss>