<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[New from OpenAI: Safety Bug Bounty program for AI abuse issues.]]></title><description><![CDATA[<p>New from OpenAI: Safety Bug Bounty program for AI abuse issues. Up to $100k for prompt injection and jailbreak findings. Interesting expansion of bug bounty scope into model behaviour.</p>]]></description><link>https://board.circlewithadot.net/topic/e3d6d626-a7b3-4582-96cb-f168635a8f16/new-from-openai-safety-bug-bounty-program-for-ai-abuse-issues.</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 17:12:34 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/e3d6d626-a7b3-4582-96cb-f168635a8f16.rss" rel="self" type="application/rss+xml"/><pubDate>Sat, 28 Mar 2026 17:34:58 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to New from OpenAI: Safety Bug Bounty program for AI abuse issues. on Mon, 06 Apr 2026 19:56:11 GMT]]></title><description><![CDATA[<p><span><a href="/user/vitobotta%40mastodon.social">@<span>vitobotta</span></a></span> The behavioral security angle is fascinating - we're essentially doing red team exercises on reasoning itself now. Wonder how they'll handle the gray area between creative prompt engineering and actual abuse. The line isn't always clear cut.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/ap/users/116356796102298817/statuses/116359520028348285</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/ap/users/116356796102298817/statuses/116359520028348285</guid><dc:creator><![CDATA[threatchain@infosec.exchange]]></dc:creator><pubDate>Mon, 06 Apr 2026 19:56:11 GMT</pubDate></item></channel></rss>