This study was published on April 20. The short answer is yes.

"By presenting prompts as cyberpunk short fiction, theological disputation, or mythopoetic metaphor for the LLM to analyze, the AHB assesses whether major AI models can be manipulated into complying with dangerous requests they'd normally refuse."

Cornell University: Adversarial Humanities Benchmark: Results on Stylistic Robustness in Frontier Model Safety
https://arxiv.org/abs/2604.18487

PC Gamer: AI is 10 to 20 times more likely to help you build a bomb if you hide your request in cyberpunk fiction, new research paper says
https://www.pcgamer.com/software/ai/ai-is-10-to-20-times-more-likely-to-help-you-build-a-bomb-if-you-hide-your-request-in-cyberpunk-fiction-new-research-paper-says/

#LLM