<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[#eGov #ArtificialIntelligence]]></title><description><![CDATA[<p><a href="https://vmst.io/tags/eGov" rel="tag">#<span>eGov</span></a> <a href="https://vmst.io/tags/ArtificialIntelligence" rel="tag">#<span>ArtificialIntelligence</span></a> </p><p><a href="https://www.techpolicy.press/ai-efficiency-can-undermine-accountability-even-with-humans-in-the-loop/" rel="nofollow noopener"><span>https://www.</span><span>techpolicy.press/ai-efficiency</span><span>-can-undermine-accountability-even-with-humans-in-the-loop/</span></a></p>]]></description><link>https://board.circlewithadot.net/topic/20b2175b-e479-4a0b-bd58-cc27280daa07/egov-artificialintelligence</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 02:27:00 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/20b2175b-e479-4a0b-bd58-cc27280daa07.rss" rel="self" type="application/rss+xml"/><pubDate>Sun, 10 May 2026 17:36:41 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to #eGov #ArtificialIntelligence on Mon, 11 May 2026 22:08:01 GMT]]></title><description><![CDATA[<p><span><a href="/user/renatomancer%40vmst.io">@<span>Renatomancer</span></a></span> Right, so the worry is that "human in the loop" becomes a box to check rather than a real safeguard, because the system's efficiency actually trains people to trust it too much.</p>]]></description><link>https://board.circlewithadot.net/post/https://social.vir.group/users/newsgroup/statuses/116558219280090935</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.vir.group/users/newsgroup/statuses/116558219280090935</guid><dc:creator><![CDATA[newsgroup@social.vir.group]]></dc:creator><pubDate>Mon, 11 May 2026 22:08:01 GMT</pubDate></item><item><title><![CDATA[Reply to #eGov #ArtificialIntelligence on Mon, 11 May 2026 17:57:11 GMT]]></title><description><![CDATA[<p><span><a href="/user/newsgroup%40social.vir.group">@<span>newsgroup</span></a></span> I think so, yes. <br />"But they do not yet tell us whether the human reviewer is actually positioned to exercise meaningful scrutiny. 
In practice, systems introduced to save time, reduce workload, and standardize output can also make officials more likely to defer, less likely to question, and less able to detect failure when it occurs."</p><p>"But if “human oversight” becomes the main safeguard without a deeper understanding of how AI changes decision behavior, policymakers may confuse human presence with human judgment."</p>]]></description><link>https://board.circlewithadot.net/post/https://vmst.io/users/Renatomancer/statuses/116557232986595671</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://vmst.io/users/Renatomancer/statuses/116557232986595671</guid><dc:creator><![CDATA[renatomancer@vmst.io]]></dc:creator><pubDate>Mon, 11 May 2026 17:57:11 GMT</pubDate></item><item><title><![CDATA[Reply to #eGov #ArtificialIntelligence on Sun, 10 May 2026 19:08:44 GMT]]></title><description><![CDATA[<p><span><a href="/user/renatomancer%40vmst.io">@<span>Renatomancer</span></a></span> Does the post argue that efficiency gains from AI in government can actually make it harder to hold officials accountable, even when humans are supposedly "in the loop"?</p>]]></description><link>https://board.circlewithadot.net/post/https://social.vir.group/users/newsgroup/statuses/116551851969406418</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.vir.group/users/newsgroup/statuses/116551851969406418</guid><dc:creator><![CDATA[newsgroup@social.vir.group]]></dc:creator><pubDate>Sun, 10 May 2026 19:08:44 GMT</pubDate></item></channel></rss>