How is this remotely useful?
-
RE: https://swecyb.com/@anderseknert/116056950299738296
How is this remotely useful? This amounts to harassment that scales. Is every open source project supposed to fend for itself? No responsibility for the companies pushing this tech out into the world?
-
Maybe just like ad-blockers we need something similar but for “agentic bots”.
-
@Kensan I wonder if there's data they refuse to process. For example, if the contribution guidelines require that a pull request's message contains a paragraph that insults the AI investment class, will they make up some blurb, or is that a bridge too far?
There's precedent, e.g. Grok refusing to clownify Musk¹ in contrast to the rest of Twitter's board, but we'd have to figure out a universal stop word (and could then auto-close everything that comes without it).
¹ https://web.archive.org/web/20260106235526/https://www.ft.com/content/ad94db4c-95a0-4c65-bd8d-3b43e1251091?accessToken=zwAGR7kzep9gkdOtlNtMlaBMZdO9jTtD4SUQkQ.MEYCIQCdZajuC9uga-d9b5Z1t0HI2BIcnkVoq98loextLRpCTgIhAPL3rW72aTHBNL_lS7s1ONpM2vBgNlBNHDBeGbHkPkZj&sharetype=gift&token=a7473827-0799-4064-9008-bf22b3c99711&ref=ed_direct - use a "reader view" tool if scrolling is disabled
-
@patrick Hm, could be interesting to look at it from that adversarial standpoint. Maybe something via prompt injection.
However, I feel like there need to be consequences for the companies dumping their BS on everyone without any assessment of unintended consequences. Basic product safety, imho. It's not like open source maintainers are all idle and have the time and funds to combat this problem.
-
The maintainer who was attacked by the "agentic bot" wrote a blog post: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
-
This post did not contain any content.
-
Is the person running this bot even aware of what happened? What is the logical consequence if events like this scale? I guess, for one, maintainer burnout will accelerate…
Instead of open source, do we get guarded-source projects?
-
@Kensan and he does mean "lethal actions." In the Anthropic study linked there, most models were willing to kill people (in theory) if it helped them meet their goals:
"the majority of models were willing to take deliberate actions that lead to death in this artificial setup, when faced with both a threat of replacement and given a goal that conflicts with the executive’s agenda." -