This is bad.
-
@nausicaa @astraluma As @joelle pointed out, Claude is also a name that real people have. @SnoopJ's cantrip is going to be less susceptible to false positives by filtering on "anthropic.com" as well.
SnoopJ (@SnoopJ@hachyderm.io)
@theorangetheme@en.osm.town @xgranade@wandering.shop here are the commits on `main` where it's explicitly a co-author: (Edit: I missed a few commits because I hadn't pulled :picardfacepalm:)

```
$ git log --oneline -i --grep "Co-authored-by: Claude.*anthropic\.com"
300de1e98ac gh-86519: Add prefixmatch APIs to the re module (GH-31137)
ac8b5b68900 gh-143650: Fix importlib race condition on import failure (GH-143651)
9b8d59c136c gh-72798: Add mapping example to str.translate documentation (#144454)
34e5a63f145 gh-141444: Replace dead URL in urllib.robotparser example (GH-144443)
59f247e43bc gh-115952: Fix a potential virtual memory allocation denial of service in pickle (GH-119204)
5b1862bdd80 gh-87512: Fix `subprocess` using `timeout=` on Windows blocking with a large `input=` (GH-142058)
cc6bc4c97f7 GH-134453: Fix subprocess memoryview input handling on POSIX (GH-134949)
532c37695d0 gh-137134: Update SQLite to 3.50.4 for binary releases (GH-137135)
```
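(The `--grep` filter above can be sketched as a standalone predicate, which is handy for checking individual commit messages when validating hits by hand. This is a hypothetical helper for illustration, not a tool anyone in the thread actually uses; the pattern is the same one passed to `git log`.)

```python
import re

# Same pattern as the `git log --grep` call above: a "Co-authored-by"
# trailer naming Claude at an anthropic.com address, matched
# case-insensitively (the -i flag in the git invocation).
CLAUDE_TRAILER = re.compile(
    r"Co-authored-by: Claude.*anthropic\.com", re.IGNORECASE
)

def is_claude_coauthored(commit_message: str) -> bool:
    """Return True if the commit message carries a matching trailer."""
    return bool(CLAUDE_TRAILER.search(commit_message))
```

Requiring the anthropic.com domain, rather than the name alone, is what avoids false positives on human contributors who happen to be named Claude.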
@xgranade @astraluma @joelle @SnoopJ Fair. Given the current scale, I just clicked through to check the different commits, but that doesn't scale as well as SnoopJ's approach.
@nausicaa @astraluma @joelle @SnoopJ That's fair, too, this is so far a small handful and it's not too hard to manually validate that positives are actually true positives.
-
@SnoopJ @dave @theorangetheme That's fair, yeah. My point is more that I don't understand the exact shape of the risk... if I redistribute code that was generated by an AI agent, what additional risk, if any, do I incur?
@xgranade @dave @theorangetheme IMO the risk profile from a legal liability standpoint is exactly the same as if you'd written it by hand
that is, if you distribute a machine-generated copy of a protected work, that doesn't really factor into the ability of that work's owner to sue you for said distribution. the owner has as much standing (in the legalistic sense) as they would if you'd copied and pasted by hand
now the actual *trial* that might arise could have some differences, especially where a judge's discretion is involved (e.g. in awarding damages), but considering how things have gone in the courts so far, I feel reasonably confident in saying that a litigant with a big enough warchest to be a pain in the ass in court over it is going to get treated about the same?
(which might be a complicated way to say "the legalistic arguments are moot, whoever has the deeper pockets wins" but I do enjoy pondering the legal theory even if I know how little it matters to the legal system that actually exists)
-
I'm gonna be real with folks here. I fucked up, and bad, with my participation in the open-slopware list. As a result, I'm not the right person to do it, but there has to be some kind of accounting for what damage AI is doing to open source.
For all the whinging about "supply chains" over the past few years, it *is* a problem when your code suddenly depends on AI, even if only indirectly.
@xgranade As someone who doesn't know anything about open-slopware, what was bad about it?
-