Claude code source "leaks" in a mapfile
-
- Claude code source "leaks" in a mapfile
- people immediately use the code laundering machines to code launder the code laundering frontend
- now many dubious open source-ish knockoffs in python and rust being derived directly from the source
What's anthropic going to do, sue them? Insist in court that an LLM recreating copyrighted code is a violation of copyright???
-
This code is so fucking funny dude I swear to god. I have wanted to read the internal prompts for so long and I am laughing so hard at how much of them are like "don't break the law, please do not break the law, please please please be good!!!!" Very Serious Ethical Alignment Technology
-
My dogs I am crying. They have a whole series of exception types that end with _I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS, and the docstring explains this is "to confirm you've verified the message contains no sensitive data." Like the LLM resorts to naming its variables with prompt text to remind itself not to leak data while writing its code, which, of course, it ignores and prints the error directly.

-
So the reason that Claude code is capable of outputting valid JSON is that, if the prompt text suggests the output should be JSON, it enters a special loop in the main query engine that just validates the output against a JSON Schema and then feeds it, along with the error message, back into itself until it is valid JSON or a retry limit is reached.
This code is so eye-wateringly spaghetti that I am still trying to see if this is true, but this seems to be how it not only returns JSON to the user, but how it handles all LLM-to-JSON conversion, including internal output from its tools. There appears to be an unconditional hook where, if the JSON output tool is present in the session config at all, then every tool call must be followed by the "force into JSON" loop.
If that's true, that's just mind-blowingly expensive.
edit: please note that unless I say otherwise, all evaluations here are just from my skimming through the code on my phone and have not been validated in any way that should cause you to be upset with me for impugning the good name of anthropic
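if the loop is what it looks like, the shape would be something like this minimal sketch - every identifier here (forceIntoJson, queryModel, the retry count) is invented for illustration, none of it is the actual leaked code:

```typescript
type Validator = (value: unknown) => string | null; // error message, or null if valid

// Sketch of a "validate-and-retry until it's JSON" loop, as described above.
// All names and the retry limit are my own guesses, not Claude Code internals.
async function forceIntoJson(
  queryModel: (prompt: string) => Promise<string>,
  prompt: string,
  validate: Validator,
  maxRetries = 3,
): Promise<unknown> {
  let lastError = "";
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // On retries, feed the previous error message back into the model.
    const input =
      attempt === 0
        ? prompt
        : `${prompt}\n\nYour previous output was not valid JSON: ${lastError}`;
    const raw = await queryModel(input);
    try {
      const parsed: unknown = JSON.parse(raw);
      const schemaError = validate(parsed);
      if (schemaError === null) return parsed; // valid: exit the loop
      lastError = schemaError;
    } catch (e) {
      lastError = String(e); // not even parseable JSON
    }
  }
  throw new Error(`gave up after ${maxRetries} attempts: ${lastError}`);
}
```

the expensive part being, of course, that every failed attempt is a whole extra model round-trip.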
-
MAKE NO MISTAKES LMAO
-
Oh cool, so it's explicitly programmed to hack as long as you tell it you're a pentester

-
I am just chanting "please don't be a hoax please don't be a hoax please be real please be real" looking at the date on the calendar
-
I'm seeing people on orange forum confirming that they did indeed see the sourcemap posted on npm before the version was yanked, so I am inclined to believe "real." Someone could do some kind of structural AST comparison or whatever you call it to validate that the decompiled sourcemap matches the obfuscated release version, but that's not gonna be how I spend my day https://news.ycombinator.com/item?id=47584540
-
My schaden is nicely freuded after seeing both this code dump and the fustercluck where people on Anthropic’s $100/$200 monthly plans are blowing through their 5-hour and weekly token allotments in no time flat.
Linked Reddit thread has numerous examples of pissed-off users; my favorite so far is the person who blew through the 5-hour quota trying to get Claude to realize that the 24th of March this year was not, in fact, a Monday. https://www.reddit.com/r/ClaudeAI/comments/1s7fcjf/claude_usage_limits_discussion_megathread_ongoing/
-
There is a lot of clientside behavior gated behind the environment variable USER_TYPE=ant, which seems to be read directly off the node env var accessor. No idea how much of that would be serverside verified, but boy is that sloppy. These paths are often labeled in comments as "anthropic only" or "internal only," so the intention to gate them from external users is clear lol
-
(I need to go do my actual job now, but I'll be back tonight with an actual IDE instead of just scrolling, jaw agape, on my phone, seeing the absolute dogshit salad that was the product of enough wealth to meet some large proportion of all real human needs, globally.)
-
reminder that anthropic ran (and is still running) an ENTIRE AD CAMPAIGN around "Claude code is written with claude code" and after the source was leaked that has got to be the funniest self-own in the history of advertising because OH BOY IT SHOWS.
it's hard to get across in microblogging format just how big of a dumpster fire this thing is, because what it "looks like" is "everything is done a dozen times in a dozen different ways, and everything is just sort of jammed in anywhere." to the degree there is any kind of coherent structure like "tools" and "agents" and whatnot, it's entirely undercut by how the entire rest of the code might have been written under some special condition that completely changes how any such thing might work. I have read a lot of unrefined, straight-from-the-LLM code, and Claude code is a masterclass in exactly what you get when you do that - an incomprehensible mess.

-
OK i can't focus on work and keep looking at this repo.
So after every "subagent" runs, claude code creates another "agent" to check on whether the first "agent" did the thing it was supposed to. I don't know about you, but i smell a bit of a problem: if you can't trust whether one "agent" with a very big fancy model did something, how in the fuck are you supposed to trust another "agent" running on the smallest, crappiest model?
That's not the funny part - that's obvious and fundamental to the entire show here. HOWEVER, RECALL the above JSON Schema verification thing that is unconditionally added onto the end of every round of LLM calls. the mechanism for adding that hook is... JUST FUCKING ASKING THE MODEL TO CALL THAT TOOL. second pic is registering a hook such that after some stop state happens, if there isn't a message indicating that we have successfully called the JSON validation thing, prompt the model with "you must call the json validation thing."
this shit sucks so bad they can't even CALL THEIR OWN CODE FROM INSIDE THEIR OWN CODE.
Look at the comment in pic 3 - "e.g. agent finished without calling structured output tool" - that's common enough that they have a whole goddamn error category for it, and the way it's handled is by just pretending the job was cancelled and nothing happened.
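for what it's worth, the nagging mechanism described above reduces to something like this - all identifiers invented, a sketch of the pattern rather than the actual leaked code:

```typescript
// Minimal model of a transcript message; the real types are surely messier.
interface Message {
  role: "assistant" | "tool" | "system";
  toolName?: string;
}

// Hypothetical reconstruction of the hook: after a stop state, scan the
// transcript for evidence the structured-output tool was called. If it
// wasn't, return a prompt nagging the model to call it; otherwise null.
function structuredOutputNag(transcript: Message[]): string | null {
  const called = transcript.some(
    (m) => m.role === "tool" && m.toolName === "StructuredOutput",
  );
  return called
    ? null
    : "You must call the StructuredOutput tool with your final answer.";
}
```

i.e. the "enforcement" is just checking for the tool call after the fact and asking again - there's no code path that can invoke the validator directly.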
[screenshots of the code attached]
-
So ars (first pic) ran a piece similar to the one that the rest of the tech journals did: "claude code source leaked, whoopsie! programmers are taking a look at it, some are finding problems, but others are saying it's really awesome."
like "inspiring and humbling" is not the word dog. I don't spend time on fucking twitter anymore so i don't hang around people who might find this fucking dogshit tornado inspiring and humbling. Even more than the tornado, i am afraid of the people who look at the tornado and say "that's super fucking awesome, i can only hope to get sucked up and shredded like lettuce in a vortex of construction debris one day"
the (almost certainly generated) blog post is the standard kind of vacuous linkedin shillposting that one has come to expect from the gambling addicts, but i think it's illustrative: the only thing they are impressed with is the number of lines. 500k lines of code for a graph processing loop in a TUI is NOT GOOD. The only comments they make on the actual code itself are "heavily architected" (what in the fuck does that mean), "modular" (no the fuck it is not), and that it runs on bun rather than node (so??? they own it!!!! of course it does!!!). and then the predictable close of "oh and also i'm writing exactly the same thing, come check out mine"
the only* people this shit impresses are people who don't know what they're looking at and just appreciate the size of it all, or have a bridge to sell.
* I got in trouble last time i said "only" - nothing in nature is ever "only this or that," i am speaking emphatically and figuratively. there are other kinds of people who are impressed with LLMs too. Please also note that my anger is directed towards the grifters profiting off of it and people who are pouring gas on the fire and enabling this catastrophe by giving it intellectual, social, and other cover. I know there are folks who just chat with the bots because they need someone to talk to, etcetera and so on. people in need who are just making use of whatever they can grab to hang on are not who I am criticizing, and never are.
[screenshots attached]
-
(those numbers are also totally fucking wrong - the query engine is not 46k sloc. i have no idea what those numbers correspond to; as far as i can tell, "nothing," and this is just hallucinated dogshit, which i guess is what passes for high-quality public comment nowadays)
-
If i can slip in a quick PSA while my typically sleepy notifications are exploding: these are all very annoying things to say, and you might want to reconsider whether they're ever worth saying in a reply directed at someone else - who are they for? what do they add?
- "why are you surprised" / "even worse than {thing} itself is people being surprised at {thing}": unless the person is saying "i am surprised by this," they are likely not surprised by the thing. just saying something doesn't mean you are surprised by it, and people talking about something usually have paid attention to it before the moment you are encountering them. this is pointless hostility toward people who are saying something you supposedly agree with so much that you think everyone should already believe it
- "it's always been like this": slightly different from the above. unless someone is saying "this is literally new and nothing like this has happened before," or you are adding actual historical context that you know for sure they don't already know, you're basically saying "hey, did you know this thing you care enough about to be paying attention to and talking about frequently has happened before now as well?" this is so easy to frame in a way that says "yes, and" rather than "i assume you don't know about the things i know about due to being very smart." e.g. "dang, not again, they keep doing {thing}"
- "{thing} might be bad, but {alternative/unrelated, unmentioned, non-mutually exclusive thing} is even worse": multiple things can be bad at the same time, and not mentioning something does not mean i don't think it's also bad
- "funny how people who think {thing} is bad also think {alternative/unrelated, unmentioned thing} is good": closely related to the above. just because you have binarized your thinking does not mean everyone else has.
anyway, if the mental image you are conjuring for your interlocutors positions them as always knowing less than you by default, that might be something to look into in yourself!
-
i sort of love how LLM comments sometimes tell entire stories that nobody asked for. claude code even has specific system prompt language about this, but it always ends up making comments about what something used to do, like "now we do x instead of y." like... ok? that is why i am reading the current version of the code!
so claude code is just not capable of rescuing itself from its own context - if an entry in its context window throws an error, it just keeps throwing that error forever until you clear it. good stuff.
(and, of course, we read the entire file before checking this, rather than just reading the first 5 bytes)
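(the cheap version, for reference, would be something like this - readFirstBytes is my own stand-in name, nothing like it necessarily exists in the repo:)

```typescript
import { openSync, readSync, closeSync } from "node:fs";

// Read only the first n bytes of a file, instead of slurping the whole
// thing into memory just to inspect the head. Hypothetical helper.
function readFirstBytes(path: string, n = 5): Buffer {
  const fd = openSync(path, "r");
  try {
    const buf = Buffer.alloc(n);
    const bytesRead = readSync(fd, buf, 0, n, 0);
    return buf.subarray(0, bytesRead); // may be shorter than n for tiny files
  } finally {
    closeSync(fd);
  }
}
```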

-
this is super minor, and i've seen this in human code plenty of times, but in this app it's the norm, verging on formal code style.
so you have a file reading tool, you need to declare what kinds of file extensions it supports. that's very normal. claude code takes the interesting strategy of defining what extensions it doesn't read. that's also defensible, there are a zillion text extensions. i've seen strategies that just read an initial range of bytes and see if some proportion of them are ascii or unicode.
where does this get declared? why of course in as many places as there are rules.
hasBinaryExtension() comes from constants/files.ts, isPDFExtension() comes from utils/pdfUtils.ts (which checks if the file extension is a member of the set {'pdf'}), and IMAGE_EXTENSIONS is declared in the FileReadTool.ts file. of course, elsewhere we also have IMAGE_EXTENSION_REGEX from utils/imagePaste (sometimes used directly, other times with its wrapper isImageFilePath), and TEXT_FILE_EXTENSIONS in utils/claudemd.ts. and we also have many inlined mime type lists and sets. and all of these somehow manage to implement the check differently. so rather than having, for example, a getFileType() function, we have both exactly-the-same and kinda-the-same logic redone in place every time it is needed, which is hundreds of times. but that's none of my business, that's just how code works now and i need to get with the times.
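for contrast, the single helper i'm wishing for would be something like this - the extension sets below are illustrative stand-ins, not the actual lists from the leak:

```typescript
type FileType = "image" | "pdf" | "binary" | "text";

// One declaration per category, in one place. These sets are examples,
// not the real lists from the leaked source.
const IMAGE_EXTENSIONS = new Set(["png", "jpg", "jpeg", "gif", "webp", "bmp"]);
const BINARY_EXTENSIONS = new Set(["exe", "dll", "so", "o", "class", "zip", "gz"]);

// Single point of truth for "what kind of file is this path?", instead of
// re-deriving the answer differently at every call site.
function getFileType(path: string): FileType {
  const ext = path.includes(".") ? path.split(".").pop()!.toLowerCase() : "";
  if (ext === "pdf") return "pdf";
  if (IMAGE_EXTENSIONS.has(ext)) return "image";
  if (BINARY_EXTENSIONS.has(ext)) return "binary";
  return "text"; // default: there are a zillion text extensions
}
```

note the "everything else is text" default matches the denylist strategy the tool already uses; the point is that it'd be written once.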
-
i love this. there's a mechanism for slipping secret messages to the LLM that it is told to interpret as system messages. there is no validation around these of any kind on the client, and there doesn't seem to be any differentiation by location or source, so that looks like a nice prompt injection vector. this is how claude code reminds the LLM not to do a malware, and it's applied by just string concatenation. i can't find any place where these get stripped aside from when displaying output. it actually looks like all the system reminders get catted together before being sent to the API. neat!
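as far as i can tell, the mechanism boils down to something like this - a hedged sketch where the delimiter text and function name are my guesses, not verified against the source:

```typescript
// Hypothetical sketch of the concatenation described above: "system"
// reminders are just strings glued in front of the user message, with
// nothing at the protocol level distinguishing them from ordinary text.
function withSystemReminders(userMessage: string, reminders: string[]): string {
  const block = reminders
    .map((r) => `<system-reminder>\n${r}\n</system-reminder>`)
    .join("\n");
  // No escaping, no validation: straight string concatenation.
  return block.length > 0 ? `${block}\n${userMessage}` : userMessage;
}
```

since nothing escapes or validates the content, any file or tool output that happens to contain the same delimiter text would read as a "system" message too - which is the injection vector.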
[screenshots attached]