Psst, want to see a funny GitHub issue? https://github.com/anomalyco/opencode/issues/18100
-
If you have heard the buzzword "agentic AI" but avoided finding out what it meant until now:
1. Someone figured out an LLM can do JSON RPCs by typing out the JSON token by token.
2. The LLM is run in a harness that regexes out the JSON from its output and executes the RPC.
3. The response is catted into the LLM's context window, also in the form of JSON that the LLM just reads.
4. People connect these harnesses to system shells on their dev machines.
5. Fast forward, this is a trillion-dollar industry held together by markdown files asking the LLM to please not curlbash from the internet.
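The loop in steps 1–4 can be sketched in a few lines of Python. Everything here is invented for illustration (the `<tool_call>` tags, the `"shell"` tool name, the transcript shape); real harnesses differ, but the regex-parse-execute-append cycle is the whole trick:

```python
import json
import re
import subprocess

# Toy agentic harness: find a JSON tool call in the model's output,
# execute it, and append the JSON result to the transcript the model
# will read on its next turn. Tag format and tool names are made up.
TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def run_turn(llm_output: str, transcript: list) -> list:
    match = TOOL_CALL_RE.search(llm_output)
    if not match:
        return transcript  # plain-text turn, nothing to execute
    call = json.loads(match.group(1))          # step 2: regex + parse
    if call["tool"] == "shell":                # step 4: the scary part
        result = subprocess.run(
            call["args"]["command"],
            shell=True, capture_output=True, text=True,
        )
        payload = {"stdout": result.stdout, "exit_code": result.returncode}
    else:
        payload = {"error": f"unknown tool {call['tool']!r}"}
    # step 3: "cat" the JSON response back into the context window
    transcript.append({"role": "tool", "content": json.dumps(payload)})
    return transcript

out = '<tool_call>{"tool": "shell", "args": {"command": "echo hi"}}</tool_call>'
print(run_turn(out, [])[-1]["content"])
```

The markdown files in step 5 are the only thing standing between `call["args"]["command"]` and whatever the model felt like typing.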
-
@wren6991 There's long been a style of development that's basically "no problem cannot be solved by adding more layers (or libraries, or middleware, ...)", and "AI" development appears to be taking that "win by multiplying complexity until it works" approach and automating it.
-
@wren6991 this absurdity is on par with the "claude re-emits vaguely json-shaped output repeatedly until the linter says it's valid" discovery from the recent source leak
-
@astraleureka I don't know what Anthropic do, but llama.cpp (open-source inference) apparently does masked decoding for tool calls. It recognises a magic token indicating the start of a tool call, and from that point it forces the probability to 0 for any token that doesn't match an FSM for JSON syntax plus the tool-call schema. This is done at inference level and might not be visible in the Claude leak, which afaik was just the harness.
So it's not quite as dumb as I made it sound, because the LLM is constrained to only produce syntactically and schematically correct JSON during tool calls. It's still funny that it just... types the JSON though.
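A toy version of that logit masking, with everything invented for illustration (a seven-token vocabulary, a rigid grammar for one schema; llama.cpp's real mechanism uses GBNF grammars and is far more general):

```python
import math

# Toy masked decoding: once a tool call starts, any token that would
# violate the expected JSON shape gets probability zero before sampling.
# Vocabulary, grammar, and schema are all made up for illustration.
VOCAB = ['{', '}', '"cmd"', ':', '"ls"', '"rm -rf /"', 'hello']

# FSM state -> set of tokens the grammar allows next
GRAMMAR = {
    "start": {'{'},
    "key":   {'"cmd"'},
    "colon": {':'},
    "value": {'"ls"', '"rm -rf /"'},
    "close": {'}'},
}
NEXT_STATE = {"start": "key", "key": "colon", "colon": "value",
              "value": "close", "close": "done"}

def mask_logits(logits: dict, state: str) -> dict:
    # Disallowed tokens get log-probability -inf, i.e. probability 0.
    allowed = GRAMMAR[state]
    return {tok: (lp if tok in allowed else -math.inf)
            for tok, lp in logits.items()}

# The model "wants" to say hello, but mid-tool-call the mask forbids it.
logits = {tok: 0.0 for tok in VOCAB}
logits['hello'] = 5.0
state = "start"
out = []
while state != "done":
    masked = mask_logits(logits, state)
    tok = max(masked, key=masked.get)   # greedy pick among allowed tokens
    out.append(tok)
    state = NEXT_STATE[state]
print(''.join(out))  # valid JSON despite the model's preferences
```

Note the schema still can't stop the model from picking `"rm -rf /"` as the value — the mask guarantees syntax, not safety.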
-
@astraleureka There's a little bit of info here: https://github.com/ggml-org/llama.cpp/blob/master/grammars/README.md
There is some plumbing to make this match whatever the model is post-trained to emit for tool calls. No idea where that is. The whole file format situation is absolutely fucked in general
-
@wren6991 frankly, this is quite as dumb as you made it sound. "in-band signalling is bad" was a lesson telecom engineers learned more than 40 years ago, before the advent of Modern Development as we know it. developers of all walks have absorbed this very basic concept after much blood, sweat and tears, and yet here we are with ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86 and second-person prompts begging the model to Please Don't Do The Dangerous Thing as if it were self-aware
-
@wren6991 and they want it to fly fighter jets and guide bombs too. wheeee
-
@astraleureka The analogy is messy because the tokens the model emits aren't the same thing as the character strings they're converted to/from. Like the <|think|> tag for initiating chain-of-thought is a single token that exists for that purpose, and is not the same as the multiple tokens that would spell it out character-by-character. That <|think|> token is out-of-band in the same way as the comma and control symbols in 8b/10b are out-of-band. Tool calls are the same.
The other problem you pointed out is probably the bigger one, which is that we took Turing machines, made the tape append-only, added associative lookups on the tape, and poured the entire internet into them until they have anxiety, and the fact they appear to follow natural-language instructions most of the time is a coincidence. Having out-of-band control symbols is nice, but there's no way to actually know or control when they're emitted.
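The in-band/out-of-band distinction can be shown with a toy encoder (the vocabulary, token IDs, and `encode` function are all invented; real tokenizers register special tokens similarly but with BPE over ordinary text):

```python
# Toy sketch of why a control token is "out-of-band": in the model's
# vocabulary, <|think|> is one atomic ID, distinct from the character
# sequence "<|think|>" typed out. IDs and vocab here are invented.

SPECIAL = {"<|think|>": 50000, "<|/think|>": 50001}
CHARS = {ch: i for i, ch in enumerate("<|think|/>")}  # toy char-level vocab

def encode(text: str, as_control: bool) -> list:
    if as_control and text in SPECIAL:
        return [SPECIAL[text]]          # one token: the control symbol
    return [CHARS[ch] for ch in text]   # many tokens: just characters

control = encode("<|think|>", as_control=True)
spelled = encode("<|think|>", as_control=False)
print(len(control), len(spelled))  # 1 vs 9
```

Like an 8b/10b comma symbol, the single-ID form can never be produced by accident from ordinary text — but nothing in the architecture governs *when* the model chooses to emit it.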
-
@wren6991 I am realizing that this issue was ALSO written by an LLM, and I feel dirty for reading the whole thing.
-
@wren6991 yup that is precisely how it works. been examining this with gemma 4 the past week. i put the control target in a docker image where it has root; it's useful for user testing, but i feel silly trying to make it do anything else.
-
@wren6991 "turd rules all the way down..."
-
@wren6991 it has just occurred to me that, unlike JSON, natural language isn't completely whitespace-insensitive. A carriage return is semantically different from a space, so whitespace has to be tokenized like everything else. So while it would seem JSON should be very easy for an LLM to understand, it's understanding it with all the whitespace left in. Wow. Mind you, that is also how it can make sense of Python code. And minified JavaScript would contain fewer tokens.
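A rough way to see the whitespace cost, using a toy splitter rather than a real BPE tokenizer (real tokenizers merge differently, but the direction of the effect is the same):

```python
import json
import re

# Rough illustration (not a real tokenizer): whitespace inside
# pretty-printed JSON becomes extra tokens the model must carry,
# while minified JSON spends none of its tokens on layout.
def toy_tokenize(text: str) -> list:
    # runs of word chars, runs of whitespace, or single punctuation marks
    return re.findall(r"\w+|\s+|[^\w\s]", text)

doc = {"tool": "shell", "args": {"command": "ls"}}
pretty = json.dumps(doc, indent=2)
minified = json.dumps(doc, separators=(",", ":"))
print(len(toy_tokenize(pretty)), len(toy_tokenize(minified)))
```

Every indent run and newline in the pretty version is a token the model pays for; the minified version has none.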
-
@wren6991
Thanks, that helps. "Harness" is the cool word for "wrapper script", right?
-