[jacking off motion] great π
-
Well, I've already spotted mistakes, so let's have clod chew on it some more with directions to unfuck it
I really resent the way this CLI handles permissions. For editing a single file, you have a choice between "allow this specific edit operation this one time" and "allow all file operations for the rest of this session on any files"
A "you can touch *that* file as much as you like" option seems like an obvious thing to add if you give a shit about limiting the blast radius of letting the model use external tools, but I guess that's a bridge too far huh
@SnoopJ IT, understanding consent, we've been here

-
@arrjay sucks here
-
no wonder Enthusiasts end up nuking their shit, I wouldn't want to babysit the thing accepting each atomic operation as it comes either, with how slow this process is
and the only other alternative is "fuck my shit up as much as you want"
Verdict: a more powerful model is capable of doing the dumb parts of the document concatenation, but nursing it through making fine edits for that concatenation to make sense is worse than just editing the document by hand.
So, I don't think it's very good at this kind of paperwork, either. Maybe for a document nobody cares about.
-
@glyph yes, I mean this specific output
-
and doing so took the model ~336,500 tokens
For reference, the final merged document is about 20 KB of text, so conservatively about 8 tokens per byte processed (assuming I started with 2x 20 KB docs which is overestimating)
Woof.
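That ratio pencils out; a quick back-of-the-envelope sketch using the numbers above (the 2x 20 KB input size is the post's own deliberate overestimate):

```python
# Tokens consumed per byte of source text, using the figures from this post.
tokens_used = 336_500           # total tokens reported for the merge
input_bytes = 2 * 20 * 1024     # two ~20 KB documents (a deliberate overestimate)

tokens_per_byte = tokens_used / input_bytes
print(f"~{tokens_per_byte:.1f} tokens per byte")  # ~8.2
```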
-
@SnoopJ@hachyderm.io what's the cost of a token?
-
@SnoopJ@hachyderm.io (note: this is a real question and not necessarily an Arrested Development "how much is one token, Michael? ten dollars?" reference, but it's not not that, as well)
-
@SnoopJ Eye-wateringly inefficient, and even so it made mistakes?

-
Gemini CLI (at least the one we use at work...) has the option to allow a certain command to proceed without permission (for example, it heavily relies on rg)
It also has a YOLO mode which is not encouraged LOL
-
@Aradayn technologia!
-
@aud specific value depends on who you're asking and what day you're asking on. Fractions of a cent, though.
-
@SnoopJ @aud
They're usually sold in batches of 1 million tokens, and input tokens and output tokens may have different prices.
For example with OpenAI's GPT-5.2 model, 1M "Standard" input tokens cost $1.75 and 1M "Standard" output tokens cost $14.
Besides "standard" there's also Priority (more expensive) and flex and batch (both less expensive but probably less flexible or slower): https://developers.openai.com/api/docs/pricing/?latest-pricing=standard
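Plugging the ~336,500-token run from upthread into those list prices gives a rough bracket (a sketch only; the actual input/output split, and whatever Copilot really pays internally, are unknown):

```python
# Bracket the cost of a ~336,500-token run at the GPT-5.2 list prices above:
# $1.75 per 1M input tokens, $14 per 1M output tokens. The true cost lies
# somewhere between the all-input and all-output extremes.
PRICE_INPUT_PER_TOKEN = 1.75 / 1_000_000
PRICE_OUTPUT_PER_TOKEN = 14.00 / 1_000_000

tokens = 336_500
low = tokens * PRICE_INPUT_PER_TOKEN    # if every token were billed as input
high = tokens * PRICE_OUTPUT_PER_TOKEN  # if every token were billed as output
print(f"between ${low:.2f} and ${high:.2f}")  # between $0.59 and $4.71
```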
-
@Doomed_Daniel @aud in this case Copilot is metering us per "premium request" anyway, so I think maybe they've just given up on tokens
(perhaps because the current generation of """reasoning""" models uses such large numbers of tokens babbling to themselves and users would balk at such a price passed onto them)
-
@SnoopJ @Doomed_Daniel @aud Does the model looping on itself consume multiple requests or is a request "user gave input and received output"?
-
@cthos @Doomed_Daniel @aud my understanding is that this would all fit into a single API request
-
@SnoopJ Every now and then I'll watch a video from Nate B Jones who breathlessly extols the virtues of these slop generators and the latest news coming out of the various labs and "frontier models", usually coupled with exclamations about "the thing everyone's getting wrong" or "not talking enough about", etc. Then I come over to Mastodon and see the reality and wonder what world he's living in.
-
@Doomed_Daniel @SnoopJ @aud Which is funny in this case because they're also probably losing money on every request.
-
@cthos@mastodon.cthos.dev @SnoopJ@hachyderm.io @Doomed_Daniel@mastodon.gamedev.place I guess every oil fire smoke plume has a gold lining
*checks earpiece* oh, that's not gold, that's fire? huh...