[jacking off motion] great π
-
and doing so took the model ~336,500 tokens
For reference, the final merged document is about 20 KB of text, so conservatively about 8 tokens per byte processed (assuming I started with 2x 20 KB docs which is overestimating)
Woof.
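The tokens-per-byte arithmetic above can be sanity-checked in a couple of lines (this assumes, as the post does, two 20 KB input documents, which the author notes is an overestimate):

```python
# Rough tokens-per-byte estimate from the numbers in the post.
# Assumption: two ~20 KB input documents (an overestimate, per the post).
tokens_used = 336_500
input_bytes = 2 * 20 * 1024  # 40,960 bytes

tokens_per_byte = tokens_used / input_bytes
print(f"~{tokens_per_byte:.1f} tokens per byte processed")  # ~8.2
```

Since the 2x 20 KB figure overstates the input, the real ratio is at least this high, hence "conservatively about 8".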
@SnoopJ Eye-wateringly inefficient, and even so it made mistakes?

-
no wonder Enthusiasts end up nuking their shit, I wouldn't want to babysit the thing accepting each atomic operation as they come either, with how slow this process is
and the only other alternative is "fuck my shit up as much as you want"
Gemini CLI (at least the one we use at work...) has the option to allow a certain command to proceed without permission (for example, it heavily relies on rg)
It also has a YOLO mode which is not encouraged LOL
-
@Aradayn technologia!
-
@SnoopJ@hachyderm.io (note: this is a real question and not necessarily an Arrested Development "how much is one token, Michael? ten dollars?" reference, but it's not not that, as well)
@aud specific value depends on who you're asking and what day you're asking on. Fractions of a cent, though.
-
@SnoopJ @aud
They're usually sold in batches of 1 million tokens, and input tokens and output tokens may have different prices.
For example with OpenAI's GPT-5.2 model, 1M "Standard" input tokens cost $1.75 and 1M "Standard" output tokens cost $14.
Besides "standard" there's also Priority (more expensive) and flex and batch (both less expensive but probably less flexible or slower): https://developers.openai.com/api/docs/pricing/?latest-pricing=standard
-
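Applying those quoted GPT-5.2 "Standard" rates to the ~336,500 tokens from earlier in the thread gives a rough cost bracket (the actual input/output split isn't known, so this only bounds the extremes):

```python
# Bracket the cost of ~336,500 tokens at the GPT-5.2 "Standard" rates
# quoted above: $1.75 per 1M input tokens, $14 per 1M output tokens.
# The real input/output split isn't known, so compute both extremes.
tokens = 336_500
input_rate = 1.75 / 1_000_000    # dollars per input token
output_rate = 14.00 / 1_000_000  # dollars per output token

low = tokens * input_rate    # if every token were input: ~$0.59
high = tokens * output_rate  # if every token were output: ~$4.71
print(f"${low:.2f} to ${high:.2f}")
```

Either way it works out to small fractions of a cent per token, consistent with the answer above.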
@Doomed_Daniel @aud in this case Copilot is metering us per "premium request" anyway, so I think maybe they've just given up on tokens anyway
(perhaps because the current generation of """reasoning""" models uses such large numbers of tokens babbling to themselves and users would balk at such a price passed onto them)
-
@SnoopJ @Doomed_Daniel @aud Does the model looping on itself consume multiple requests or is a request "user gave input and received output"?
-
@cthos @Doomed_Daniel @aud my understanding is that this would all fit into a single API request
-
@SnoopJ Every now and then I'll watch a video from Nate B Jones who breathlessly extols the virtues of these slop generators and the latest news coming out of the various labs and "frontier models", usually coupled with exclamations about "the thing everyone's getting wrong" or "not talking enough about", etc. Then I come over to Mastodon and see the reality and wonder what world he's living in.
-
@Doomed_Daniel @SnoopJ @aud Which is funny in this case because they're also probably losing money on every request.
-
@cthos@mastodon.cthos.dev @SnoopJ@hachyderm.io @Doomed_Daniel@mastodon.gamedev.place I guess every oil fire smoke plume has a gold lining
*checks earpiece* oh, that's not gold, that's fire? huh...
-
@Doomed_Daniel @cthos @aud I don't think it works that way but they aren't exactly open about what makes a request "premium".
I'm guessing that a user prompt causes a free request, and if the model reasons it wants to upnegotiate to premium billing, it can do so, or something like that?
It's all various shades of "creative accounting"
-
@cthos @Doomed_Daniel @aud there is no possible way they are coming anywhere close to breaking even
-
@Doomed_Daniel@mastodon.gamedev.place @SnoopJ@hachyderm.io @cthos@mastodon.cthos.dev "I will make our entire department dependent on this service offered by a monopoly!!! wait, why are our costs so high now"
boggles the fucking mind that anyone thinks that is a good idea
-
@SnoopJ@hachyderm.io @Doomed_Daniel@mastodon.gamedev.place @cthos@mastodon.cthos.dev "we were losing money on plumbing maintenance, right?"
"uh, I don't... think that we were losing money on tha-"
"so anyway, this company offered to deliver us water way below market price that the water company asks for!! so I ripped out all our pipes and these suckers are HAND BRINGING US WATER NOW."
"what's your plan for if they ever raise prices?"
"... huh?"
-
@Doomed_Daniel @SnoopJ @aud So, yes, they are definitely trying to make themselves indispensable before their creditors come calling, but unless there's some miraculous breakthrough, or the "just accept shit sucks always" approach actually somehow makes them a profit ... they cannot raise their prices high enough to recoup the investment.
-
@Doomed_Daniel@mastodon.gamedev.place @SnoopJ@hachyderm.io @cthos@mastodon.cthos.dev companies when the engineer applying is a woman: "we only want the best, and I'm afraid you're just not it."
companies when the AI boom is happening: "it's obvious your work is unskilled and can be replaced with a machine, none of you are special."