now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
-
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
@ariadne many years ago, I trained a Markov model on a decade or two of my IRC utterances to see if I could get it to replace me.
Now I'm realizing I could have described that as an early AI agent and run off with a huge pile of VC money.
-
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
@ariadne They are all quite bad and not really production-ready. Maybe they support Docker at minimum, but of course with local volume mounts and mutable files. But imagine if it could scale workloads in Kubernetes, save to a database and use S3 storage.
-
@ariadne A shitpost bot trained on IRC logs?
Holy fucking shit you found a valid use for "AI".
@thomholwerda i trained it from scratch, this is peak IRC
-
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
@ariadne Did you pull in a tool use data set to fine tune on, or was this accomplished entirely through prompting? I've always been interested in how lean the models can get.
-
@ariadne Did you pull in a tool use data set to fine tune on, or was this accomplished entirely through prompting? I've always been interested in how lean the models can get.
@dvshkn i generated a bunch of examples of valid and invalid JSON document fragments and then prompted it with "reply in JSON" and then a spec on what it can do.
the hardest thing has been convincing it to shut the fuck up actually.
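a minimal sketch of the "reply in JSON" approach described above, with made-up action names for illustration (the actual spec and examples aren't shown in the thread): prompt with a spec of allowed shapes, then validate whatever comes back before acting on it.

```python
import json

# Hypothetical spec of the kind one might prompt the model with.
TOOL_SPEC = """reply in JSON. the only valid shapes are:
  {"action": "say", "text": "<message>"}
  {"action": "noop"}
anything else will be discarded."""

# a few valid/invalid fragments of the kind one might generate for training
EXAMPLES = [
    ('{"action": "say", "text": "hi"}', True),
    ('{"action": "noop"}', True),
    ('{"action": "say" "text": "hi"}', False),  # missing comma: not JSON
    ('say hi', False),                          # not JSON at all
]

def parse_tool_call(reply: str):
    """Return a dict if the reply is a valid tool call, else None."""
    try:
        doc = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if not isinstance(doc, dict) or doc.get("action") not in ("say", "noop"):
        return None
    if doc["action"] == "say" and not isinstance(doc.get("text"), str):
        return None
    return doc
```

the validator is where "convincing it to shut the fuck up" happens in practice: anything that isn't a well-formed call just gets dropped.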
-
@dvshkn i generated a bunch of examples of valid and invalid JSON document fragments and then prompted it with "reply in JSON" and then a spec on what it can do.
the hardest thing has been convincing it to shut the fuck up actually.
@ariadne It might not be well received by everyone, but I would read a blog post if you do write one
-
@thomholwerda i trained it from scratch, this is peak IRC
@ariadne If there are plans to make its... Musings available outside of IRC, I'm bookmarking that.
-
@ariadne If there are plans to make its... Musings available outside of IRC, I'm bookmarking that.
@thomholwerda i have no idea how to grant it the level of autonomy that would allow it to go full bcachefs
-
@ariadne It might not be well received by everyone, but I would read a blog post if you do write one
@dvshkn *shrug* i think my opinions on commercial AI are well understood by now (namely that i am quite skeptical of it)
-
@dvshkn *shrug* i think my opinions on commercial AI are well understood by now (namely that i am quite skeptical of it)
@dvshkn and, if anything, this exercise has only made me *more* skeptical
-
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
@ariadne I have suspected this but never possessed the patience (and possibly the skill) to actually implement it. props
-
@thomholwerda i have no idea how to grant it the level of autonomy that would allow it to go full bcachefs
@ariadne The world is not ready for that.
-
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
first of all, when i began i was quite skeptical of commercial AI.
this exercise has only made me more skeptical, for a few reasons:
first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.
second: reasoning models seem to largely be "mixture of experts" which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.
third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).
fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.
given these things, i really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.
if anything, i just have stronger conviction in that now.
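to make the third point concrete: a sketch of what "a chatbot with a really well-written system prompt" plus a JSON tool protocol looks like as code. everything here is invented for illustration (the model is a stub, the tool names are made up); the point is that the loop itself is the whole trick.

```python
import json

# Hypothetical tool protocol in the system prompt.
SYSTEM_PROMPT = ('reply in JSON: {"tool": "<name>", "args": {...}} '
                 'or {"tool": "done", "result": "<text>"}')

# Stand-in tools; a real agent would have file reads, shell, etc.
TOOLS = {
    "upper": lambda args: args["text"].upper(),
}

def stub_model(messages):
    """Pretend model: call the upper tool once, then finish."""
    if not any(m["role"] == "tool" for m in messages):
        user = next(m for m in messages if m["role"] == "user")
        return json.dumps({"tool": "upper", "args": {"text": user["content"]}})
    result = next(m for m in messages if m["role"] == "tool")
    return json.dumps({"tool": "done", "result": result["content"]})

def run_agent(model, user_text, max_steps=5):
    """The entire 'agent': loop, parse JSON, dispatch tool, feed result back."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_text}]
    for _ in range(max_steps):
        call = json.loads(model(messages))
        if call["tool"] == "done":
            return call["result"]
        out = TOOLS[call["tool"]](call["args"])
        messages.append({"role": "tool", "content": out})
    return None
```

swap the stub for a real model and the lambdas for real tools and you have, roughly, the shape of the commercial offerings.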
-
first of all, when i began i was quite skeptical of commercial AI.
this exercise has only made me more skeptical, for a few reasons:
first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.
second: reasoning models seem to largely be "mixture of experts" which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.
third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).
fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.
given these things, i really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.
if anything, i just have stronger conviction in that now.
@ariadne it was never justifiable, but investors don't have your ability to just go play.
-
first of all, when i began i was quite skeptical of commercial AI.
this exercise has only made me more skeptical, for a few reasons:
first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.
second: reasoning models seem to largely be "mixture of experts" which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.
third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).
fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.
given these things, i really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.
if anything, i just have stronger conviction in that now.
@ariadne I think your question in the fourth point is answered by your first point. A lot of the secret sauce is just hoarding compute.
-
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
@ariadne If you market it right*, you too can sell for a fuck ton of money to Meta.
* Shitposts better than any LLM on Moltbook
-
@ariadne I think your question in the fourth point is answered by your first point. A lot of the secret sauce is just hoarding compute.
@dvshkn oh i could do it if i wanted, it would just take months to years.
-
@dvshkn oh i could do it if i wanted, it would just take months to years.
@ariadne Yeah, you basically already answered it yourself, but China really destroyed the idea that there's some super secret training data that people can't get
-
first of all, when i began i was quite skeptical of commercial AI.
this exercise has only made me more skeptical, for a few reasons:
first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.
second: reasoning models seem to largely be "mixture of experts" which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.
third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).
fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.
given these things, i really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.
if anything, i just have stronger conviction in that now.
@ariadne Having studied up a bit myself I can fill in a few pieces. Reasoning models have just been trained to chatter on in some kind of preamble that is intended to be hidden or de-emphasized in the UI, possibly wrapped in tags like <reasoning>blah blah blah</reasoning>, followed by a shorter answer. Mixture of experts is an orthogonal idea for structuring the models so that predictions can be run using only a subset of the parameters, in order to use less compute. Both ideas make models hard to train, for different reasons.
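a toy sketch of the mixture-of-experts idea: a router scores the experts and only the top-k actually get evaluated, so most of the parameters stay cold on any given token. the "experts" and scores here are invented for illustration; real models route per token with learned gating networks.

```python
import math

# Stand-in experts; in a real MoE layer each would be a feed-forward block.
EXPERTS = [
    lambda x: x + 1.0,   # expert 0
    lambda x: x * 2.0,   # expert 1
    lambda x: x - 3.0,   # expert 2
    lambda x: x / 2.0,   # expert 3
]

def moe_forward(x, router_scores, k=2):
    """Evaluate only the k highest-scoring experts; mix by softmax weight."""
    top = sorted(range(len(router_scores)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    z = sum(math.exp(router_scores[i]) for i in top)
    return sum(math.exp(router_scores[i]) / z * EXPERTS[i](x) for i in top)
```

the compute saving is exactly that the two losing experts are never called.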
-
@ariadne Having studied up a bit myself I can fill in a few pieces. Reasoning models have just been trained to chatter on in some kind of preamble that is intended to be hidden or de-emphasized in the UI, possibly wrapped in tags like <reasoning>blah blah blah</reasoning>, followed by a shorter answer. Mixture of experts is an orthogonal idea for structuring the models so that predictions can be run using only a subset of the parameters, in order to use less compute. Both ideas make models hard to train, for different reasons.
@mirth sure, but the "thinking" ones do some consensus stuff to ensure it doesn't go off course
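one published flavor of that "consensus stuff" is self-consistency: sample several reasoning paths from the model at nonzero temperature and keep the final answer the most paths agree on. the sampled answers below are invented; whether any particular commercial product does exactly this is not something the thread establishes.

```python
from collections import Counter

def self_consistency(sampled_answers):
    """Majority vote over the final answers of independently sampled paths."""
    return Counter(sampled_answers).most_common(1)[0][0]

# e.g. five sampled reasoning paths ended in these (made-up) answers:
paths = ["42", "42", "41", "42", "43"]
consensus = self_consistency(paths)  # "42" wins with 3 of 5 votes
```

the vote is what keeps a single drifting sample from dragging the whole answer off course.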