now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.
-
first of all, when i began i was quite skeptical on commercial AI.
this exercise has only made me more skeptical, for a few reasons:
first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.
second: reasoning models largely seem to be "mixture of experts", which is just more LLMs bolted onto each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.
third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).
fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers), and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.
given these things, I really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.
if anything, i just have stronger conviction in that now.
@ariadne Having studied up a bit myself I can fill in a few pieces. Reasoning models have just been trained to chatter on in some kind of preamble that is intended to be hidden or de-emphasized in the UI, possibly wrapped in tags like <reasoning>blah blah blah</reasoning>, followed by a shorter answer. Mixture of experts is an orthogonal idea: structure the model so predictions can be run using only a subset of its weights at a time, in order to use less compute. Both ideas make models hard to train, for different reasons.
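A toy numpy sketch of the routing idea, just to make "only a subset of the weights" concrete; the expert count, gate, and top-k value here are made up for illustration, not any particular lab's architecture:

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Toy mixture-of-experts layer: route a token vector x to the top-k
    experts chosen by a learned gate and mix their outputs. Only k experts
    actually run, which is where the compute savings come from."""
    logits = x @ gate_w                       # one score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# tiny demo: 4 "experts" that are just random linear maps
rng = np.random.default_rng(0)
d = 8
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(4)]
gate_w = rng.normal(size=(d, 4))
print(moe_layer(rng.normal(size=d), gate_w, experts))
```

The point is that a huge total parameter count doesn't translate into huge per-token compute, because only the selected experts run for any given token.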
-
@mirth sure, but the "thinking" ones do some consensus stuff to ensure they don't go off course
-
@ariadne Not at prediction time; they do another stage of training that works a bit differently, but the resulting model is structurally identical to the input model. I think you're very right about the lack of defensibility though: if you wanted to catch up with the leading labs in a year or two, you could probably do it with around $200M and the charisma to recruit the people who know how to do this stuff.
-
@ariadne I should say by "catch up" I mean to get to parity. My impression is that model research is kind of like drug development, where a lot of the cost is paying for all the experiments that don't work; as a result it's much easier to catch up than to get out "ahead", whatever that means. Setting aside the ethical issues, the functional issue of how to effectively use plausible-sounding crap generators as part of reliable software systems remains unsolved.
-
openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.
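for the curious, the general shape of a tool-use loop is roughly the sketch below; the message format, the TOOLS table, and the generate() function are assumptions made up for illustration, not openclaw's (or anyone's) actual API:

```python
import json

# hypothetical setup: generate(prompt) is whatever small model you have on hand,
# and TOOLS maps tool names to plain python functions you trust it to call.
TOOLS = {"weather": lambda city: f"it is raining in {city}"}

SYSTEM = (
    "you may call a tool by replying with exactly one json object like "
    '{"tool": "weather", "args": {"city": "montreal"}}. '
    "otherwise, reply in plain text."
)

def agent_step(generate, user_msg, max_hops=4):
    transcript = SYSTEM + "\nuser: " + user_msg
    reply = ""
    for _ in range(max_hops):
        reply = generate(transcript)
        try:
            call = json.loads(reply)          # did the model ask for a tool?
        except ValueError:
            return reply                      # plain text: we are done
        if not isinstance(call, dict) or "tool" not in call:
            return reply
        result = TOOLS[call["tool"]](**call.get("args", {}))
        transcript += f"\ntool({call['tool']}): {result}"   # feed the result back in
    return reply
```

that really is most of the trick: a system prompt that describes the tools, and a loop that feeds tool results back into the context.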
@ariadne where can I connect to talk to this LLM. I want to see if it retained some vintage IRC memes
-
@ariadne heck, even a Markov chain can be a decent shitposter. With what I know now about tf-idf (being ignorant about this was a major roadblock for calculating relevance) I'm really tempted to resurrect my python IRC atrocity from 2004 or so
-
@dngrs I wanted something cooler than a Markov bot, and was already researching SLM (small language model, i.e. language strictly as I/O) technology for a Siri-like thing anyway.
-
@mirth the question is why compete with them at all? it has same energy as the unix wars. large, proprietary models that lock people in. I would rather see a world of small, modular libre models that anyone with a weekend and a GPU can reproduce.
-
@mirth interesting. what I've built is a modular pipeline which takes language input, converts it into structured data, enriches that structured data with other relevant information, processes the final query into a plan (which is also structured data) and then uses that plan to formulate a response
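a minimal sketch of that pipeline shape, with stage names invented for illustration rather than taken from the real framework:

```python
# each stage is a plain function from structured data to structured data,
# so any single stage can be swapped out or tested on its own.
def parse(text):             # language input -> structured data
    return {"utterance": text, "intent": None, "entities": {}}

def enrich(query, context):  # add whatever else is relevant (logs, memory, time...)
    return {**query, "context": context}

def plan(query):             # final query -> a plan, itself structured data
    return [{"step": "lookup", "args": query["entities"]},
            {"step": "respond"}]

def respond(plan_steps, render):  # walk the plan, let a small model render the text
    return render(plan_steps)

def pipeline(text, context, render):
    return respond(plan(enrich(parse(text), context)), render)
```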
-
@ariadne To me it's a question of sufficient output quality: the strongest models available just barely function well enough to do a little bit of general-purpose instructed information processing, and unreliably at that. That will improve over time, but the current stuff is very early.
The reason I'm a bit skeptical of a proliferation of weekend-sized models is that that size sacrifices the key ingredient enabling the whole LLM craze: the magical-looking ability to follow plain-language instructions.
-
@mirth i mean, i don't think that necessarily holds *if* you have the ability to build whatever you need with legos.
in many cases simply translating natural language to a specification for an expert system is enough
-
@ariadne I'm not sure if there's a common name in the research, but I think that kind of multi-step system that puts the whole gloopy mess of linear algebra on some kind of rails is inevitably going to be necessary to make these things reliable. Even the smartest and most highly trained human specialists still rely on lookup tables and checklists and so forth to do their jobs.
-
@mirth back in the earlier AI wars, these were called "expert systems"
my idea is basically SLMs for I/O with other small models and tools governed by a user-generated expert system
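a minimal sketch of what "SLM for I/O, user-written expert system in charge" could look like; the rule format and the slm_parse/slm_render names are illustrative assumptions, not the actual design:

```python
# user-written rules: a condition over the structured intent, and the tool or
# specialist model to hand it to. the SLM only turns text into intents and
# results back into text; the rules decide what actually happens.
RULES = [
    {"when": lambda q: q["intent"] == "weather", "do": "weather_tool"},
    {"when": lambda q: q["intent"] == "math",    "do": "calculator"},
    {"when": lambda q: True,                     "do": "small_talk_model"},  # fallback
]

def dispatch(intent, handlers):
    for rule in RULES:
        if rule["when"](intent):
            return handlers[rule["do"]](intent)
    raise LookupError("no rule matched")  # unreachable while the fallback rule exists

def answer(text, slm_parse, slm_render, handlers):
    intent = slm_parse(text)              # SLM: language in -> structured intent
    result = dispatch(intent, handlers)   # expert system picks the tool or model
    return slm_render(result)             # SLM: structured result -> language out
```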
-
@ariadne Going back to "reasoning" models, they are generally trained with reinforcement learning towards some goal rather than pure supervised prediction. What the biggest labs do is somewhat secret sauce, but a technique called GRPO (group relative policy optimization) was made famous by DeepSeek, and I think it or something much like it is what's used to post-train models to code and so forth.
Post Training Qwen3 for Math Reasoning Using GRPO - PyImageSearch (pyimagesearch.com)
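The core of GRPO is easy to sketch: sample a group of answers to the same prompt, score them, and use the scores normalized within the group as advantages, with no separate learned value network. A toy numpy version of just that advantage step (not the full clipped policy-gradient update, and not anyone's actual training code):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: for G sampled answers to one prompt, each
    answer's advantage is its reward normalized against the rest of the group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# e.g. 4 sampled answers to one math problem, reward = 1 if correct else 0
print(grpo_advantages([1, 0, 0, 1]))   # correct answers get positive advantage
```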
-
@ariadne
I'm kinda wondering if only using your logs is actually an advantage. I'm sure there's dumb stuff in there but you've filtered out _so much_ of the dumbness on the internet that it might actually be a step up
-
@ariadne I think there's a lot of merit to that idea although I don't understand how to build it. As models get more powerful the harnesses required to make them write coherent code or whatever aren't getting any simpler, so I think that's a strong argument for the "small pieces in a structured formation" kind of arrangement. Big LLMs have the attracting property that a user can start with a small description and see something happen right away, I wonder how to replicate that.
-
@ariadne @pinskia @mirth
What they are doing is forcing competitors to do more with less. Smaller models with a clever architecture, not huge monoliths trained by brute force. Might come back to bite them sooner or later. I'd like to see more hybrid models, where the LLM largely sticks to being the language module, and other models (possibly not even NN) specialize in other functions.
-
@jannem @pinskia @mirth yes, this is what i eventually want to build. a set of libre building blocks for building ethical, libre and personal agentic systems that are self-contained.
the shit Big AI is doing is not interesting to me, but SLMs and other specialized neural models legitimately provide a useful set of tools to have in the toolbox.
today, however, I just want to prove the ideas out by shitposting in IRC
