now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.

Uncategorized · 51 posts · 18 posters

mirth@mastodon.sdf.org

@ariadne Having studied up a bit myself I can fill in a few pieces. Reasoning models have just been trained to chatter on in some kind of preamble that is intended to be hidden or de-emphasized in the UI, possibly wrapped in tags like <reasoning>blah blah blah</reasoning>, followed by a shorter answer. Mixture of experts is an orthogonal idea for structuring the model so predictions can be run using only a subset of its parameters, in order to use less compute. Both ideas make models hard to train, for different reasons.
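
For the curious, the "only a subset of its parameters" trick is small enough to sketch. Below is a minimal top-k mixture-of-experts layer in PyTorch; the sizes and the SimpleMoE name are made up for illustration, not taken from any particular lab's architecture:

    # Minimal mixture-of-experts sketch: a router scores the experts and only
    # the top-k run per token, so most parameters stay idle each forward pass.
    import torch
    import torch.nn as nn

    class SimpleMoE(nn.Module):
        def __init__(self, dim=64, n_experts=8, k=2):
            super().__init__()
            self.router = nn.Linear(dim, n_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                for _ in range(n_experts)
            )
            self.k = k

        def forward(self, x):                      # x: (tokens, dim)
            weights, idx = self.router(x).topk(self.k, dim=-1)
            weights = weights.softmax(dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.k):             # run only the chosen experts
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    print(SimpleMoE()(torch.randn(5, 64)).shape)   # torch.Size([5, 64])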

ariadne@social.treehouse.systems #24

@mirth sure, but the "thinking" ones do some consensus stuff to ensure it doesn't go off course

mirth@mastodon.sdf.org #25

@ariadne Not at prediction time; they do another stage of training that works a bit differently, but the resulting model is structurally identical to the input model. I think you're very right about the lack of defensibility though: if you wanted to catch up with the leading labs in a year or two you could probably do it with around $200M and the charisma to recruit the people who know how to do this stuff.

mirth@mastodon.sdf.org #26

@ariadne I should say by "catch up" I mean to get to parity; my impression is the model research is kind of like drug development, where a lot of the cost is paying for all the experiments that don't work. As a result it's much easier to catch up than to get out "ahead", whatever that means. Setting aside the ethical issues, the functional issue of how to effectively use plausible-sounding crap generators as part of reliable software systems remains unsolved.

ariadne@social.treehouse.systems

now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.

openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.

mcrees@mastodon.boiler.social #27

@ariadne where can I connect to talk to this LLM? I want to see if it retained some vintage IRC memes

ariadne@social.treehouse.systems

first of all, when i began i was quite skeptical of commercial AI.

this exercise has only made me more skeptical, for a few reasons:

first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.

second: reasoning models seem to largely be "mixture of experts", which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.

third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).

fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI. (a rough sketch of this kind of training run follows after this post.)

given these things, I really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.

if anything, i just have stronger conviction in that now.
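
For a sense of scale, the training run in the "fourth" point is roughly this shape when done with Hugging Face transformers. A minimal sketch only: the log path, the borrowed gpt2 tokenizer, the model size and the hyperparameters are all placeholders, not ariadne's actual setup:

    # Minimal from-scratch causal-LM training sketch with transformers/datasets.
    from datasets import load_dataset
    from transformers import (AutoTokenizer, GPT2Config, GPT2LMHeadModel,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tok = AutoTokenizer.from_pretrained("gpt2")
    tok.pad_token = tok.eos_token

    raw = load_dataset("text", data_files={"train": "logs/*.txt"})  # placeholder path

    def tokenize(batch):
        return tok(batch["text"], truncation=True, max_length=512)

    ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

    # fresh random weights, i.e. trained from scratch rather than fine-tuned
    model = GPT2LMHeadModel(GPT2Config(n_layer=12, n_head=12, n_embd=768))

    trainer = Trainer(
        model=model,
        args=TrainingArguments("irc-lm", per_device_train_batch_size=8,
                               num_train_epochs=1, fp16=True),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()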

dngrs@chaos.social #28

@ariadne heck, even a Markov chain can be a decent shitposter. With what I know now about tf-idf (being ignorant about this was a major roadblock for calculating relevance) I'm really tempted to resurrect my python IRC atrocity from 2004 or so
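
The Markov approach really is weekend-sized. A minimal order-2 chain in Python; the corpus path is invented:

    # Minimal order-2 Markov chain text generator, the classic IRC-bot approach.
    import random
    from collections import defaultdict

    def train(tokens, order=2):
        chain = defaultdict(list)
        for i in range(len(tokens) - order):
            chain[tuple(tokens[i:i + order])].append(tokens[i + order])
        return chain

    def generate(chain, length=30):
        state = random.choice(list(chain))     # random starting bigram
        out = list(state)
        for _ in range(length):
            nxt = chain.get(tuple(out[-2:]))   # order-2: condition on last two words
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    corpus = open("irc.log").read().split()    # placeholder corpus path
    print(generate(train(corpus)))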

ariadne@social.treehouse.systems #29

@dngrs I wanted something cooler than a Markov bot, and was already researching SLM (small language model, i.e. language strictly as I/O) technology for a Siri-like thing anyway.

ariadne@social.treehouse.systems #30

@mirth the question is why compete with them at all? it has the same energy as the unix wars: large, proprietary models that lock people in. I would rather see a world of small, modular libre models that anyone with a weekend and a GPU can reproduce.

ariadne@social.treehouse.systems #31

@mirth interesting. what I've built is a modular pipeline which takes language input, converts it into structured data, enriches that structured data with other relevant information, processes the final query into a plan (which is also structured data), and then uses that plan to formulate a response.
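
That staging is concrete enough to sketch. Everything below is hypothetical (the names, the toy intent detection); only the parse -> enrich -> plan -> respond flow comes from the post:

    # Sketch of the staged pipeline described above.
    from dataclasses import dataclass, field

    @dataclass
    class Query:                 # language input turned into structured data
        text: str
        intent: str = ""
        context: dict = field(default_factory=dict)

    @dataclass
    class Plan:                  # the plan is also structured data
        steps: list

    def parse(text: str) -> Query:
        q = Query(text=text)
        q.intent = "weather" if "weather" in text.lower() else "chat"
        return q

    def enrich(q: Query) -> Query:        # add other relevant information
        q.context["user_tz"] = "UTC"
        return q

    def plan(q: Query) -> Plan:           # final query -> plan
        if q.intent == "weather":
            return Plan(steps=["lookup_forecast", "render_reply"])
        return Plan(steps=["small_talk"])

    def respond(q: Query, p: Plan) -> str:  # plan -> response
        return f"[{q.intent}] executing: {', '.join(p.steps)}"

    q = enrich(parse("what's the weather like?"))
    print(respond(q, plan(q)))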

mirth@mastodon.sdf.org #32

@ariadne To me it's a question of sufficient output quality: the strongest models available just barely function well enough to do a little bit of general-purpose instructed information processing, unreliably. That will improve over time but the current stuff is very early.

The reason I'm a bit skeptical of a proliferation of weekend-sized models is that that size sacrifices the key ingredient enabling the whole LLM craze: the magical-looking ability to run plain language instructions.

ariadne@social.treehouse.systems #33

@mirth i mean, i don't think that necessarily holds *if* you have the ability to build whatever you need with legos.

in many cases simply translating natural language to a specification for an expert system is enough
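
A sketch of that division of labor: the model's only job is to emit a structured spec, and a plain rule table does the actual work. The spec format and rules here are invented, and the "model" is hard-coded for the sake of a runnable example:

    # Sketch: natural language -> structured spec -> deterministic rule engine.
    import json

    RULES = {
        ("remind", "daily"): lambda s: f"cron: 0 9 * * * notify {s['what']!r}",
        ("remind", "once"):  lambda s: f"at: {s.get('when', '?')} notify {s['what']!r}",
    }

    def slm_to_spec(utterance: str) -> dict:
        # stand-in for a small language model that emits JSON
        return json.loads('{"action": "remind", "schedule": "daily", "what": "stand up"}')

    def run(spec: dict) -> str:
        rule = RULES.get((spec["action"], spec["schedule"]))
        return rule(spec) if rule else "no matching rule"

    print(run(slm_to_spec("remind me to stand up every day")))

The appeal is that everything after slm_to_spec is auditable and deterministic; the fuzzy component never touches the execution path directly.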

mirth@mastodon.sdf.org #34

@ariadne I'm not sure if there's a common name in the research, but I think that kind of multi-step system that puts the whole gloopy mess of linear algebra on some kind of rails is inevitably going to be necessary to make these things reliable. Even the smartest and most highly trained human specialists still rely on lookup tables and checklists and so forth to do their jobs.

ariadne@social.treehouse.systems #35

@mirth back in the earlier AI wars, these were called "expert systems"

my idea is basically SLMs for I/O with other small models and tools governed by a user-generated expert system

mirth@mastodon.sdf.org #36

@ariadne Going back to "reasoning" models, they are generally trained with reinforcement learning towards some goal rather than pure supervised prediction. What the biggest labs do is somewhat secret sauce but a technique called "GRPO" was made famous by DeepSeek and I think it or something much like it is what's used to post-train models to code and so forth.

Link: Post Training Qwen3 for Math Reasoning Using GRPO - PyImageSearch (pyimagesearch.com)
"Fine-tuning Qwen3 for advanced math reasoning using GRPO: boosting precision, structure, and problem-solving accuracy post-training."
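
The group-relative part of GRPO is compact enough to show. This sketch covers only the advantage computation; the full objective adds PPO-style clipping and a KL penalty against a reference model, and the reward values here are made up:

    # Core GRPO idea: advantages are computed relative to a group of sampled
    # completions for the same prompt, so no separate value network is needed.
    import torch

    def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
        # rewards: (n_prompts, group_size), one scalar per sampled completion
        mean = rewards.mean(dim=1, keepdim=True)
        std = rewards.std(dim=1, keepdim=True)
        return (rewards - mean) / (std + 1e-4)

    # Each completion's log-probs then get weighted by its advantage:
    # completions above their group's mean are reinforced, the rest suppressed.
    rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],   # e.g. 1 = passed a checker
                            [0.0, 0.0, 1.0, 0.0]])
    print(grpo_advantages(rewards))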

pixx@merveilles.town #37

@ariadne
I'm kinda wondering if only using your logs is actually an advantage.

I'm sure there's dumb stuff in there but you've filtered out _so much_ of the dumbness on the internet that it might actually be a step up

mirth@mastodon.sdf.org #38

@ariadne I think there's a lot of merit to that idea, although I don't understand how to build it. As models get more powerful, the harnesses required to make them write coherent code or whatever aren't getting any simpler, so I think that's a strong argument for the "small pieces in a structured formation" kind of arrangement. Big LLMs have the attractive property that a user can start with a small description and see something happen right away; I wonder how to replicate that.

pinskia@hachyderm.io #39

@mirth @ariadne This here explains why the US companies are so upset with China.

ariadne@social.treehouse.systems #40

@pinskia @mirth yep they broke the illusion.

IMO the real reason OpenAI reserved all of this RAM and shit is to prevent competitors from buying it

jannem@fosstodon.org #41

@ariadne @pinskia @mirth
What they are doing is forcing competitors to do more with less. Smaller models with a clever architecture, not huge monoliths trained by brute force. Might come back to bite them sooner or later.

I'd like to see more hybrid models, where the LLM largely sticks to being the language module, and other models (possibly not even NN) specialize in other functions.

ariadne@social.treehouse.systems #42

@jannem @pinskia @mirth yes, this is what i eventually want to build. a set of libre building blocks for building ethical, libre and personal agentic systems that are self-contained.

the shit Big AI is doing is not interesting to me, but SLMs and other specialized neural models legitimately provide a useful set of tools to have in the toolbox.

today, however, I just want to prove the ideas out by shitposting in IRC 😉

ariadne@social.treehouse.systems #43

@jannem @pinskia @mirth that said, i think that OpenAI and other hardware/resource hoarders need to be called out on the fact that they don't need all of this to ship product

there really is no need to destroy the climate or make professional GPUs cost as much as a recent vintage used car
