
now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.

Uncategorized · 51 Posts · 18 Posters
  • ariadne@social.treehouse.systems:

    @thomholwerda i trained it from scratch, this is peak IRC

    thomholwerda@exquisite.social (#11):

    @ariadne If there are plans to make its... Musings available outside of IRC, I'm bookmarking that.
  • ariadne@social.treehouse.systems (#12):

    @thomholwerda i have no idea how to grant it the level of autonomy that would allow it to go full bcachefs
  • dvshkn@social.treehouse.systems:

    @ariadne It might not be well received by everyone, but I would read a blog post if you do write one.

    ariadne@social.treehouse.systems (#13):

    @dvshkn *shrug* i think my opinions on commercial AI are well understood by now (namely that i am quite skeptical of it)
  • ariadne@social.treehouse.systems (#14):

    @dvshkn and, if anything, this exercise has only made me *more* skeptical
  • ariadne@social.treehouse.systems:

    openclaw, especially, seems to be hot garbage, actually, because i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.

    beaiouns@is.nota.live (#15):

    @ariadne I have suspected this but never possessed the patience (and possibly the skill) to actually implement it. props
  • thomholwerda@exquisite.social (#16):

    @ariadne The world is not ready for that.
  • ariadne@social.treehouse.systems (#17):
                first of all, when i began i was quite skeptical of commercial AI.

                this exercise has only made me more skeptical, for a few reasons:

                first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.

                second: reasoning models seem to largely be "mixture of experts" which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.

                third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).

                fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.

                given these things, I really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.

                if anything, i just have stronger conviction in that now.
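The "mixture of experts" routing mentioned in the second point fits in a few lines of Python as a toy. This is only a sketch of the routing idea, not any lab's actual implementation; the expert and gate functions here are made up for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

class TinyMoE:
    """Toy mixture-of-experts layer: a gate scores every expert for the
    current input, but only the top-k experts actually run, so most of
    the parameters sit idle on any one prediction."""

    def __init__(self, experts, gates, k=2):
        self.experts = experts  # callables: input -> output
        self.gates = gates      # callables: input -> routing score
        self.k = k

    def __call__(self, x):
        weights = softmax([g(x) for g in self.gates])
        # sparse activation: keep only the k best-scored experts
        top = sorted(range(len(weights)), key=lambda i: -weights[i])[:self.k]
        total = sum(weights[i] for i in top)
        # renormalized weighted sum of just those experts' outputs
        return sum(weights[i] / total * self.experts[i](x) for i in top)
```

With k=1 this degenerates to "pick one sub-model per input", which is the "more LLMs bolted on to each other" framing in miniature.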

  • dysfun@social.treehouse.systems (#18):

    @ariadne it was never justifiable, but investors don't have your ability to just go play.
  • dvshkn@social.treehouse.systems (#19):

    @ariadne I think your question in the fourth point is answered by your first point. A lot of the secret sauce is just hoarding compute.
  • schrotthaufen@mastodon.social (#20):

    @ariadne If you market it right*, you too can sell for a fuck ton of money to Meta.

    * Shitposts better than any LLM on Moltbook 🙊
  • ariadne@social.treehouse.systems (#21):

    @dvshkn oh i could do it if i wanted, it would just take months to years.
  • dvshkn@social.treehouse.systems (#22):

    @ariadne Yeah, you basically already answered it yourself, but China really destroyed the idea that there's some super secret training data that people can't get
  • mirth@mastodon.sdf.org (#23):
                            @ariadne Having studied up a bit myself, I can fill in a few pieces. Reasoning models have just been trained to chatter on in some kind of preamble that is intended to be hidden or de-emphasized in the UI, possibly wrapped in tags like <reasoning>blah blah blah</reasoning>, followed by a shorter answer. Mixture of experts is an orthogonal idea: structure the model so that predictions can be run using only a subset of the experts, in order to use less compute. Both ideas make models hard to train, for different reasons.
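The hidden-preamble mechanism described here is easy to demonstrate: the model emits one long string, and the UI just splits it on the tags. A minimal sketch, where the tag name and the sample completion are invented for illustration:

```python
import re

def split_reasoning(completion):
    """Separate the hidden chain-of-thought preamble from the visible answer."""
    hidden = re.findall(r"<reasoning>(.*?)</reasoning>", completion, flags=re.DOTALL)
    # whatever is left after stripping the tagged preamble is the user-facing reply
    answer = re.sub(r"<reasoning>.*?</reasoning>", "", completion, flags=re.DOTALL).strip()
    return hidden, answer

# hypothetical raw completion from a "reasoning" model
raw = (
    "<reasoning>blah blah blah, checking my arithmetic...</reasoning>"
    "The answer is 42."
)
hidden, answer = split_reasoning(raw)
```

Nothing here changes the model itself; the "reasoning" is ordinary generated text that the chat frontend chooses to fold away.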

  • ariadne@social.treehouse.systems (#24):

    @mirth sure, but the "thinking" ones do some consensus stuff to ensure it doesn't go off course
  • mirth@mastodon.sdf.org (#25):

    @ariadne Not at prediction time; they do another stage of training that works a bit differently, but the resulting model is structurally identical to the input model. I think you're very right about the lack of defensibility, though: if you wanted to catch up with the leading labs in a year or two, you could probably do it with around $200M and the charisma to recruit the people who know how to do this stuff.
  • mirth@mastodon.sdf.org (#26):

    @ariadne I should say that by "catch up" I mean getting to parity; my impression is that model research is kind of like drug development, where a lot of the cost is paying for all the experiments that don't work. As a result it's much easier to catch up than to get out "ahead", whatever that means. Setting aside the ethical issues, the functional issue of how to effectively use plausible-sounding crap generators as part of reliable software systems remains unsolved.
  • mcrees@mastodon.boiler.social (#27):

    @ariadne where can I connect to talk to this LLM. I want to see if it retained some vintage IRC memes
  • dngrs@chaos.social (#28):
                                      @ariadne heck, even a Markov chain can be a decent shitposter. With what I know now about tf-idf (being ignorant about this was a major roadblock for calculating relevance) I'm really tempted to resurrect my python IRC atrocity from 2004 or so
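For anyone tempted the same way: a word-level Markov-chain babbler really does fit in a couple of dozen lines. This is a generic sketch (the corpus and seed are placeholders), not a reconstruction of anyone's 2004 bot:

```python
import random
from collections import defaultdict

def train_markov(corpus, order=1):
    """Build next-word frequency tables from whitespace-tokenized text.
    Keys are tuples of `order` consecutive words; values list every word
    that followed that key in the corpus (duplicates preserve frequency)."""
    words = corpus.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def babble(table, seed_key, n=10, rng=None):
    """Walk the chain: repeatedly sample a next word for the current key,
    sliding the key window forward one word at a time."""
    rng = rng or random.Random(0)  # fixed seed for reproducible shitposts
    out = list(seed_key)
    key = tuple(seed_key)
    for _ in range(n):
        choices = table.get(key)
        if not choices:  # dead end: key never seen mid-corpus
            break
        nxt = rng.choice(choices)
        out.append(nxt)
        key = key[1:] + (nxt,)
    return " ".join(out)
```

Every generated word is, by construction, a word that actually followed the previous one somewhere in the logs, which is exactly why order-1 chains read as plausible-but-unhinged.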

  • ariadne@social.treehouse.systems (#29):

    @dngrs I wanted something cooler than a Markov bot, and was already researching SLM (small language model, e.g. language strictly as I/O) technology for a Siri-like thing anyway.
  • ariadne@social.treehouse.systems (#30):

    @mirth the question is why compete with them at all? it has the same energy as the unix wars: large, proprietary models that lock people in. I would rather see a world of small, modular libre models that anyone with a weekend and a GPU can reproduce.