Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

194 Posts 82 Posters 15 Views

  • jordan@mastodon.subj.am
    #168

    @mjg59 I think the issue is more about the forcing of LLMs/AI into *everything* right now, not specifically F/OSS projects. It reeks of dot-com bubble era marketing and in many cases is completely unnecessary.

  • mnl@hachyderm.io

    @engideer @david_chisnall @mjg59 @ignaloidas I don’t think llms are “rando”. They have randomized elements during training and inference, but they’re not a random number generator. I also would trust a “rando” less than an expert in real life. I wouldn’t trust either blindly either.

    mnl@hachyderm.io
    #169

    @engideer @david_chisnall @mjg59 @ignaloidas also I didn’t say anything of what you quoted, and I don’t know where you got it from.

  • mnl@hachyderm.io

    @ignaloidas @mjg59 @david_chisnall @newhinton how did you gain your confidence? How can you call machine learning a bunch of dice? I try to study and build things every day and yes I don’t trust my code at all, which I think is a healthy attitude to have? I am definitely not able to produce perfect code on the first try.

    ignaloidas@not.acu.lt
    #170

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe through repeated checks and knowledge that humans are consistent.

    And like, really, you don't trust your code at all? I, for example, know that the code I wrote is not going to cheat on unit tests, not going to re-implement half of the things from scratch when I'm working on a small feature, nor will it randomly delete files. After working with people for a while, I can be fairly sure that the code they've written can be trusted to the same standards. LLMs can't be trusted with these things, and in fact have been documented to do all of these things.

    It is not a blind, absolute trust, but trust within reason. The fact that I have to explain this to you is honestly embarrassing.

  • petko@social.petko.me

    @mjg59 but wait, there's more

    What if you're not a renowned security expert and open-source celebrity @mjg59 (who currently works at nvidia btw, profiting from the LLM boom, sorry) but just some guy trying to make ends meet doing some coding?...

    Now you get an LLM mandate from your company that comes with the implication that 'either you boost your productivity by 80% or we fire you and contract a cheap prompter in your place'...

    lasombra_br@mas.to
    #171

    @petko @mjg59 You can see that there’s no care for any of that. It’s all “like LLMs? Good, go use it, it’s fun”. All your ethical beliefs go out of the window as soon as your company shares depend on the hype.

  • ignaloidas@not.acu.lt
    #172

    @mnl@hachyderm.io @engideer@tech.lgbt @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer LLMs are very much random number generators. The distribution is far, far from uniform, but the whole breakthrough of LLMs was the introduction of "temperature", quite literally random choices, to break them out of monotonous tendencies.

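    A minimal sketch of that "temperature" mechanism, assuming a made-up five-token vocabulary and logits (illustrative only, not taken from any real model): the logits are divided by the temperature before the softmax, and the next token is then drawn at random from the resulting distribution instead of always taking the most likely one, so the output is random but nowhere near uniform.

        import numpy as np

        def sample_next_token(logits, temperature=1.0, rng=None):
            """Draw one token index from softmax(logits / temperature)."""
            rng = rng or np.random.default_rng()
            if temperature == 0:
                return int(np.argmax(logits))            # greedy decoding: fully deterministic
            scaled = np.asarray(logits, dtype=float) / temperature
            scaled -= scaled.max()                       # subtract max for numerical stability
            probs = np.exp(scaled) / np.exp(scaled).sum()
            return int(rng.choice(len(probs), p=probs))  # a random draw, but far from uniform

        # Toy vocabulary and logits, purely illustrative.
        vocab = ["the", "cat", "sat", "on", "mat"]
        logits = [4.0, 2.5, 1.0, 0.5, -1.0]

        for t in (0, 0.7, 1.5):
            picks = [vocab[sample_next_token(logits, temperature=t)] for _ in range(10)]
            print(f"temperature={t}: {picks}")

    Higher temperatures flatten the distribution so less likely tokens show up more often; temperature 0 collapses to the same token every time.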

  • mnl@hachyderm.io
    #173

    @ignaloidas @mjg59 @david_chisnall @newhinton but “fairly sure” is not full trust. I can also be “fairly sure” that something works, but I’m not going to trust my judgment and instead will try to validate it and provide proper guardrails so that if it is misbehaving, it is at least contained. Some things will be just fine even if broken, some less so, and will make me invest more of my time. I am not going to try to prove the kernel correct just because I am changing a css color. I don’t see how that is different with llms, and I use them every day. If anything, they allow me to validate more.

  • ignaloidas@not.acu.lt
    #174

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe you are falling into the cryptocurrency fallacy, assuming that you cannot trust anyone and as such have to build stuff assuming everyone is looking to get one over on you.

    This is tiresome, and I do not care to discuss this with you any longer; if you cannot understand that there are levels between "no trust" and "absolute trust", there is nothing more to discuss.

  • mnl@hachyderm.io
    #175

    @ignaloidas @mjg59 @david_chisnall @engideer temperature-based sampling is just one of the many sampling modalities. Nucleus sampling, top-k, frequency penalties: all of these introduce controlled randomness to improve the performance of llms as measured by a wide variety of benchmarks.

    A random sampling of tokens would actually be uniformly distributed… and obviously grammatically correct sentences are a clear sign that we are not randomly sampling tokens.

    Are we talking about the same thing?

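    For readers who have not met the terms, a rough sketch of the top-k and nucleus (top-p) filters mentioned above, applied to a made-up next-token distribution (illustrative numbers only): both throw away part of the distribution before the draw, but the draw itself is still random.

        import numpy as np

        def top_k_filter(probs, k):
            """Keep only the k most likely tokens, then renormalise."""
            probs = np.asarray(probs, dtype=float).copy()
            cutoff = np.sort(probs)[-k]
            probs[probs < cutoff] = 0.0
            return probs / probs.sum()

        def top_p_filter(probs, p):
            """Keep the smallest set of tokens whose cumulative probability reaches p."""
            probs = np.asarray(probs, dtype=float).copy()
            order = np.argsort(probs)[::-1]                  # most likely first
            cumulative = np.cumsum(probs[order])
            keep = order[: int(np.searchsorted(cumulative, p)) + 1]
            filtered = np.zeros_like(probs)
            filtered[keep] = probs[keep]
            return filtered / filtered.sum()

        probs = np.array([0.55, 0.25, 0.10, 0.06, 0.04])     # made-up next-token probabilities
        rng = np.random.default_rng(0)
        print(top_k_filter(probs, k=2))                      # only the two most likely tokens survive
        print(top_p_filter(probs, p=0.9))                    # smallest set covering 90% of the mass
        print(rng.choice(len(probs), p=top_k_filter(probs, k=2), size=10))  # still a random draw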

  • mnl@hachyderm.io
    #176

    @ignaloidas @mjg59 @david_chisnall @newhinton I think you are misreading what I am saying. That is exactly what I am saying. I never fully trust my code, not a single line of it, partly because every line of my code usually requires billions of lines of code I haven’t written to run. I can apply methods and use my experience to trust it enough to run it.

  • ignaloidas@not.acu.lt
    #177

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @engideer@tech.lgbt the fact that something is random does not mean that it has a uniform distribution. "controlled randomness" is still randomness. Taking random points in a unit circle by taking two random numbers for distance and direction will not result in a uniform distribution, but it's still random.

    like, do you even read what you're writing? I'm starting to understand why you don't trust the code you wrote

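    The unit-circle example is easy to check numerically: picking a uniformly random angle and a uniformly random radius is a perfectly legitimate random draw, yet the points it produces pile up near the centre; a uniform spread over the disc needs r = sqrt(u) instead. A small sketch, assuming numpy is available:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # The angle does not affect how far from the centre a point lands,
        # so only the radius matters for checking uniformity over the disc.
        r_naive = rng.uniform(0, 1, n)              # naive draw: uniform radius
        r_uniform = np.sqrt(rng.uniform(0, 1, n))   # uniform over the disc: r = sqrt(u)

        # The inner disc of radius 0.5 covers 25% of the unit disc's area, so a
        # uniform distribution puts ~25% of points inside it; the naive draw puts
        # ~50% there. Both are random, only one is uniformly distributed.
        print(round((r_naive < 0.5).mean(), 3))     # ~0.5
        print(round((r_uniform < 0.5).mean(), 3))   # ~0.25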

  • mnl@hachyderm.io
    #178

    @ignaloidas @mjg59 @david_chisnall @engideer now you are talking about absolute trust. I do think we are indeed talking about different things. Do you use LLMs? Do you assign the same level of trust to qwen-3.6 as to gpt-2? Because I do not, partly based on benchmarks, partly on personal experience, partly on my (admittedly perfunctory) theoretical understanding of its training and inference setup.

  • mjg59@nondeterministic.computer

    Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.

    jens@social.finkhaeuser.de
    #179

    @mjg59 Indeed.

    This is why code generation is not a solution to the problem.

    Which problem? People will phrase it differently, but the basic idea is to outsource *the hard part*, which is the analysis and phrasing of requirements to guide the LLM.

    LLMs suck at dealing with shitty specs. They even suck at dealing with good specs. They even suck at dealing with specs they themselves suggested.

    Link: Outsourcing Thought Is Going Great (Mad Ramblings of a Cyber Arcanist, finkhaeuser.de): On AI generated test code, and how mind-bogglingly stupid that is.

    So using LLMs isn't solving the problem, which is that thinking is hard.

  • seanfurey@mas.to
    #180

    @petko @mjg59

    If the cheap prompter can produce the same results, what are the arguments against this?

    - copyright violation in the training material
    - excessively high use of the world's resources for training and inference

    If both of those were handled (that's a big if. Maybe someday, maybe not) what would the arguments be against choosing the cheap prompter?

  • glyph@mastodon.social
    #181

    @mjg59 and yeah, “not like that” is actually valid, it’s just “having standards”, when “like that” is plagiaristic and error-prone and unsustainable and ecologically damaging on a world-historic scale. you don’t have to cancel every ethical principle you have so you can make a button a color you like better, even if you don’t really know how to code. you can argue that this ethical calculus is *wrong* but it is very silly indeed to pretend it’s contradictory gibberish

  • glyph@mastodon.social
    #182

    @mjg59 you’re doing the thing where you’re romanticizing another profession by assuming the grass is greener. most writers are not novelists. most are writing pretty dry ad copy or instruction manuals or something, just like most programmers aren’t writing especially novel or beautiful algorithms (or, for that matter, video games where algorithmic processes evoke a feeling). you’re just confusing form and content here

  • petko@social.petko.me
    #183

    @seanfurey @mjg59 lmao. Assuming a total of 20 million software developers world-wide, what is the problem with firing 5-10 million people in the span of 1-2 years? You really can't think of any problem with this except the blatant copyright violations and disastrous environmental impact? Those are people, my guy; they and their families need food, shelter, healthcare, and people can't just choose a new craft, let alone while competing with a couple of million in the same situation...

  • tef@mastodon.social
    #184

    @mjg59

    if i am honest the price of such, psychotic breaks, isn't worth the freedom of per request billing

  • tef@mastodon.social
    #185

    @mjg59 it is a fair criticism of free software that they haven't managed to meaningfully increase people's agency over the computer

    but it is a flight of fancy to suggest that extractive labor and outsourcing give people that agency or control

    even before we get to the "software that kills teenagers" part of the faustian pact

  • relay@relay.infosec.exchange shared this topic

  • mjg59@nondeterministic.computer
    #186

    @glyph I think I've covered why the plagiarism bit feels less true to me for code than for other fields, and I don't think the error-prone aspect of it matters for the cases I'm thinking of. The world burning and economic destruction and loss of human skills are certainly a consequence of how these things are currently deployed, but it's not inherent (at least, not to anywhere near this scale), and having it be an immediate "no" rather than "Is there an ethical way to do this?" feels rough.

  • glyph@mastodon.social
    #187

    @mjg59 it sounds unconvincing to me. the plagiarism thing has to do with sustainability, not just aesthetics. software errors tend to be chaotic and compounding and thus you’d need strong edges to the sandbox where the agents were allowed to play, which we don’t have. and the “inherent”-ness is a red herring. it doesn’t matter if there’s a *pretend* version of this tech that is ethical, the real-life version we have has the problems it has, and I haven’t heard any plausible way to separate them
