
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

Uncategorized · 194 Posts · 82 Posters · 15 Views
  • kyle@mastodon.kylerank.in

    @mjg59 You will get backlash, but you are right.

    Free software folks will have to decide whether what they really wanted was for *everyone* to have the freedom to use and modify software, or only for that subset of everyone who had the privilege of learning software development.

    There has always been this elitist dividing line in the community between people who contribute code, and people who contribute all the other things FOSS needs to thrive. Now those people can contribute code too.

    zachdecook@social.librem.one
    #161

    @kyle @mjg59 Proprietary tooling is the reason "Stallman was right" about BitKeeper, but "everyone was better off for having not listened to him" is the pragmatic side.
    Yes, I want people to benefit from the freedom to modify code, but they will never truly be free if they are using a proprietary LLM to make their modifications.

    • mnl@hachyderm.io

      @david_chisnall @mjg59 @ignaloidas I have encountered plenty of people and books that were wrong, so I still have to engage my brain and double check, though.

      engideer@tech.lgbt
      #162

      @mnl @david_chisnall @mjg59 @ignaloidas "Because people can be wrong, there's zero difference between asking an expert and a rando about a subject."

      That's essentially your position. I assume you also support RFK Jr. leading the HHS? After all, medical doctors can be wrong too!

      • chris_evelyn@fedi.chris-evelyn.de

        @mjg59 Yeah, as soon as there’s an ethically sourced and trained free LLM that’s not controlled by very shitty companies, I’m totally on board with you.

        Until then we shouldn’t let that shit near our projects.

        light@noc.social
        #163

        @chris_evelyn
        What do you mean by "ethically sourced and trained"?
        @mjg59

        • engideer@tech.lgbt

          mnl@hachyderm.io
          #164

          @engideer @david_chisnall @mjg59 @ignaloidas I don’t think LLMs are “rando”. They have randomized elements during training and inference, but they’re not a random number generator. I also would trust a “rando” less than an expert in real life. I wouldn’t blindly trust either of them, though.

          • light@noc.social

            chris_evelyn@fedi.chris-evelyn.de
            #165

            @light At minimum that:

            • all input material is legit - either public domain or fairly paid for
            • all labeling/curating is done under good labor conditions

            @mjg59

            • mjg59@nondeterministic.computer

              bazkie@beige.party
              #166

              @mjg59 LLMs do not enable that at all, though? An LLM enables people to make software behave as they wish in much the same way a crowbar enables people to open a door.

              • promovicz@chaos.social

                @mjg59 What you propose is actually illegal, even if the law doesn’t make much sense. I wonder if you ever had the cops sent after you on a corp-run IP case… maybe it would make you feel differently?

                light@noc.social
                #167

                @promovicz
                Let's hope the AI lobby will (in any combination of purposely and inadvertently) make that law obsolete.
                @mjg59

                • mjg59@nondeterministic.computer

                  jordan@mastodon.subj.am
                  #168

                  @mjg59 I think the issue is more on the forcing of LLMs/AI in *everything* right now, not specifically F/OSS projects. It reeks of dot-com bubble era marketing and in many cases is completely unnecessary.

                  • mnl@hachyderm.io

                    mnl@hachyderm.io
                    #169

                    @engideer @david_chisnall @mjg59 @ignaloidas Also, I didn’t say any of what you quoted, and I don’t know where you got it from.

                    • mnl@hachyderm.io

                      @ignaloidas @mjg59 @david_chisnall @newhinton how did you gain your confidence? How can you call machine learning a bunch of dice? I try to study and build things every day, and yes, I don’t trust my code at all, which I think is a healthy attitude to have? I am definitely not able to produce perfect code on the first try.

                      ignaloidas@not.acu.lt
                      #170

                      @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe Through repeated checks and the knowledge that humans are consistent.

                      And like, really, you don't trust your code at all? I, for example, know that the code I wrote is not going to cheat on unit tests, not going to re-implement half of the things from scratch when I'm working on a small feature, nor will it randomly delete files. After working with people for a while, I can be fairly sure that the code they've written can be trusted to the same standards. LLMs can't be trusted with these things, and in fact have been documented to do all of these things.

                      It is not a blind, absolute trust, but trust within reason. The fact that I have to explain this to you is honestly embarrassing.

                      • petko@social.petko.me

                        @mjg59 but wait, there's more

                        What if you're not a renowned security expert and open-source celebrity like @mjg59 (who currently works at NVIDIA, btw, profiting from the LLM boom, sorry) but just some guy trying to make ends meet doing some coding?...

                        Now you get an LLM mandate from your company that comes with the implication that 'either you boost your productivity by 80% or we fire you and contract a cheap prompter in your place'...

                        lasombra_br@mas.to
                        #171

                        @petko @mjg59 You can see that there’s no care for any of that. It’s all “like LLMs? Good, go use it, it’s fun”. All your ethical beliefs go out of the window as soon as your company’s shares depend on the hype.

                        • mnl@hachyderm.io

                          ignaloidas@not.acu.lt
                          #172

                          @mnl@hachyderm.io @engideer@tech.lgbt @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer LLMs are very much random number generators. The distribution is far, far from uniform, but the whole breakthrough of LLMs was the introduction of "temperature", quite literally random choices, to break them out of monotonous tendencies.
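
(For concreteness, here is a minimal sketch of the "temperature" mechanism being argued about. The vocabulary and logits below are invented purely for illustration; real models work over tens of thousands of tokens. The model's scores are divided by a temperature before the softmax, and the next token is then drawn at random from the resulting distribution, so temperature 0 collapses to a deterministic arg-max while higher temperatures make the draw more random.)

import math
import random

def sample_with_temperature(logits, temperature):
    """Draw one token index from softmax(logits / temperature)."""
    if temperature == 0:
        # Temperature 0: no randomness, always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    top = max(scaled)                          # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

vocab = ["the", "a", "cat", "dog", "runs"]     # toy vocabulary (assumption)
logits = [2.0, 1.5, 0.3, 0.2, -1.0]            # toy model scores (assumption)

for t in (0, 0.7, 2.0):
    picks = [vocab[sample_with_temperature(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")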

                          • ignaloidas@not.acu.lt

                            mnl@hachyderm.io
                            #173

                            @ignaloidas @mjg59 @david_chisnall @newhinton but “fairly sure” is not full trust. I can also be “fairly sure” that something works, but I’m not going to trust my judgment and instead will try to validate it and provide proper guardrails so that if it is misbehaving, it is at least contained. Some things will be just fine even if broken, some less so, and those will make me invest more of my time. I am not going to try to prove the kernel correct just because I am changing a CSS color. I don’t see how that is different with LLMs, and I use them every day. If anything, they allow me to validate more.

                            • mnl@hachyderm.io

                              ignaloidas@not.acu.lt
                              #174

                              @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe you are falling into the cryptocurrency fallacy, assuming that you cannot trust anyone and as such have to build everything as if everyone is looking to get one over on you.

                              This is tiresome, and I do not care to discuss this with you any longer. If you cannot understand that there are levels between "no trust" and "absolute trust", there is nothing more to discuss.

                              • ignaloidas@not.acu.lt

                                mnl@hachyderm.io
                                #175

                                @ignaloidas @mjg59 @david_chisnall @engideer temperature-based sampling is just one of many sampling modalities. Nucleus sampling, top-k, frequency penalties: all of these introduce controlled randomness to improve the performance of LLMs as measured by a wide variety of benchmarks.

                                A random sampling of tokens would actually be uniformly distributed… and obviously grammatically correct sentences are a clear sign that we are not randomly sampling tokens.

                                Are we talking about the same thing?
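
(For concreteness, a small sketch of the samplers mentioned here, using a toy vocabulary and a toy probability distribution invented for illustration. Top-k keeps only the k most probable tokens; nucleus/top-p keeps the smallest set of tokens whose probability mass reaches p; both renormalise and then sample, which looks nothing like a uniform draw over the whole vocabulary.)

import random

def top_k_filter(probs, k):
    """Keep only the k most probable tokens, zero the rest, renormalise."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose mass reaches p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, mass = set(), 0.0
    for i in order:
        keep.add(i)
        mass += probs[i]
        if mass >= p:
            break
    kept = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(kept)
    return [q / total for q in kept]

vocab = ["the", "a", "cat", "dog", "runs", "purple"]   # toy vocabulary (assumption)
probs = [0.40, 0.30, 0.15, 0.10, 0.04, 0.01]           # toy model distribution (assumption)

uniform = [1 / len(vocab)] * len(vocab)                # what truly "random tokens" would look like
print("uniform:  ", random.choices(vocab, weights=uniform, k=8))
print("top-k=3:  ", random.choices(vocab, weights=top_k_filter(probs, 3), k=8))
print("top-p=0.9:", random.choices(vocab, weights=top_p_filter(probs, 0.9), k=8))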

                                • ignaloidas@not.acu.lt

                                  mnl@hachyderm.io
                                  #176

                                  @ignaloidas @mjg59 @david_chisnall @newhinton I think you are misreading what I am saying. That is exactly what I am saying. I never fully trust my code, not a single line of it, partly because every line of my code usually requires billions of lines of code I haven’t written to run. I can apply methods and use my experience to trust it enough to run it.

                                  • mnl@hachyderm.io

                                    ignaloidas@not.acu.lt
                                    #177

                                    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @engideer@tech.lgbt the fact that something is random does not mean that it has a uniform distribution. "controlled randomness" is still randomness. Taking random points in a unit circle by taking two random numbers for distance and direction will not result in a uniform distribution, but it's still random.

                                    like, do you even read what you're writing? I'm starting to understand why you don't trust the code you wrote
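
(The unit-circle point can be checked numerically; the sketch below is illustrative only. Drawing a uniform radius and a uniform angle is random but not uniform over the disc, because points pile up near the centre; drawing the radius as the square root of a uniform number gives an area-uniform distribution.)

import math
import random

def naive_point():
    # Uniform radius and uniform angle: random, but biased toward the centre.
    r = random.random()
    theta = random.uniform(0, 2 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

def uniform_point():
    # sqrt correction on the radius gives points uniform over the disc's area.
    r = math.sqrt(random.random())
    theta = random.uniform(0, 2 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

def frac_inside_half_radius(sampler, n=100_000):
    """Fraction of samples that land within distance 0.5 of the origin."""
    return sum(1 for _ in range(n) if math.hypot(*sampler()) < 0.5) / n

# The inner disc of radius 0.5 covers 25% of the area, so an area-uniform
# sampler should land there ~25% of the time; the naive one lands there ~50%.
print("naive:  ", frac_inside_half_radius(naive_point))
print("uniform:", frac_inside_half_radius(uniform_point))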

                                    • ignaloidas@not.acu.lt

                                      mnl@hachyderm.io
                                      #178

                                      @ignaloidas @mjg59 @david_chisnall @engideer Now you are talking about absolute trust. I do think we are indeed talking about different things. Do you use LLMs? Do you assign the same level of trust to qwen-3.6 as to gpt-2? Because I do not, partly based on benchmarks, partly on personal experience, partly on my (admittedly perfunctory) theoretical understanding of their training and inference setups.

                                      • mjg59@nondeterministic.computer

                                        Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.

                                        jens@social.finkhaeuser.de
                                        #179

                                        @mjg59 Indeed.

                                        This is why code generation is not a solution to the problem.

                                        Which problem? People will phrase it differently, but the basic idea is to outsource *the hard part*, which is the analysis and the phrasing of requirements to guide the LLM.

                                        LLMs suck at dealing with shitty specs. They even suck at dealing with good specs. They even suck at dealing with specs they themselves suggested.

                                        Outsourcing Thought Is Going Great: On AI generated test code, and how mind-bogglingly stupid that is. (Mad Ramblings of a Cyber Arcanist, finkhaeuser.de)

                                        So using LLMs isn't solving the problem, which is that thinking is hard.

                                        • petko@social.petko.me

                                          seanfurey@mas.to
                                          #180

                                          @petko @mjg59

                                          If the cheap prompter can produce the same results, what are the arguments against this?

                                          - copyright violation in the training material
                                          - excessively high use of the world's resources for training and inference

                                          If both of those were handled (that's a big if; maybe someday, maybe not), what would the arguments be against choosing the cheap prompter?
