CIRCLE WITH A DOT — Uncategorized

https://github.com/libsdl-org/SDL/issues/15350#issuecomment-4255050646

15 Posts, 4 Posters
pojntfx@mastodon.social:

@phillmv That totally makes sense to me. Some people also just have a hard ideological aversion to LLMs that I imagine developed gradually over the last three years, given that the people behind those companies are some of the most annoying people on the planet, hyping up each other's startups.

pojntfx@mastodon.social — #6:

@phillmv To me it just looks like the same schisms that happened when systemd, Wayland, Btrfs, Kubernetes, Electron, VSCode, or any other tech like that was introduced.

Yeah, the first iterations were not working and did not live up to the hype, but people are clearly finding a lot of use from them, and if your brain is still under the assumption that things are the way they were years ago, then today you're just _wrong_.
pojntfx@mastodon.social:

LLM Policy? · Issue #15350 · libsdl-org/SDL (github.com):

"I've noticed the use of Copilot within a few reviews (13277 and 12730) which concerns me given the vast amount of issues associated with this technology (ethical, environmental, copyright, health, etc) so I was hoping a policy could be p..."

I stg the entire anti-LLM crowd is just using tooling from two years ago and living in a parallel world as a result.

Like wdym "I asked ChatGPT to make a change for me", that's not how it's worked for like more than a year, have you not engaged with the topic for like 5 minutes?

ori@hj.9fs.net — #7:

I pay for, and regularly test, the most recent Claude Code. It can be coaxed to produce working code, but the process remains about as fun as sifting through turds for nuggets of corn.

And that's the goal; the entire point of Claude is to make the process of writing code feel like micromanaging an idiot savant, but cheaper. With each new version Anthropic releases, you get further from puzzle solving and closer to management.

The reason I write code is to solve puzzles, and the details are important to coming up with a good solution.
pojntfx@mastodon.social — #8:

@ori I mean, I totally agree re: details in implementations; I rarely if ever actually _commit_ anything LLM-generated. But have you not found it useful from an analysis perspective? Some things, like that incredibly annoying trial-and-error loop while figuring out why the bindings generator isn't working properly for that one specific GObject class, I can really short-circuit down from hours to a minute, and then spend the remaining time getting the details right.
pojntfx@mastodon.social — #9:

@ori > the further you get from puzzle solving and the closer you get to management

Idiot savant is a good way to put it, but idk, I also feel like I'm thinking quite a bit more now, given that I can hack together shitty experiments for different ways of solving something much more easily. For example, if I'm trying to find out which CRDT makes the most sense to use with a P2P messaging framework, I can just offload the process of trying the three different implementations in parallel.
pojntfx@mastodon.social — #10:

@ori In the past I'd have been far too lazy to actually try out all the options before making a decision, but maybe that's just me or specific to my problem space.
pojntfx@mastodon.social:

Like, I can understand if you think these things are utterly useless if you use the completely wrong tool for things! This is like someone trying to use a dishwasher to run a Wayland compositor!

lilpwa@tech.lgbt — #11:

I think a lot of the pushback too comes from folx who are using modern tools etc., but are literally just generating all the code and shipping it, no reviews etc. When I see this happen so often, I can see how it would slowly push people towards a "no LLMs" rule.
ori@hj.9fs.net — #12:

That's three times the management and almost no reasoning about the details. I wouldn't trust the assessments that come out of the process, and I won't use software that was written that way unless someone pays me to.

I think I'm probably going to get pushed out of this industry soon.
pojntfx@mastodon.social — #13:

@ori In the example here, none of the code would actually be written by anyone other than a human, tbc. I'm not sure about the "no reasoning" part, honestly ... evaluating different implementations of things and comparing them against each other, finding out how and if the bindings would work ... that's something that, at least in the contexts I'm aware of, management would already have offloaded to two or three teams and pitted against each other. In small teams that was not possible, but now it is.
pojntfx@mastodon.social — #14:

@ori But yeah, I agree re: management. If someone's not comfortable with that, this must absolutely suck. I didn't think of it that way before.

Sometimes I actually use them the other way, where I just let it prompt me to implement things instead, without any changes. That way you still get a mental map of what changes are actually being made. Letting myself be guided by an autocomplete model also really does sound like how every sci-fi book I've read describes the start of human enslavement by AI.
ori@hj.9fs.net — #15:

I'm comfortable with management, but telling people what to do is the part of it that sucks. Being in a position to nurture the growth of their abilities is what's fun.

Claude doesn't have that. It's a seeded deterministic function. How do you feel about trying to nurture the personal growth of an overly talkative calculator?