CIRCLE WITH A DOT


People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.

Uncategorized · 15 Posts · 11 Posters
This topic has been deleted. Only users with topic management privileges can see it.
  • bodil@social.treehouse.systems

    People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.

    And every damned time, every damned time any of that code surfaces, like Anthropic's flagship offering just did, somehow it's exactly the pile of steaming technical debt and fifteen year old Stack Overflow snippets we were assured your careful oversight made sure it isn't.

    Can someone please explain this to me? Is everyone but you simply prompting it wrong?

    It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.

    • benjamineskola@hachyderm.io
    #4

    @bodil “you still need a human in the loop”, they tell me, while consistently failing to be at all effective when they’re the human that should be in the loop.


      • bodil@social.treehouse.systems
      #5

      You know, it isn't even that tools like this are useless. There are absolutely things they could be good at. I've personally seen Claude find stupid little bugs you'd spend an hour figuring out and hating yourself for afterwards with great efficiency. I tried the first iteration of Copilot, back when it was just an aggressive autocomplete, and while I had to stop using it because it was overconfidently trying to finish my programs for me without being asked, it was great for filling in boilerplate and maybe even a couple lines of real code for the basic stuff. We have models nowadays that are actually trained to find bugs and security issues in code rather than having the entire internets thrown at them to produce something Altman & Amodei can sell to the gullible as AGI.

      But there's the problem. The technology has been around for a while, we have a good idea of what it's good for and, more importantly, what it's not. "Our revolutionary expert system for finding bugs in your code" isn't nearly as marketable to the general public, and the CEO class especially, as "our revolutionary PhD level sentient AI that will solve all the world's problems if you only give us another couple trillion dollars, and also wants to be your girlfriend." And so we get Claude and ChatGPT and RAM shortages and AI psychosis and accelerated climate change instead of smaller, focused models that are actually good at their specialist subjects. Because those don't produce as much shareholder value.


        • cargot_robbie@urbanists.social
        #6

        @bodil I imagine that the fact that no one has to dive into the spaghetti means they don't care about it. Treating it like bytecode or binaries, the optimization and maintenance of which are Somebody Else's Problem™. I've only just started reading about folks profiling the trash heaps these things spit out, and it doesn't look great.


          • oysteivi@snabelen.no
          #7

          @bodil
          I work in ops, not development, but those sound engineering practices and tight code reviews must be partly theater to guilt people into submitting better work in the first place, right? Too bad Claude Code isn't a human with any sense of shame.


            • hopeless@mas.to
            #8

            @bodil

            > Can someone please explain this to me?

            Sure: code whose job is managing a natural-language LLM isn't going to look like the procedural code you're used to.

            If you have doubts whether coding assistants like https://antigravity.google are any use, download it, try it on your own code with your own choice of tasks and find out.

            You can throw the changes away if you are worried about getting contaminated.

            You can write about your experiment here. And, you will actually know.


              • flaki@flaki.social
              #9

              @bodil I liked @mmasnick's take on how mayyyybe there's a silver lining to code generation: it could help re-democratize personal computing, so that not just the personal computer but also its software can be customized and home-grown.

              I like to think that sammy boi is out there, trying to buy up the world's complete silicon wafer production because he spends his sleepless nights dreading gen AI breaking loose of his ilk's corporate capture.

              I'm sure many of us won't gleefully march into local-AI boosterism without addressing the (open-weight) elephant in the room; maybe that's one way truly open & fair models will leave the fairy realm of the Mozilla Foundation's "Wouldn't It Be Cool..?!!" list.

              Like, waiting for the "AI bubble to pop" is like hoping for an alien invasion: all it will bring is pain and destruction with no clear "ok, what now?" that follows. I like the _hopefulness_ of his perceived trajectory and I truly hope we get there before we split the planet in half. 😶


                • mathis@metalhead.club
                #10

                @bodil Anthropic is not maintaining sound engineering practices. It's just impossible at the speed they're pushing. The way the Claude Code tech lead talks about it, it's clear that there's no tight code review. It's a company pushing the "coding is solved, SWE is dead" narrative. The last thing they want to admit is that even if the code is pretty good, you still need a human in the loop.


                  • flaki@flaki.social
                  #11

                  @bodil ( https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/ )


                    • janl@narrativ.es
                    #12

                    @flaki @bodil Note that for one of the notable examples in this article (Fray) the author (Derek) has debunked the analogy.


                      • flaki@flaki.social
                      #13

                      @janl @bodil ugh, haven't seen his comment before, but honestly not surprised about his reaction 😞


                        • patrys@mastodon.online
                        #14

                        @bodil

                        Oh no, the probability engine is producing average output.

                        surprisedpikachu.jpeg

                          • hopeless@mas.to
                          #15

                          @Landa @bodil

                          > Your explanation just restates the observation

                          OP has a point and a question... the point is Anthropic's leaked code not looking like what they expected. That's because its job is not the kind of code they're used to.

                          The question is "are LLMs useful for writing code". To which I encourage them to stop being passive-aggressive about it and actually find out, and write about it, like a human with agency.

                          Your response is "just" denial. Please let us know your experience with antigravity...

                          • relay@relay.publicsquare.global shared this topic