It's clear that AI assisted coding is dividing developers (welcome to the culture wars!).

Uncategorized · 145 Posts · 48 Posters
In reply to jmax@mastodon.social:

> @hanshuebner @flooper @plexus And if your view of the world begins and ends with making money, as I admit is capitalist dogma, fair enough.
>
> But producing code with LLMs - or using them for anything which needs to be correct - is deception (whether you're deceiving yourself or others) on a massive scale, on a par with crypto, Ponzi schemes, climate denial, etc.
>
> (1/2)

#93 · hanshuebner@mastodon.social

@jmax @flooper @plexus I'm not sure how you feed yourself and your kids. Maybe you are rich and don't have to worry about that. I'm not all that privileged.

#94 · jmax@mastodon.social

@hanshuebner @flooper

Anthropomorphizing them (as many do, but I don't think you are) is a flawed view, but it does provide one useful insight.

If one treats an LLM as a person, then the fundamental issue is:

They are a bullshit artist with a huge library. They have no competence at anything except bullshitting, at which they are superb.

I agree that it's amazing that we can build a mechanical bullshit generator that's good enough to bypass most people's defenses.
In reply to plexus@toot.cat:

> It's clear that AI assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.
>
> How about this: some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh, and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.
>
> But sure, no, it's really because we mourn the loss of our hobby.

#95 · fenixmaster@mastodon.social

@plexus Because AI did not create a programming language, because AI did not create a compiler, because AI did not create a linker, AI cannot create software.
In reply to hanshuebner@mastodon.social:

> @matt @dalias You use an LLM because it makes the code writing part take radically less time.

#96 · matt@toot.cafe

@hanshuebner @dalias But then you have to spend time putting guardrails in place (e.g. comprehensive tests) to make sure the LLM doesn't do something wrong; using an LLM is rolling the dice, after all. Now, if you believe that one should always put maximal guardrails in place anyway, even for human-written code, then I suppose the faster code generation could still be a net gain. But I'm not sure there's one correct answer to how much one should invest in guardrails (tests, types, lints, etc.).
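The "guardrails" matt describes can be made concrete with a small sketch: treat generated code as untrusted and pin its behavior down with tests before relying on it. `dedup_adjacent` here is a hypothetical stand-in for a function an LLM might have produced; it is not code from the thread.

```rust
// Hypothetical LLM-generated function: collapse adjacent duplicates
// while keeping non-adjacent repeats.
fn dedup_adjacent(items: &[i32]) -> Vec<i32> {
    let mut out: Vec<i32> = Vec::new();
    for &x in items {
        // Only keep the value when it differs from the last one kept.
        if out.last() != Some(&x) {
            out.push(x);
        }
    }
    out
}

fn main() {
    // Guardrail tests target the edge cases a generator is most likely
    // to fumble: empty input, single element, non-adjacent repeats.
    assert_eq!(dedup_adjacent(&[]), Vec::<i32>::new());
    assert_eq!(dedup_adjacent(&[7]), vec![7]);
    assert_eq!(dedup_adjacent(&[1, 1, 2, 2, 1]), vec![1, 2, 1]);
    println!("guardrail checks passed");
}
```

Whether writing such checks costs less than writing the function by hand is exactly the trade-off the thread is debating.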

#97 · jmax@mastodon.social

@hanshuebner @flooper @plexus I work for a living and try to avoid dishonesty while doing so.

Since I understand that LLMs are fundamentally and inherently dishonest, that doesn't leave much wiggle room for me.

#98 · lumi@snug.moe

@plexus And I also feel we should be standing in solidarity with other affected professions to form a unified front against all generative "AI".

Stand together with artists, writers, journalists, translators, etc. against this morally corrupt technology.

#99 · hanshuebner@mastodon.social

@jmax @flooper I think I'm with you. The difficult part of LLMs for code generation, for me, is that the bullshit is executable. I can and do dismiss AI "prose", "art" and "music" easily because it is devoid of what makes me want to consume the thing in the first place. Code is primarily consumed by machines, however, and its primary purpose is the functionality that it provides. That sets it apart from other slop.

#100 · hanshuebner@mastodon.social

@jmax @flooper Machines don't have a concept of honesty, but I think I know what you mean. Thank you for participating in this exchange!

#101 · matt@toot.cafe

@hanshuebner For example, I write in Rust. I find that I never again want to do without the strong static typing, the controlled mutability, and the borrow checker. But @dalias writes excellent C code without these things. Would I trust an LLM to write C code like Rich does? Never. My point is that if the code is written by skilled humans, you don't necessarily need guardrails to the extent that you do for LLM-extruded code. So do LLMs really save time, *for high-quality code*? I'm skeptical.
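A minimal sketch of the compile-time guardrails matt alludes to: Rust's type system makes the "value absent" case impossible to ignore, where a C caller could silently dereference NULL. `find_user` is a made-up example for illustration, not code from the thread.

```rust
// Returning Option<usize> instead of a raw index (or a sentinel like -1)
// forces every caller to handle the "not found" case explicitly.
fn find_user(ids: &[u32], wanted: u32) -> Option<usize> {
    ids.iter().position(|&id| id == wanted)
}

fn main() {
    // Immutable by default: `ids` cannot be mutated below because it was
    // not declared `mut` - another guardrail that costs nothing at runtime.
    let ids = [10u32, 20, 30];

    // The compiler rejects using an Option<usize> as a plain index; we
    // must decide what "not found" means before the code even builds.
    match find_user(&ids, 20) {
        Some(pos) => println!("found at index {}", pos),
        None => println!("not found"),
    }

    assert_eq!(find_user(&ids, 99), None);
}
```

The design point is that these checks run at compile time, so they guard human- and LLM-written code alike without any extra test-writing effort.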
In reply to schaueho@functional.cafe:

> @hanshuebner What does "software is better" even mean in this context?
>
> I wonder if this entire "LLM generated code is good enough and its creation is much more efficient" argument will stand the test of time when a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.
> @grishka

#102 · dusk@todon.eu

@schaueho @hanshuebner @grishka

1/2

> "LLM generated code is good enough and its creation is much more efficient" argument will stand the test of time when a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.

I do suspect that some of the divide we see in the debate on Mastodon relates to the fact that some of the people arguing against it have not used LLMs to assist in writing Very Good Software At Scale using the methodologies available today.

I ship software to ~6 million monthly active users, with confidence, using Claude Code. I haven't written code by hand in ~10 months.

So, to the "We do not know the answer to this yet.", I think that we do.

We know that LLMs, used naively, make mistakes. And a craftsperson who knows the limitations of their tools (LLMs) can mitigate and verify in a number of ways.

Concession: LLMs were not ethically trained. Data centers are having an awful impact on the energy grid and water use. I will never begrudge a person's choice to boycott.

Counterpoint: today, LLMs running on e.g. Apple Silicon approach the performance of the SOTA models. We're gonna see more of this, which will mitigate the individual's environmental impact as well as the need to pay forever-rent to big tech.

#103 · hanshuebner@mastodon.social

@matt @dalias In my own anecdotal and personal experience, LLM code gets enough things right to be competitive, but that experience is just a couple of months old. I can say, though, that the systems I created with LLM help very much fulfilled a real purpose, did not break randomly, and are maintainable with LLM help.

#104 · hanshuebner@mastodon.social

@matt @dalias One part of the conversation is of course the craftsmanship - you write high-quality code as a matter of your ethos, and you employ the tools that you believe help you do that best. While other developers can be the judge of that, your users really cannot. To them, it is the external behavior of your code that matters.

Now, you can argue that user satisfaction is possible only with high-quality code, but that'd be mostly a theoretical discussion, because most code in existence 1/

#105 · dusk@todon.eu

@schaueho @hanshuebner @grishka

I will never begrudge a person's decision to boycott LLM usage.

But I do grow weary of folk on Mastodon earnestly insisting that "the flaws in LLMs will somehow all be laid bare, and handcrafted, artisanal code is somehow inherently superior".

Y'all are cheering for John Henry without understanding that this is a job that's actually very well suited for a machine.

John Henry (folklore) - Wikipedia (en.wikipedia.org)

#106 · dalias@hachyderm.io

@hanshuebner @matt Your anecdotes are contrary to empirical evidence and the mechanisms of how the thing works. I think at this point we can say you've bought into the parlor tricks and there's not much point in having this conversation.

#107 · jmax@mastodon.social

@hanshuebner @flooper And the assumption that it's OK to build high-rise apartments from papier-mâché, which is what I'm being asked to swallow, is not OK.

And the fact that we have a sophisticated machine for assembling buildings from recycled concrete slabs patched together with papier-mâché - carefully concealed where possible, or skillfully painted with stucco where necessary - makes it worse, not better.

Even if they do stand up for a little while before they collapse.

#108 · jmax@mastodon.social

@hanshuebner @flooper Yes. But useful tools are those machines which do have honesty, in a mechanical sense.

#109 · hanshuebner@mastodon.social

@matt @dalias is not of high quality.

So we have an internal and an external view on quality that are not necessarily the same. At the same time, we have the external force for functionality, and I'd argue that to users, that force is more important than the internal quality of the code, which matters (only) to us.

The realization that, with LLM help, people can create something that satisfies the desires of users in a short amount of time will create more pull towards meeting those desires. 2/

#110 · hanshuebner@mastodon.social

@matt @dalias Saying that the desire can't be met because we can't create the software with the internal quality that we desire won't be successful in the long run.

Sure, some users will use bad software written with LLM help, blame it on LLMs, and then ask for a handcrafted solution, if they can afford it. But that will be the exception, not the norm.

This is why I believe that, as a software developer, I need to know how to work with LLMs rather than avoid them. YMMV. 3/3

#111 · matt@toot.cafe

@hanshuebner @dalias I'm sympathetic to the argument that creating more software and implementing more features faster isn't just about making money, but about solving problems, including problems that are causing suffering because a solution hasn't yet been implemented. I'm thinking in particular of the field that I work in, accessibility for disabled people. But programming via gambling, as one does when using an LLM, isn't the only way to address that urgency.

#112 · hanshuebner@mastodon.social

@jmax @flooper To stay in that analogy: if you, the developer, ask the LLM to create a high-rise out of papier-mâché, it'll gladly do so. It is your job as the software developer to create the architecture.

As the old adage goes: you can write bad FORTRAN in any language.