It's clear that AI assisted coding is dividing developers (welcome to the culture wars!).

145 Posts · 48 Posters
  • plexus@toot.cat wrote:

    It's clear that AI assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc, as opposed to people who just "want to make it work". As if that explains the divide.

    How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

    But sure, no, it's really because we mourn the loss of our hobby.

    janeishly@beige.party
    #83

    @plexus Translators are hearing this all the time too (with a side helping of "you just hate technology", which I'm assuming devs don't get!). No, we just want the job done right.

    If we'd realised earlier that clients would accept any old shit provided it looked like roughly the right language, we'd all have made a lot more money.

    • matt@toot.cafe wrote:

      @hanshuebner @dalias If LLMs create shitty prose, images, and music, why is code the exception? Simply because that's the area that we work in and we're afraid of losing our jobs? (I admit I'm not immune to that fear.)

      hanshuebner@mastodon.social
      #84

      @matt @dalias Code is different because it has a function that is beyond human reception.

      • matt@toot.cafe
        #85

        @hanshuebner @dalias The details still matter though. The same lack of attention to detail that makes LLM prose, images, and music shitty, will come back to bite us, or the people affected by our work, sooner or later, in the form of defects. So I'd rather give each detail the attention it deserves, by writing the code myself, than roll the dice and find out later that some detail in that mass of LLM-extruded code was wrong -- possibly subtly wrong, in a way that's easy to miss in review.
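
        To make that concrete, here is a minimal, purely hypothetical Rust sketch (not from any real project) of the kind of one-line detail that reads fine at a glance and still ships a defect:

        // Hypothetical example: split a fee evenly between `n` recipients.
        // Looks reasonable in review, but integer division silently drops
        // the remainder, so cents go missing whenever total % n != 0.
        fn split_fee(total_cents: u64, n: u64) -> Vec<u64> {
            let share = total_cents / n; // remainder is lost here
            (0..n).map(|_| share).collect()
        }

        fn main() {
            let shares = split_fee(100, 3);
            let paid: u64 = shares.iter().sum();
            println!("paid {} of 100 cents", paid); // paid 99 of 100 cents
        }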

        • hanshuebner@mastodon.social wrote:

          @can @plexus Sorry. I'm not great at words.

          manutoky@det.social
          #86

          @hanshuebner @can @plexus Actually, you are doing great putting my exact feelings into words. Thanks for that!

          • hanshuebner@mastodon.social
            #87

            @matt @dalias You are absolutely right, but here's the thing: Code review also does not prevent subtle bugs from creeping into the code base when humans wrote the code. Review is just one of the tools that ensure software quality.

            This is to say that code written by LLMs and by humans suffers from similar issues, requires similar care and review, and can fail in similar ways. There is more LLM code, though, and there are new challenges because scaling with LLMs works differently than with humans.

            • matt@toot.cafe
              #88

              @hanshuebner @dalias Isn't it obvious, though, that the risks are higher when you have an LLM generate code statistically from a natural-language prompt, as opposed to writing the code and paying attention to every detail yourself?

              • hanshuebner@mastodon.social
                #89

                @matt @dalias Statistically, you will have more bugs because you have more software. But also, you can easily create tests, refactor and make executable requirements.

                Making good software with LLM support is hard work and takes time. If you look at the stuff that people make with three prompts and then post to LinkedIn, you know what I mean.

                A good program requires attention to detail, no matter what the tool does for you.
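
                As a sketch of what "executable requirements" can mean in practice (hypothetical Rust, nobody's real code): the requirement is written down as a check first, and no implementation, hand-written or LLM-written, is accepted until it passes:

                // Hypothetical requirement: a discount is a percentage in 0..=100
                // and must never increase the price.
                fn apply_discount(price_cents: u64, percent: u64) -> u64 {
                    assert!(percent <= 100, "discount out of range");
                    price_cents - price_cents * percent / 100
                }

                fn main() {
                    // The requirement as an executable check, run against whatever
                    // implementation is proposed, regardless of who or what wrote it.
                    for price in [0u64, 1, 99, 10_000] {
                        for percent in [0u64, 1, 50, 100] {
                            assert!(apply_discount(price, percent) <= price);
                        }
                    }
                    println!("requirement holds for all sampled inputs");
                }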

                • hanshuebner@mastodon.social wrote:

                  @jmax @flooper @plexus I don't believe that "getting stuff done" is an ideology, but rather the reality under which every worker lives in capitalism. We're not getting paid for doing the right or the good thing, we're paid for getting the work done that the man wants us to do.

                  jmax@mastodon.social
                  #90

                  @hanshuebner @flooper @plexus And if your view of the world begins and ends with making money, as I admit is capitalist dogma, fair enough.

                  But producing code with LLMs - or using them for anything which needs to be correct - is deception (whether you're deceiving yourself or others) on a massive scale, on a par with crypto, Ponzi schemes, climate denial, etc.

                  (1/2)

                  • matt@toot.cafe
                    #91

                    @hanshuebner @dalias So then why do it with an LLM as opposed to the hard work of writing the code directly? Is it just to appease capital's irrational demands?

                    • hanshuebner@mastodon.social
                      #92

                      @matt @dalias You use an LLM because it makes the code writing part take radically less time.

                      • hanshuebner@mastodon.social
                        #93

                        @jmax @flooper @plexus I'm not sure how you feed yourself and your kids. Maybe you are rich and don't have to worry about that. I'm not all that privileged.

                        • jmax@mastodon.social
                          #94

                          @hanshuebner @flooper

                          Anthropomorphizing them (as many do, but I don't think you are) is a flawed view, but does provide one useful insight.

                          If one treats an LLM as a person, then the fundamental issue is:

                          They are a bullshit artist with a huge library. They do not have competence at anything except bullshitting, at which they are superb.

                          I agree that it's amazing that we can build a mechanical bullshit generator that's good enough to bypass most people's defenses.

                          • fenixmaster@mastodon.social
                            #95

                            @plexus Because AI did not create a programming language, because AI did not create a compiler, because AI did not create a linker, AI cannot create software.

                            • matt@toot.cafe
                              #96

                              @hanshuebner @dalias But then you have to spend time putting guardrails in place (e.g. comprehensive tests) to make sure the LLM doesn't do something wrong; using an LLM is rolling the dice, after all. Now, if you believe that one should always put maximal guardrails in place anyway even for human-written code, then I suppose the faster code generation could still be a net gain. But I'm not sure there's one correct answer to how much one should invest in guardrails (tests, types, lints, etc.).
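
                              For what it's worth, here is one toy illustration of the "types as guardrails" idea (hypothetical Rust, not an established API): make the invariant impossible to skip, so code from any source, person or model, has to go through the check:

                              // Hypothetical guardrail: an Email can only be obtained via parse(),
                              // so any code that wants one is forced to validate and handle failure.
                              #[derive(Debug)]
                              struct Email(String);

                              impl Email {
                                  fn parse(raw: &str) -> Result<Email, String> {
                                      let raw = raw.trim();
                                      if raw.contains('@') {
                                          Ok(Email(raw.to_string()))
                                      } else {
                                          Err(format!("not an email address: {raw:?}"))
                                      }
                                  }
                              }

                              fn send_welcome(to: &Email) {
                                  // Anything reaching this point is validated by construction.
                                  println!("sending welcome mail to {:?}", to);
                              }

                              fn main() {
                                  match Email::parse("person@example.org") {
                                      Ok(addr) => send_welcome(&addr),
                                      Err(err) => eprintln!("{err}"),
                                  }
                              }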

                              • jmax@mastodon.social
                                #97

                                @hanshuebner @flooper @plexus I work for a living and try to avoid dishonesty while doing so.

                                Since I understand that LLMs are fundamentally and inherently dishonest, that doesn't leave much wiggle room for me.

                                • lumi@snug.moe
                                  #98

                                  @plexus and i also feel we should be standing in solidarity with other affected professions to form a unified front against all generative "ai"

                                  stand together with artists, writers, journalists, translators, etc etc against this morally corrupt technology

                                  • hanshuebner@mastodon.social
                                    #99

                                    @jmax @flooper I think I'm with you. The difficult part of LLMs for code generation for me is that the bullshit is executable. I can and do dismiss AI "prose", "art" and "music" easily because it is devoid of what makes me want to consume the thing in the first place. Code is primarily consumed by machines, however, and its primary purpose is the functionality that it provides. That sets it apart from other slop.

                                    • hanshuebner@mastodon.social
                                      #100

                                      @jmax @flooper Machines don't have a concept of honesty, but I think I know what you mean. Thank you for participating in this exchange!

                                      • matt@toot.cafe
                                        #101

                                        @hanshuebner For example, I write in Rust. I find that I never again want to do without the strong static typing, the controlled mutability, and the borrow checker. But @dalias writes excellent C code without these things. Would I trust an LLM to write C code like Rich does? Never. My point is that if the code is written by skilled humans, you don't necessarily need guardrails to the extent that you do for LLM-extruded code. So do LLMs really save time, *for high-quality code*? I'm skeptical.
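
                                        A tiny, self-contained illustration of the kind of thing I mean (hypothetical and deliberately trivial): the borrow checker rejects a pattern that a C compiler would accept and that can dangle after a reallocation:

                                        fn main() {
                                            let mut v = vec![1, 2, 3];
                                            let first = &v[0]; // shared borrow into the vec's buffer

                                            // v.push(4); // rejected at compile time: cannot mutate `v`
                                            //            // while `first` still borrows it; in C, the push
                                            //            // could realloc and leave the pointer dangling.

                                            println!("first = {first}");
                                            v.push(4); // fine here: the borrow has already ended
                                            println!("v = {:?}", v);
                                        }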

                                        • schaueho@functional.cafe wrote:

                                          @hanshuebner What does "software is better" even mean in this context?

                                          I wonder if this entire "LLM generated code is good enough and its creation is much more efficient" argument will stand the test of time when a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.
                                          @grishka

                                          dusk@todon.eu
                                          #102

                                          @schaueho @hanshuebner @grishka

                                          1/2

                                          > "LLM generated code is good enough and it's creation is much more efficient" argument will stand the test of time when a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.

                                          I do suspect that some of the divide we see in the debate on Mastodon relates to the fact that some of the people arguing against it have not used LLMs to assist in writing Very Good Software At Scale using the methodologies available today.

                                          I ship software to ~6 million monthly active users, with confidence, using Claude Code. I haven't written code by hand in ~10 months.

                                          So, to the "We do not know the answer to this yet.", I think that we do.

                                          We know that LLMs, used naively, make mistakes. And a craftsperson who knows the limitations of their tools (LLMs) can mitigate and verify in a number of ways.

                                          Concession: LLMs were not ethically trained. Data centers are having an awful impact on the energy grid and water use. I will never begrudge a person's choice to boycott.

                                          Counterpoint: today, LLMs running on e.g. Apple Silicon approach the performance of the SOTA models. We're gonna see more of this, which will mitigate the individual's environmental impact as well as the need to pay forever-rent to big tech.
