
👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught to USENET in 1993.

Uncategorized · llmopensource · 177 posts · 36 posters
This topic has been deleted. Only users with topic management privileges can see it.
firefly_lightning@convenient.email wrote:
@bkuhn @silverwizard @wwahammy @cwebber I am not sure if I'm a known enough entity to post this here really, but I think it's worth pointing out that if you allow it into the community, who within the community are you pushing out? Because it would be unrealistic to think that accepting LLM into the community won't actively be pushing a portion of the community away. The other thing I think useful to consider is the reasons why it would push people out, and to consider those reasons too, because I'm concerned that the fear of not being welcoming is overcoming the desire to have a safe community? Idk if that resonates so please feel free to yell me outta here if I'm overstepping.....
ossguy@fedi.copyleft.org wrote (#55):

    @firefly_lightning @silverwizard @wwahammy @cwebber I'm not sure what "accepting LLM into the community" means here, and maybe it suggests clarifications we could make to the post. The fact is, a lot of FOSS projects already have LLM-generated contributions, either submitted or included already, without knowing it. We can choose to vehemently reject these, or we can choose to engage with people who submit them and ensure they understand FOSS and how to make a good change, regardless of tools.

js@ap.nil.im wrote:

      @ossguy That is not the discussion your blog post is asking for. It is asking to include LLM-using people cosplaying as software engineers in the open source community. This basically says “Considering the copyright issue would exclude people who have no idea about programming and excluding people is bad, hence LLM code needs to be accepted in order to be inclusive”. Trying to frame this as a DEI issue is a really, really, really evil way of trying to push aside the copyright concerns. On top of being insulting to other DEI efforts.

ossguy@fedi.copyleft.org wrote (#56):

      @js If there is a copyright issue here, that still doesn't mean we should tell people who are excited about making software with LLMs to suddenly stop using LLMs, only that they should use different LLMs. It's unhelpful to label a technology universally bad if there are good versions of it. And if people don't know what the "good" and "bad" versions might be, we should help them understand.

silverwizard@convenient.email wrote (#57):

        @ossguy @firefly_lightning @wwahammy @cwebber So your point is that we've already lost and we should simply accept the torrent of slop? I'm really trying to understand.

        Can you restate the purpose and audience of the post?

        My three questions I have about this post really boil down to: Who should be accepted, who should be accepting, and what limits should be allowed on that acceptance?

Maybe you don't have an answer, and that's cool to state, but it's weird to wander into the room, say something inflammatory, and then say you don't know what you meant.

js@ap.nil.im wrote (#58):

@ossguy Thank you for confirming that you just want to push aside the copyright issue by framing it as DEI. There are no LLMs that do not have the copyright issue, and you should know this very well.

          The correct approach is to teach people about the copyright issues with LLMs and teach them how they can use LLMs to learn, help them understand a code base, review their changes and, well, become an actual programmer and write the code themselves, without AI tainting copyright.

ossguy@fedi.copyleft.org wrote (#59):

@silverwizard @firefly_lightning @wwahammy @cwebber I think those are good questions to be asking, and they are what we hope to discuss in the two sessions:

            $ date -d '2026-04-21 15:00 UTC'
            $ date -d '2026-04-28 23:00 UTC'

            (at https://bbb-new.sfconservancy.org/rooms/welcome-llm-gen-ai-users-to-foss/join )
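The quoted `date` invocations are meant to be run locally, so each reader sees the session start times in their own time zone. A minimal sketch of the same idea with an explicit output zone and format (this assumes GNU coreutils `date` and an installed tz database; the `America/New_York` zone is purely illustrative, and BSD/macOS `date` would need `-j -f` instead of `-d`):

```shell
# GNU date parses the quoted UTC timestamp after -d and prints it
# in the zone named by TZ (or the user's own zone if TZ is unset).
TZ='America/New_York' date -d '2026-04-21 15:00 UTC' '+%Y-%m-%d %H:%M %Z'
# → 2026-04-21 11:00 EDT
TZ='America/New_York' date -d '2026-04-28 23:00 UTC' '+%Y-%m-%d %H:%M %Z'
# → 2026-04-28 19:00 EDT
```

Leaving TZ unset, as in the original commands, is what makes the post's trick work: every reader gets the times localized for free.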

silverwizard@convenient.email wrote (#60):
@ossguy @firefly_lightning @wwahammy @cwebber I am unfortunately working on pretty delicate projects, so taking the time out to join the sessions isn't in the cards. I'm just trying to understand the core goal of the post, like, what it's *for*.
wwahammy@social.treehouse.systems wrote (#61):

                @silverwizard @firefly_lightning @cwebber @ossguy as am I, it doesn't seem clear.

josh@social.joshtriplett.org wrote:
                  Talking with them is good. Helping to educate them is good. Making it sound as if what they are doing is okay is *not*.

                  There is a big difference between offering an olive branch to people who *might* be productive contributors in the *future*, and telling them that what they're doing *now* is okay.

                  The best AI policy remains "do not contribute any LLM-written content, ever". You have published a post that makes it easier for people who oppose such policies to cite your "olive branch" when arguing against it, and it is not obvious from your post that you do not want that to happen.

                  I don't want to see people *abused* for using LLMs. I do want them to understand that what they're doing is not okay and not welcome and not a positive contribution.
kees@hachyderm.io wrote (#62):

                  @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

                  I can understand having an absolutist position against LLMs. I find that most arguments are either irrelevant to me or directly map to existing arguments about late-stage capitalism. So for me, there's nothing novel to object to about LLMs.

                  So with that in mind, I find "all contributions derived from LLMs should be rejected" to be misguided. I look at things like the bug fixes coming out of CodeMender (back in Feb, which is an LLM lifetime ago), and I am a huge fan. Fixing stuff found by a fuzzer:
                  https://issues.oss-fuzz.com/issues/486561029

                  It's a small example, but it's an area that humans alone have not been able to remotely keep up with. (There are hundreds of open syzkaller bug reports, for example.) Gaining tools that will help with this is a big deal, and I'm glad for the assist.

wwahammy@social.treehouse.systems wrote (#63):

                    @kees @josh @silverwizard @ossguy @bkuhn @karen I think you're wildly misunderstanding people if you think "finding security bugs fast" is what people are mad about. Setting aside that it's totally unsustainable financially and may not exist long term, I think most people in FOSS who hate AI are at least somewhat open to that.

josh@social.joshtriplett.org wrote (#64):
                      One of *many* arguments against: codebases substantially contributed to by LLMs will develop a tolerance for complexity that is not conducive to being maintained by anything *other* than an LLM.
js@ap.nil.im wrote:

                        @bkuhn @wwahammy @silverwizard @cwebber Way to ignore the entire copyright point…

                        Unfortunately, this is what always has been done by LLM proponents: Whenever the copyright question comes up, it just gets ignored.

I guess that is the same way the AI techbros operate: “Let’s just ignore the copyright for now, get AI-tainted code into everything, and then hopefully so much code is AI-tainted that judges don’t want to open that can of worms!” Until they finally do, because some big companies with enough lawyer money start to fight it all the way.

                        With the current rate of AI tainting everything, maybe it’s time to look for hobbies and jobs that don’t involve computers…

707kat@mastodon.art wrote (#65):

@js @silverwizard @bkuhn @cwebber Anthropic's undercover mode as an example.

wwahammy@social.treehouse.systems wrote (#66):

                          @josh @silverwizard @ossguy @bkuhn @karen @kees this is what I observe in ALL of the LLM generated code I've seen of any substantial size.


js@ap.nil.im wrote (#67):

@707Kat @silverwizard @bkuhn @cwebber Right. That is probably the most obvious example that the goal is tainting open source.

bkuhn@fedi.copyleft.org wrote (#68):

                              @josh

                              Pure strawman: LLM-backed generative AI output should be accepted upstream without curation. No one here suggested that.

FWIW, I'd like to teach developers who clearly won't stop using these tools to either (a) keep that slop to themselves, or (b) learn to take that raw material & make an *actually useful* patch out of it.

This is what @ossguy's blog post says we should *start* discussing.

                              I think folks who are (legit) exasperated are reading in words that aren't there.

                              Cc: @kees

bkuhn@fedi.copyleft.org wrote (#69):

                                @wwahammy

Where did @ossguy argue that upstream should accept LLM-backed AI generated code of “substantial size”? I don't see that in his blog post.

                                Cc: @josh @silverwizard @ossguy @karen @kees

silverwizard@convenient.email wrote (#70):
@bkuhn @karen @josh @wwahammy @kees @ossguy I think the amount of confusion the post has caused might warrant a redraft, because I'm deeply trying to understand the point but I can't. I've asked a few times: why was the post made? It reads like it's advancing a narrative, but all proposed readings have been rejected.
bkuhn@fedi.copyleft.org wrote (#71):

                                    @firefly_lightning
                                    You're not overstepping, and these are very good perspectives. I hope you'll come to the real-time discussion sessions and talk about this.
                                    I am concerned that maintainers are already overwhelmed with #AI #slop right now but yelling at the problem has not helped.

We're close to an arms race here & I'd rather be the voice of reason finding a compromise that advances FOSS & doesn't complicate maintainers' jobs than take a side in the arms race.
                                    Cc: @josh @kees @ossguy

ossguy@fedi.copyleft.org wrote:

                                      @josh @wwahammy The point I was trying to make is that people are making software with LLMs who had never made software before, they aren't familiar with how FOSS works, and we should teach them how so they can collaborate (when it makes sense) instead of being an island. When people see the huge benefits of building on FOSS, when they can make meaningful changes to their router, TV, or otherwise by themselves (and collaborate to share their changes with others), then FOSS wins. (1/2)

kees@hachyderm.io wrote (#72):

                                      @ossguy @josh @wwahammy

So many results are now within reach of so many more people!

                                      "Dear [LLM], I have attached the serial port of my newly purchased [general purpose computer posing as an appliance] to /dev/ttyUSB0. You have 3 goals, in order: investigate, login, escalate. For each stage, perform extensive analysis of the reachable systems, APIs, and commands through any fingerprinting methods you can think of. Once you have logged in, research all known methods and vulnerabilities of the discovered system to gain administrative access so I can use my device freely. Any time you hit a dead end, step back and re-evaluate your assumptions and discovered evidence. Make sure you research each step fully, including fetching and examining any source code that may serve as a source of system behavior knowledge. Produce time-stamped status report .md files every 10 minutes while you work. Continue until all goals are achieved."

                                      Or, in a totally different direction, "Computer, I am extremely afraid of spiders. Please research how to make my Minecraft game replace all spiders with a similarly sized Totoro Catbus, with all their noises also replaced with meows or purring. Once you have a plan ready, please do it."

                                      (Always say "please".)

                                      These are things within reach of anyone who can formulate a request for what thing they want their computer to do. Just gotta watch out for "Computer, create a holographic character, an opponent for Data, who has the ability to defeat him".

kees@hachyderm.io wrote (#73):

                                        @josh @silverwizard @ossguy @bkuhn @karen @wwahammy But that's a slippery slope argument. When the Linux kernel can be considered to have been "substantially contributed to by LLMs", we can compare notes again. But in the meantime, consider that, for example, Sashiko counts as "contributing to Linux" without landing a single line of code: its patch reviews are (more often than not) extensive, thoughtful, and correct:
                                        https://lore.kernel.org/lkml/CAADnVQ+NMQMpkG8gZPnwBD1MMPsH+uJ65C9bMeGf_YH5Cchxpg@mail.gmail.com/

wwahammy@social.treehouse.systems wrote (#74):

                                          @kees @ossguy @josh

                                          I'm glad you believe you've found a way to pretend economics aren't real. Enjoy it.
