Quality, Velocity, Open Contribution — pick two.

• meowray@hachyderm.io (#1)

  Quality, Velocity, Open Contribution — pick two. If you try for all three, you get none — the maintainers burn out, the project becomes unsustainable.
  Lua and SQLite picked quality, and dropped both velocity and open contribution.
  When your project is mature enough, you can afford to.
  For a project like LLVM, open contribution is not optional — so you're really choosing between quality and velocity.
  LLM-aided development dramatically increases contribution volume without increasing reviewer capacity.
  LLM-aided review may help at the margins — catching mechanical issues, summarizing patches — but the core bottleneck is human judgment.

• chandlerc@hachyderm.io (#2)

  @meowray FWIW, strongly disagree here.

  I think it is entirely possible to have quality, velocity, and open contribution.

  I'm not saying there isn't a tradeoff, but I think the above three can be preserved sufficiently.

  For example, in LLVM, I think the bigger challenge than quality is that people view "contribution" as _much_ more about "sending a patch" and not "reviewing a patch". As a consequence, the project has lost community and cultural prioritization of code review as an active and necessary part of contribution.

  Also, "open contribution" doesn't mean you _have_ to accept contributions. I think a project can still have meaningfully open contribution while insisting contributors balance their contributions between patches and review, and where contributions that are extractive are rejected until the contributor figures out how to make them constructive.

  IMO, criteria for sustaining both quality & velocity in OSS:
  - Strong expectation of _total_ community code review in balance to _total_ new patches -- this means that long-time contributors (maintainers) must do _more_ review than new patches.
  - Strong expectation of patches from new contributors rapidly rising to the quality bar where they are efficient to review and non-extractive.
  - Strong testing culture that ensures a large fraction of quality is mechanically ensured.
  - Excellent infrastructure use to provide efficient review and CI so tests are effective.

  I think LLVM struggles with the first and last of these. The last is improving recently though!
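
One rough way to put a number on the "balanced contributions" expectation above (not something the post prescribes) is to compare how many PRs a person authored versus reviewed over a window, e.g. via GitHub's PR search. A minimal sketch follows; the repo name is real, but the handle, date window, and the use of search counts as a proxy for review effort are illustrative assumptions.

```python
"""Sketch: compare PRs authored vs. PRs reviewed for one contributor.
Search counts only approximate real review effort; the handle and the
window below are placeholders."""
import json
import urllib.parse
import urllib.request

REPO = "llvm/llvm-project"
USER = "some-contributor"   # placeholder handle
SINCE = "2025-01-01"        # placeholder window start


def search_count(query: str) -> int:
    # GitHub's issue/PR search endpoint; total_count is the number of matches.
    url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total_count"]


authored = search_count(f"repo:{REPO} type:pr author:{USER} created:>={SINCE}")
reviewed = search_count(f"repo:{REPO} type:pr reviewed-by:{USER} created:>={SINCE}")

print(f"{USER}: {authored} PRs authored, {reviewed} PRs reviewed")
if authored and reviewed < authored:
    print("reviewing less than contributing -- the balance argued for above is off")
```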

• shafik@hachyderm.io (#3)

  @chandlerc @meowray

  You may be technically correct here, but the reality is that without sustained long term commitment from large organizations, maintaining all three is very challenging.

  It also means many long time contributors w/ the depth of knowledge are withdrawn from the community w/o sufficient replacements being made.

  We know many areas have suffered greatly b/c of this, both recently and long ago. Recently, many long term open source contributors have been sucked up out of the community due to this, creating an even larger void.

• boomanaiden154@hachyderm.io (#4)

  @chandlerc @meowray

  I think in addition to code review not being prioritized by the community as much as it should be (which seems to happen for a variety of reasons), LLVM also isn’t as good as it could be at turning contributors into maintainers interested in doing review. I don’t have numbers, but it seems like LLVM gets a lot of drive-by patches where people might land a couple and then disappear. Even through structured programs like GSoC, it seems like only a couple participants in each cohort go on to stay involved in the project. A lack of people walking the path to maintainership, I think, leaves existing maintainers significantly more burdened.

  I don’t have a good idea of why we as a community aren’t better at it. I think part of it might be that people don’t feel empowered to review code. I wonder if a more structured review/code owner system might help with that. I also think part of it is that developing the expertise to properly review patches for a project like LLVM is an immense effort. I’ve recently been trying to pick up the number of reviews I’m doing to try and help with the review bandwidth problem, and I think the one main thing I’ve learned is how little I understand huge swaths of the main code base I’ve been working on for the past three years.

• pervognsen@mastodon.social (#5)

  @chandlerc @meowray The review culture thing also came up in https://www.npopov.com/2026/01/11/LLVM-The-bad-parts.html.

• boomanaiden154@hachyderm.io (#6)

  @shafik @chandlerc @meowray

  Yep. The incentive structures here are not naturally aligned to building a healthy open source community. Doing reviews for patches that might not provide immediate (or any) benefit is time spent that is harder to justify than time spent working on immediately useful features. But it is immensely important to the health of the community, which is the reason any of this can happen in the first place.

  Some places seem to realize this and others do not. I hope more come around as time goes on.

• pinskia@hachyderm.io (#7)

  @shafik @chandlerc @meowray This is exactly what happened in gcc around 10 years ago. If anything, learning from gcc's mishaps with respect to reviewers and maintainers of areas is definitely worth looking into. Many folks from gcc moved on and were never backfilled. Things are finally getting backfilled, but it will take another 2-3 years to get back to the time when gcc had more than 2-3 main people reviewing patches, especially considering those 2-3 people are over halfway into their careers.

  The one thing which will help is training replacements early on: replacements both for reviewers and for code which is not getting "some love".

  For gcc I have started to push more to remove code that is just not working and not maintained, as one way of solving part of this issue. The other is getting folks internally to do more review of patches. Even just saying "this seems correct" is always a good thing.

• shafik@hachyderm.io (#8)

  @boomanaiden154 @chandlerc @meowray

  I really do try and spend a lot of time on code review b/c it pays off a lot long term in the health of the project.

  It is good b/c you start to see that after a while the comments you make don't fall on deaf ears and folks do listen and do get better.

• meowray@hachyderm.io (#9)

  @boomanaiden154 @shafik @chandlerc Yeah, Chandler is describing what *should* happen; these comments describe what *does* happen due to human nature (generative work is more rewarding than evaluative work), institutional incentives (companies fund engineers to land features, not review other companies' patches), and the asymmetry of LLM-aided development (agents can generate patches but cannot replace the trust and judgment required for review).
  The trilemma describes where we end up if we don't actively fight it.

• chandlerc@hachyderm.io (#10)

  @shafik @meowray

  I still think the key is the cultural prioritization of balanced contributions between new patches and review.

  But I completely agree that sustaining that cultural prioritization without _significant_ funding, so that people are actually paid for these balanced activities, is almost impossible for large projects. And this kind of funding means either sustained long-term commitment from large corps, or something like Igalia, Linaro, RedHat, or a dedicated foundation.

  However, I think LLVM _has_ this kind of long-term large-org commitment. But it has struggled to channel that commitment toward some of the code review and maintenance needs of the project. We're actually making good progress on changing this within G, although still lots to do. But I want to point out that this won't work with one or a few organizations -- the entire community needs to embrace the culture shift, _and push large orgs to uphold it_, for it to succeed.

• chandlerc@hachyderm.io (#11)

  @boomanaiden154 @shafik @meowray

  FWIW, I actually don't think the incentives are that hard to align here... I think the problem is that we've gotten burned by a tempting but implausible fiction of prioritizing patches over review / maintenance / building community. I'm being completely genuine when I say this is tempting, and incredibly hard to resist. But I think that culture is the dominant factor here, and with the right culture, the community can establish and uphold the necessary incentives.

• chandlerc@hachyderm.io (#12)

  @shafik @boomanaiden154 @meowray

  I think the key thing I would emphasize is to attack the meta-problem rather than your personal ratio here -- how do we create a culture and community that _consistently_ pushes for a healthy and sustainable balance? That's where I think individuals can make the most difference.

• chandlerc@hachyderm.io (#13)

  @meowray @boomanaiden154 @shafik

  I'm not just describing something that is purely hypothetical or theoretical. There are open source projects that are (IMO) fairly effective at striving towards this. For all of its failings, the Linux Kernel IMO does a decent job of this specific thing. I think both Rust and GCC are doing a bit better than LLVM with this specific aspect recently, although both still struggle somewhat. I think various parts of Go (but maybe not all of it) do pretty well here, etc.

  I also think a (much) bigger challenge than institutional incentives is the culture established by the leaders in the community: do they prioritize this? Do they do so in a way that shows empathy, and makes people _want_ to join them in the effort? Do they build buy-in with others in the community so that the _entire_ culture shifts in this direction?

• chandlerc@hachyderm.io (#14)

  @boomanaiden154 @meowray

  > LLVM also isn’t as good as it could be at turning contributors into maintainers interested in doing review

  I strongly agree FWIW.

  I think there are a lot of different factors here, but one I want to highlight: do folks in the community work to recognize, reward, and make it desirable to do reviews? Do they support the reviewers and create a culture that doesn't just indicate "you need to" but causes people to _want_ to participate in code review? To, literally, find joy and fun in it?

  Speaking just for myself, this (in various forms, and combined with depression and other mental health issues) is what burned me out, and caused me to pull back from the LLVM community many years ago.

  We've been trying to find a way to healthfully sustain this kind of review culture in Carbon and are having some good success. And as I'm doing better on the mental health front in the last year, I'm also trying to come back to being a bit more involved in LLVM. In large part, my goal and what I want is to figure out how to help nudge the culture in this direction.

• girgias@phpc.social (#15)

  @chandlerc @boomanaiden154 @meowray I am honestly wondering what you're doing to improve the culture. Although PHP has gotten somewhat better recently at having more people review PRs and code, it did feel like 2y ago I was the "default" reviewer for anything not touching the VM or optimizer. And I don't really know if we did anything to encourage this (other than the foundation hiring a few more people).

• boomanaiden154@hachyderm.io (#16)

  @chandlerc @meowray @shafik

  I think our leaders do a good job of this. Most of our lead maintainers review much more code than they commit themselves. I've also found them to be reasonably encouraging towards new reviewers, and they care deeply about the experience for existing reviewers. I for one have started trying to do more reviews because of a call from a lead maintainer.

  I'm not sure one can say that they have been effective at shifting an entire project's culture towards doing a sufficient number of reviews, given that it is not happening. But they certainly seem to be putting quite a bit of effort into trying.

• boomanaiden154@hachyderm.io (#17)

  @AaronBallman @meowray @chandlerc

  It's sad to hear that AI usage is having such an impact on reviewers. Ideally the AI policy lets reviewers push back on that and label contributions as extractive, but I think there are many AI contributions that take much more time to review while not technically being "extractive".

  Echoing other reviewers' sentiments, my experience reviewing PRs produced with AI assistance has been frustrating. There are usually many more rounds of review, and you can't trust that the contributor actually thought about all of the changes that they made.

  I still think AI can be useful in some limited contexts. But only people with a deep knowledge of the code base are going to be effective at using it for individual PRs, and they are generally not the people actually using it.

  I don't know if there's a good solution to this yet. I think tools like vouch are interesting, but that feels like too big of a compromise on fostering a welcoming community for LLVM specifically.

• boomanaiden154@hachyderm.io (#18)

  @chandlerc @meowray

  Also, to your last point about LLVM struggling a bit with effective infrastructure use, it sounds like you have some specific ideas in mind that would be high value?

• chandlerc@hachyderm.io (#19)

  @boomanaiden154 @meowray

  PRs, and really leveraging all the tools GitHub gives you for PRs, are another really useful bit of infrastructure.

  I think the new test setup on GitHub is already a _huge_ step here -- you need reliable CI that actually matches the expected support surface.

  Use tools like pre-commit to have CI-style checking of lots of "non-test" things (formatting, etc.), and do this not as a one-off (the current formatting bots) but as a systematic thing that can easily be extended again and again to automate every step possible.

  Expand testing and CI to cover much more integration testing and system testing so that more things can be caught early and automatically. FWIW, systems like Bazel really pay dividends here... =/

  And once you have the CI automating as much as you can, tie it into a merge queue with no direct commit.
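
As a minimal sketch of the kind of systematic, pre-commit-style "non-test" checking described above (not LLVM's actual setup): a CI step that fails if clang-format would change any C++ file touched relative to the target branch. The base ref and file globs are assumptions, and it presumes a clang-format new enough to support --dry-run/--Werror.

```python
"""Sketch of a CI formatting gate: fail if clang-format would modify any
changed C++ file. BASE_REF and the file globs are placeholders."""
import subprocess
import sys

BASE_REF = "origin/main"  # assumption: PRs are checked against main


def changed_cpp_files() -> list[str]:
    # Files touched relative to BASE_REF, excluding deletions, C++ sources only.
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=d", BASE_REF, "--",
         "*.cpp", "*.h"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [f for f in out.splitlines() if f]


def main() -> int:
    files = changed_cpp_files()
    if not files:
        return 0
    # --dry-run --Werror reports violations without rewriting files, so the
    # job fails and the change never reaches the merge queue unformatted.
    return subprocess.run(["clang-format", "--dry-run", "--Werror", *files]).returncode


if __name__ == "__main__":
    sys.exit(main())
```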

• malwareminigun@infosec.exchange (#20)

  @boomanaiden154 @AaronBallman @meowray @chandlerc AI-written code reviews, where a human has clearly pasted the replies into their chat bot, which has summarily ignored what we asked for and just updated the review with what the bot did, are getting *old* very fast.

  The number of people who don't know what they're doing, blindly trusting the LLM and trying to make sorting it out the maintainers' problem, is growing; and I'm seeing that in vcpkg, where most of our contributors aren't expected to be experts. I would expect things to be worse in something complex like a compiler, though at least LLVM has a robust test infrastructure, including researchy projects like Alive.
