
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit

Uncategorized · 75 Posts · 38 Posters
  • jenniferplusplus@hachyderm.io (original poster) wrote:

    There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit.

    Only rarely do their claims actually bear scrutiny, and those are only the mildest of the claims they make.

    So, Anthropic is claiming that their new, secret, unreleased model is hyper-competent at finding computer security vulnerabilities, and they're *too scared* to release it into the wild.

    Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software.

    You may now resume doom scrolling. Thank you.

    theeclecticdyslexic@mstdn.social wrote (#30):

    @jenniferplusplus The thing that interests me the most about this is what specifically happened with Greg KH, in that one article where he claimed it found 40 real vulnerabilities in a report containing 60.

    I am willing to bet it isn't as simple as is presented. If it is, then I want proof that they aren't targeting special attention at certain users. I think you could do a lot, auditing the kernel and waiting for Greg to ask. Especially if some devs are making contributions aided by claude...

    • In reply to jenniferplusplus@hachyderm.io's OP:

      fancysandwiches@neuromatch.social wrote (#31):

      @jenniferplusplus OpenAI made similar claims about their model being so good it was dangerous and they weren't going to release it. In 2019. https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/

      • chrisp@cyberplace.social wrote:

        @jenniferplusplus "Our new model is too dangerous for the public, we couldn't possibly release it! Anyway, you can subscribe to it for $150 a month."

        mxey@hachyderm.io wrote (#32):

        @chrisp no, you cannot subscribe to it because it is NOT released yet.

        • mirth@mastodon.sdf.org wrote:

          @budududuroiu @jenniferplusplus I wouldn't give Anthropic's motives a lot of credit here but LLMs do make bug hunting much easier.

          jedimb@mastodon.gamedev.place wrote (#33):

          @mirth @budududuroiu @jenniferplusplus Tell that to all the open source repo maintainers who get spammed with fake, nonsensical bug reports generated by AI?

          • In reply to jedimb@mastodon.gamedev.place (#33):

            budududuroiu@hachyderm.io wrote (#34):

            @jedimb They can... close submissions? Many projects already have. It's like a 2 second change.

            @mirth @jenniferplusplus

            • In reply to budududuroiu@hachyderm.io (#34):

              jedimb@mastodon.gamedev.place wrote (#35):

              @budududuroiu @mirth @jenniferplusplus Making bug fixing more difficult because legitimate reports get blocked alongside the noise.

              • In reply to jedimb@mastodon.gamedev.place (#35):

                budududuroiu@hachyderm.io wrote (#36):

                @jedimb and the alternative is?

                @mirth @jenniferplusplus

                • In reply to jenniferplusplus@hachyderm.io's OP:

                  pilchard@ravenation.club wrote (#37):

                  @jenniferplusplus Big AI is making all AI look bad.

                  • In reply to budududuroiu@hachyderm.io (#36):

                    jedimb@mastodon.gamedev.place wrote (#38):

                    @budududuroiu @mirth @jenniferplusplus What we had just a few years ago.

                    • In reply to jedimb@mastodon.gamedev.place (#38):

                      budududuroiu@hachyderm.io wrote (#39):

                      @jedimb yeah well that ship has sailed long ago.

                      @mirth @jenniferplusplus

                      • In reply to budududuroiu@hachyderm.io (#39):

                        jedimb@mastodon.gamedev.place wrote (#40):

                        @budududuroiu @mirth @jenniferplusplus "The plague is here. Let's just live with it" does seem to be a recurring sentiment, but it doesn't change that it's a plague.

                        • In reply to jedimb@mastodon.gamedev.place (#40):

                          budududuroiu@hachyderm.io wrote (#41):

                          @jedimb Norms are downstream from power. The current power balance is shifted towards frontier labs and hyperscalers, so norms around personal computing (RAM prices) and open source software (AI slop floods) are dictated by them.

                          Moralising about AI use with no power to back it up is useless; gatekeeping is power, because it says "if you want to contribute to this project, abide by our rules".

                          Linked: "The case for gatekeeping, or: why medieval guilds had it figured out" by Westenberg (www.joanwestenberg.com):

                          "Every open source maintainer I've talked to in the last six months has the same complaint: the absolute flood of mass-produced, AI-generated, mass-submitted slop requests has turned their repositories into a slush pile. The contributions look like contributions: they have commit messages, they reference issues, and they follow templates."

                          @mirth @jenniferplusplus

                          • budududuroiu@hachyderm.io wrote:

                            @dngrs Well, you're partly correct, partly wrong. Yes, pretrained transformers are, like all generative models, definitionally modelling a joint probability distribution, and autoregressively generating from that joint probability distribution.

                            Those are the models you're referring to as autocomplete tools, which is why you had to use `[MASK]` with early transformers like BERT to get them to complete the "most probable token".

                            Regardless, it doesn't matter what Anthropic did: if it allows for a massive reduction in the cost of finding zero-days, it's a problem. It doesn't have to be revolutionary; it doesn't have to be superintelligence, AGI, or whatever other woo-hoo flashy marketing terms. If a reduction in the cost of computing protein folding happens, e.g. the OpenFold implementation of AlphaFold, that wouldn't be revolutionary, but it would still be dangerous, since you now potentially have lone actors being able to make prions at home (I'm using this as an extreme, but plausible, case).

                            @jenniferplusplus
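[Editor's note] The factorization the post above alludes to — a generative model defines a joint distribution P(w1..wn), and an autoregressive decoder generates by repeatedly sampling from the conditionals P(wi | w1..wi-1) — can be sketched with a toy bigram model. This is an illustrative stand-in only, not any real LLM; the corpus and function names are invented for the example:

```python
# Toy sketch: autoregressive generation as repeated selection from
# conditional distributions, per the chain rule
#   P(w1..wn) = prod_i P(wi | w1..w{i-1}).
from collections import defaultdict

corpus = "the model finds the bug the model fixes the bug".split()

# Estimate a bigram "language model": counts approximating P(next | prev).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_probable_next(prev):
    """Greedy decoding: argmax of the conditional P(next | prev)."""
    followers = counts[prev]
    return max(followers, key=followers.get)

def generate(start, length):
    """Autoregressive loop: each new token is conditioned on the prefix
    (here only the previous token, since the model is a bigram)."""
    out = [start]
    for _ in range(length):
        out.append(most_probable_next(out[-1]))
    return " ".join(out)

print(generate("the", 3))  # greedy continuation of "the"
```

A masked model like BERT differs only in which conditional it exposes: it predicts a token given context on *both* sides of a `[MASK]` slot, rather than given only the left prefix.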

                            dngrs@chaos.social wrote (#42):

                            @budududuroiu @jenniferplusplus it's funny you bring up AlphaFold because that also has been way overhyped, according to people working in the field (I don't have links to individual statements anymore sadly, been a few years but the Wikipedia page also mentions e.g. AF not really understanding folding). Anyway: as long as there is no concrete data regarding severe CVE increase with a causal link to newer LLMs (which again are still LLMs that do not understand facts) I'll keep holding my breath.

                            • In reply to budududuroiu@hachyderm.io (#41):

                              jedimb@mastodon.gamedev.place wrote (#43):

                              @budududuroiu @mirth @jenniferplusplus Goal post moved into a different dimension, I see.

                              • In reply to dngrs@chaos.social (#42):

                                budududuroiu@hachyderm.io wrote (#44):

                                @dngrs @jenniferplusplus I'm sorry, I know thinking conceptually isn't easy for everyone, I tried using AlphaFold because some people have an easier time when presented with examples.

                                Why would there be an increase in CVEs? If I was an actor with nation-state levels of access to compute, why would I waste all that compute on zero days, only to then publish CVEs about them?

                                Even the most AI-skeptic maintainers are starting to admit that LLMs are getting good at finding bugs. I understand cynicism is seen as cool nowadays, but I think it's intellectually lazy.

                                Linked: daniel:// stenberg:// (@bagder@mastodon.social) on Mastodon (mastodon.social):

                                "I ran a quick git log grep just now. Over the last ~6 months or so, we have fixed over 200 bugs in #curl found with 'AI tools'."

                                • In reply to budududuroiu@hachyderm.io (#44):

                                  dngrs@chaos.social wrote (#45):

                                  @budududuroiu holy condescension Batman lol, no thank you

                                  • In reply to jenniferplusplus@hachyderm.io's OP:

                                    claudius@darmstadt.social wrote (#46):

                                    @jenniferplusplus 37th time's the charm! This time *for real*.

                                    • In reply to jenniferplusplus@hachyderm.io's OP:

                                      doggo@plush.city wrote (#47):

                                      @jenniferplusplus The issue is that big enough corpos don't care about code quality anymore, and they don't care about vulnerabilities sitting there for months (sometimes years), or about leaks. Nobody cares about these anymore; they want fast results, to sell quick and move on.

                                      • In reply to fancysandwiches@neuromatch.social (#31):

                                        jenniferplusplus@hachyderm.io wrote (#48):

                                        @fancysandwiches oh wow, a headline that describes these things as text generators.

                                        How far we've fallen

                                        • In reply to budududuroiu@hachyderm.io (#44):

                                          jenniferplusplus@hachyderm.io wrote (#49):

                                          @budududuroiu @dngrs you may as well stop, you're not going to convince me to trust them. Only anthropic can do that, because they have truly earned my distrust.
