
There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit

Uncategorized · 75 Posts · 38 Posters
• budududuroiu@hachyderm.io wrote:

    @jedimb norms are downstream from power. The current power balance is shifted towards frontier labs and hyperscalers; norms around personal computing (RAM prices) and open source software (AI slop floods) are dictated by them.

    Moralising about AI use with no power to back it up is useless. Gatekeeping is power, because it says: "Want to contribute to this project? Abide by our rules."

    Link: The case for gatekeeping, or: why medieval guilds had it figured out (Westenberg, www.joanwestenberg.com)

    "Every open source maintainer I've talked to in the last six months has the same complaint: the absolute flood of mass-produced, AI-generated, mass-submitted slop requests has turned their repositories into a slush pile. The contributions look like contributions: they have commit messages, they reference issues, they follow templates, etc."

    @mirth @jenniferplusplus

    jedimb@mastodon.gamedev.place wrote (#43):

    @budududuroiu @mirth @jenniferplusplus Goal post moved into a different dimension, I see.

• dngrs@chaos.social wrote:

      @budududuroiu @jenniferplusplus it's funny you bring up AlphaFold, because that has also been way overhyped, according to people working in the field (I don't have links to individual statements anymore sadly, it's been a few years, but the Wikipedia page also mentions e.g. AF not really understanding folding). Anyway: as long as there is no concrete data regarding a severe CVE increase with a causal link to newer LLMs (which, again, are still LLMs that do not understand facts), I'll keep holding my breath.

      budududuroiu@hachyderm.io wrote (#44):

      @dngrs @jenniferplusplus I'm sorry, I know thinking conceptually isn't easy for everyone; I tried using AlphaFold because some people have an easier time when presented with examples.

      Why would there be an increase in CVEs? If I were an actor with nation-state levels of access to compute, why would I waste all that compute on zero-days, only to then publish CVEs about them?

      Even the most AI-skeptic maintainers are starting to admit that LLMs are getting good at finding bugs. I understand cynicism is seen as cool nowadays, but I think it's intellectually lazy.

      daniel:// stenberg:// (@bagder@mastodon.social): "I ran a quick git log grep just now. Over the last ~6 months or so, we have fixed over 200 bugs in #curl found with 'AI tools'." (mastodon.social)
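
      For readers curious what such a tally looks like, here is a minimal sketch of the kind of count a "git log grep" like Stenberg's could produce. It is not his actual command: the time window, the crediting conventions, and the search patterns are all assumptions.

          # Minimal sketch (not bagder's actual command): count recent commits
          # whose messages credit an AI tool. The patterns are hypothetical;
          # every project has its own crediting conventions.
          import subprocess

          def count_ai_credited_fixes(repo_path: str, months: int = 6) -> int:
              patterns = ["AI tool", "ai-assisted"]  # assumed conventions
              commits: set[str] = set()
              for pattern in patterns:
                  result = subprocess.run(
                      ["git", "-C", repo_path, "log",
                       f"--since={months} months ago",
                       f"--grep={pattern}", "-i", "--format=%H"],
                      capture_output=True, text=True, check=True,
                  )
                  commits.update(result.stdout.split())
              return len(commits)  # de-duplicated across patterns

          if __name__ == "__main__":
              print(count_ai_credited_fixes("."))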

• In reply to budududuroiu@hachyderm.io (#44, quoted above):

        dngrs@chaos.social wrote (#45):

        @budududuroiu holy condescension Batman lol, no thank you

• jenniferplusplus@hachyderm.io wrote:

          There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit.

          Only rarely do their claims actually bear scrutiny, and those are only the mildest of the claims they make.

          So, anthropic is claiming that their new, secret, unreleased model is hyper-competent at finding computer security vulnerabilities, and they're *too scared* to release it into the wild.

          Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software.

          You may now resume doom scrolling. Thank you.

          claudius@darmstadt.social wrote (#46):

          @jenniferplusplus 37th time's the charm! This time *for real*.

• In reply to jenniferplusplus@hachyderm.io's original post (quoted above):

            doggo@plush.city wrote (#47):

            @jenniferplusplus The issue is that big enough corpos don't care about code quality anymore, and they don't care about vulnerabilities sitting there for months (sometimes years), or about leaks. Nobody cares about these anymore; they want results fast, to sell quick and move on.

• fancysandwiches@neuromatch.social wrote:

              @jenniferplusplus OpenAI made similar claims about their model being so good it was dangerous and they weren't going to release it. In 2019. https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/

              jenniferplusplus@hachyderm.io wrote (#48):

              @fancysandwiches oh wow, a headline that describes these things as text generators.

              How far we've fallen

• In reply to budududuroiu@hachyderm.io (#44, quoted above):

                jenniferplusplus@hachyderm.io wrote (#49):

                @budududuroiu @dngrs you may as well stop; you're not going to convince me to trust them. Only anthropic can do that, because they have truly earned my distrust.

• In reply to jenniferplusplus@hachyderm.io's original post (quoted above):

                  alanxoc3@tilde.zone wrote (#50):

                  @jenniferplusplus Agreed that it is mostly for marketing & investors.

                  But the article was technical enough that I think there is an improvement here that no other model has. And if true, it would be great for vulnerability scanning/hardening in general (though it's bad that attackers would have access to it).

• In reply to jenniferplusplus@hachyderm.io's original post (quoted above):

                    sempf@infosec.exchange wrote (#51):

                    @jenniferplusplus Worth a follow for that post alone. Hi, I'm Bill. 👋🏻

• In reply to jenniferplusplus@hachyderm.io's original post (quoted above):

                      lediva@lediva.masto.host wrote (#52):

                      @jenniferplusplus "our magic machine found a 30 year old security vulnerability!"

                      OK, what's the CVE link? These companies never show proof besides saying "it totally did the thing, you guyzzz plz giv moar billionz"

• budududuroiu@hachyderm.io wrote:

                        @jenniferplusplus I seriously doubt this is smoke and mirrors; recent models have improved significantly for cybersec, and the industry is noticing:

                        daniel:// stenberg:// (@bagder@mastodon.social): "The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense." (mastodon.social)

                        Link: Linux kernel czar says AI bug reports aren't slop anymore. "Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away" (www.theregister.com)

                        The industry consensus seems to be that there's going to be a torrent of vulnerabilities found in all sorts of software, and they're not prepared to handle the blast radius. It's not surprising that Anthropic wants to give a select few a head start to tackle them. It would be nice if their token fund were open for all OSS projects to apply.

                        I'm also pressing "X doubt" on the idea that you spend months coordinating between AWS, Apple, Microsoft, Google, and the Linux Foundation to organise this just because your tool's code leaked online.

                        sempf@infosec.exchange wrote (#53):

                        @budududuroiu @jenniferplusplus Let's talk about JavaScript. Have you ever looked at your browser's developer console? On any major website on the planet, there are 8 trillion errors in it. Two-thirds of them are vulnerabilities, but none of them are exploitable or matter for anything at all. That is what is being found.

                        Those are the kinds of errors I've been reviewing, and all the ones Daniel's been reviewing too, and I'm seeing it over and over: "Yes, okay, technically that is a buffer overrun, but it doesn't matter because you can't ever get to it!"
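
                        A tiny illustration of that pattern, in hypothetical code (not from any real report): the flagged weakness below is technically present, but it sits on a branch no caller can reach, so the finding has no security impact.

                            # Hypothetical example of a "true but unexploitable" finding.
                            import hashlib

                            LEGACY_MODE = False  # constant in this imagined codebase; never True

                            def fingerprint(data: bytes) -> str:
                                if LEGACY_MODE:
                                    # A scanner can truthfully flag weak-hash use here (MD5)...
                                    return hashlib.md5(data).hexdigest()
                                # ...but execution only ever reaches this branch, so the
                                # report, while technically correct, doesn't matter in practice.
                                return hashlib.sha256(data).hexdigest()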

• jedbrown@hachyderm.io wrote:

                          @jenniferplusplus It's also important that, to whatever extent this product actually works (I'm as skeptical as you are), it fundamentally favours the attacker. The product has way too many false positives to run in CI, so the defender can only use it as part of an occasional audit. The attacker doesn't care about CI or development friction, and wins by finding one exploit in an entire stack, even if they have to wade through many false positives to find it.

                          mirth@mastodon.sdf.org wrote (#54):

                          @jedbrown @jenniferplusplus The asymmetry is the core thing that concerns me. I can say that, empirically, starting somewhere last year, LLM-assisted bug hunting started to be effective. The false positives are avoidable, but the cost of remediation has not gone down with the cost of exploits. This new model may make the situation worse, but we're already in it.
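
                          To make the asymmetry concrete, here is a toy back-of-envelope model; every number in it is invented purely for illustration.

                              # Toy model of the attacker/defender asymmetry (all numbers invented).
                              findings_per_scan = 500
                              true_positive_rate = 0.02        # assumed: 1 in 50 findings is real
                              triage_minutes_per_finding = 15  # assumed defender triage cost

                              # The defender must triage everything, on every scan.
                              defender_hours = findings_per_scan * triage_minutes_per_finding / 60
                              # The attacker only needs one real, reachable bug to win.
                              attacker_expected_hits = findings_per_scan * true_positive_rate

                              print(f"defender: ~{defender_hours:.0f} person-hours of triage per scan")
                              print(f"attacker: ~{attacker_expected_hits:.0f} expected real bugs per scan")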

• jenniferplusplus@hachyderm.io wrote:

                            A couple of people seem very invested in me being wrong about this assessment. All I can say is that this would be the first time I have misclassified an AI claim as bullshit.

                            jenniferplusplus@hachyderm.io wrote (#55):

                            So here's the other thing that bothers me about all this. Regardless of the eventual results, this thing they're doing is *incredibly* resource intensive. They routinely spend billions of dollars on training these models, and billions more on operating them. It's not simple to parse out what fraction of that is directly attributable to the massive-scale vuln finder/fabricator. But for the sake of argument let's just pick a plausible number and call it 50-100 million dollars.

                            What could we have gotten for 50-100 million dollars of sponsorship for security audits? Prior to this, the largest single investment in FOSS security I'm aware of was the 2015 audit of openssl, after the heartbleed incident. It's hard to find precise costs for that, but I found a few sources estimating 1.2 million dollars, and that is arguably the most security-critical piece of software in the world.

                            But suddenly there's 100x more resources available to do this work, now that producing the artifact can be done with stolen labor? Now that they can externalize the cost of false positives onto the already mostly unpaid maintainers of these projects? Even if their claims are true, which we have no reason to believe and very good reason not to, it's still a travesty.
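
                            For scale, a quick back-of-envelope check of the comparison above, using the post's own figures (the 50-100 million is its stated assumption, not an audited number):

                                # Back-of-envelope arithmetic using the post's own figures.
                                openssl_audit_cost = 1.2e6   # post's estimate for the 2015 OpenSSL audit
                                for assumed_spend in (50e6, 100e6):  # post's hypothetical AI spend range
                                    ratio = assumed_spend / openssl_audit_cost
                                    print(f"${assumed_spend:,.0f} buys ~{ratio:.0f} OpenSSL-scale audits")
                                # -> roughly 42 to 83 audits at that scale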

• In reply to jenniferplusplus@hachyderm.io (#55, quoted above):

                              sci_photos@troet.cafe wrote (#56):

                              @jenniferplusplus 😔

• In reply to jenniferplusplus@hachyderm.io (#55, quoted above):

                                datarama@hachyderm.io wrote (#57):

                                @jenniferplusplus 100 million dollars of sponsorship for FOSS project security audits doesn't sell a promise that soon all the humans can be fired.

• In reply to jenniferplusplus@hachyderm.io (#55, quoted above):

                                  mnl@hachyderm.io wrote (#58):

                                  @jenniferplusplus while I agree with the "AI companies are mostly full of shit" part, this is the first announcement of this kind that I'm taking semi-seriously.

                                  Here's what's been happening over the last couple of months, and this is with _current_ models. There are step functions at play, and I think the step function from "at least some skill needed to wield an LLM to find security issues" to "everybody with $200 can exploit every OS/browser out there" should be considered very carefully.

                                  Nicholas Carlini saying he found more bugs with Mythos in 2 weeks than in his entire career is not something I can dismiss.

                                  Or Daniel Stenberg, certainly someone with actual authority and experience compared to me, showing the current situation:

                                  daniel:// stenberg:// (@bagder@mastodon.social): "I ran a quick git log grep just now. Over the last ~6 months or so, we have fixed over 200 bugs in #curl found with 'AI tools'." (mastodon.social)

                                  daniel:// stenberg:// (@bagder@mastodon.social): "If your Open Source project sees a steep increase in number of high quality security reports (mostly done with AI) right now (#curl, Linux kernel, glibc confirmed) please tell me the name of this project. (I'd like to make a little list for my coming talk on this.)" (mastodon.social)

• In reply to jenniferplusplus@hachyderm.io (#55, quoted above):

                                    integerpoet@sfba.social wrote (#59):

                                    @jenniferplusplus OpenSSL is important to the world. Software for which a CTO might be held responsible is important to that CTO. There should be more overlap, but there isn’t.

• In reply to mnl@hachyderm.io (#58, quoted above):

                                      jenniferplusplus@hachyderm.io wrote (#60):

                                      @mnl I'm not sure what I'm supposed to do with this. It feels like it's meant to dispute something I'm saying, but this is the same dynamic. The actual cost of operating these tools is 50-100x greater than what the vendors are charging, which the vendors are doing in the hope that it eventually becomes an inextricable part of all work, completely eliminating labor as a social power.

                                      Your hypothetical looks very different when it's "everybody with $20,000 (per month) can exploit every browser/OS out there." Which is actually true now. It was true 6 months ago. For as long as we've had software, it's been true that you could identify vulnerabilities in whatever software you wanted by paying a generous salary to full-time researchers.

                                      That's not what capital chose to do. And it bothers me that everyone is just adopting the capitalist framing on every goddamn word these companies spit out, as long as one of those words is AI.

• In reply to jenniferplusplus@hachyderm.io (#60, quoted above):

                                        mnl@hachyderm.io wrote (#61):

                                        @jenniferplusplus I don't think I made a hypothetical? I don't disagree with the rest, but I wouldn't call this announcement bullshit.

                                        I don't think saying that LLMs have gotten scarily good at finding vulnerabilities (not hypothetical) is adopting the capitalist framing; in fact, as someone who supports open source and the right to privacy, I think it needs to be taken pretty seriously, since we can assume that these tools are in the hands of governments.

                                        There are a fair number of people (and yes, "AI companies") combining more traditional approaches to vulnerability finding with small models with known externalities to do similar work. One example I could find (I'm not a security person), written as a direct reaction to the Mythos announcement: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier

• In reply to mnl@hachyderm.io (#61, quoted above):

                                          jenniferplusplus@hachyderm.io wrote (#62):

                                          @mnl My point is that you're reading these things like a warning, when you should be reading them like a threat.
