There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit

Uncategorized
75 Posts 38 Posters 0 Views
budududuroiu@hachyderm.io

    @jenniferplusplus I seriously doubt this is smoke and mirrors, recent models have improved significantly for cybersec and the industry is noticing:

    daniel:// stenberg:// (@bagder@mastodon.social)

    The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense.

    Mastodon (mastodon.social)

    Linux kernel czar says AI bug reports aren't slop anymore

    Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away

    (www.theregister.com)

    The industry consensus seems to be that there's going to be a torrent of vulnerabilities being found in all sorts of software, and they're not prepared to handle the blast radius. It's not surprising that Anthropic wants to give a select few a head start to tackle them. It would be nice if their token fund was open to all OSS projects to apply.

    I'm also pressing "X doubt" that you spend months coordinating between AWS, Apple, Microsoft, Google, and the Linux Foundation to organise this just because your tool's code leaked online.

mirth@mastodon.sdf.org
#7
    @budududuroiu @jenniferplusplus I wouldn't give Anthropic's motives a lot of credit here but LLMs do make bug hunting much easier.

jenniferplusplus@hachyderm.io

      There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit

      Only rarely do their claims actually bear scrutiny, and those are only the mildest of claims they make.

So, Anthropic is claiming that their new, secret, unreleased model is hypercompetent at finding computer security vulnerabilities and they're *too scared* to release it into the wild.

      Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software

      You may now resume doom scrolling. Thank you

griotspeak@soc.mod-12.com
#8
      @jenniferplusplus First thought I had when I read about this was “how is *Anthropic* a credible source for this?”

dngrs@chaos.social
#9
        @budududuroiu @jenniferplusplus some people have published numbers or noticed "a significant increase in quality" but none of these things bear any scientific rigor. My guess is that the one huge trick anthropic pulled was merely a bigger context window. Sure, that tends to give more context-related (not "true" or "accurate") results (duh!) but it's hardly revolutionary. LLMs are still statistical models doing fancy autocomplete & they know nothing about the world, I'll hold my breath

budududuroiu@hachyderm.io
#10
          @mirth That's fair, I do personally believe that Anthropic is more ideologically driven than most frontier AI labs, and they genuinely believe in the need to gatekeep Mythos. Sometimes that manifests itself as sniffing too many of your own farts.

          @jenniferplusplus

jenniferplusplus@hachyderm.io
#11
            @budududuroiu the same people would tell you the "industry consensus" among the rest of tech is that chatbots made programming dramatically more productive. The reality is that they mostly automate the creation of those same bugs and vulnerabilities

            So, you know

            Maybe wake me up when they're organizing this thing with someone who's not in the same trillion dollar hole as them

budududuroiu@hachyderm.io
#12
              @jenniferplusplus Finding problems vs. fixing them are two different bags of burritos. Zero days aren't valuable because they're so complex or unique, they're valuable because there have been zero days to fix them. I think AI coding is pretty trash, but AI debugging is very good.

              daniel:// stenberg:// (@bagder@mastodon.social)

              @pemensik@fosstodon.org @dirkhh@hachyderm.io the AIs are still better at finding problems than fixing them, in my experience

              Mastodon (mastodon.social)

              Anyways, wake up, they're organising this thing with someone not in the same trillion dollar hole as them: https://www.linuxfoundation.org/blog/project-glasswing-gives-maintainers-advanced-ai-to-secure-open-source

codinghorror@infosec.exchange
#13
                @jenniferplusplus I would like to remind everyone that Misanthropic and that little bitch Claude are among the worst actors out there, because it's a cult. An amoral, do-anything-to-win cult that actually believes they are building "sentient life". Which is totally insane. https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/

budududuroiu@hachyderm.io
#14
                  @the_decryptor I think so too, but I think that's the effective way to use LLMs, like a "magic" glue that can tie together or stack processes like Legos.

                  I think they also mentioned this in the blog, related to Mythos being capable enough to chain together tools AND vulnerabilities to achieve objectives.

                  @jenniferplusplus

chrisp@cyberplace.social
#15
                    @jenniferplusplus "Our new model is too dangerous for the public, we couldn't possibly release it! Anyway, you can subscribe to it for $150 a month."

wolfkin@mastodon.social
#16
@jenniferplusplus any presumed competence on behalf of an AI company is typically the work of impoverished humans in South Asia or South East Asia.

younata@hachyderm.io

                        @jenniferplusplus As too-online millennials would say: “x to doubt”.

                        Or, more politely: “extraordinary claims require extraordinary evidence”.

bms48@mastodon.social
#17
@younata @jenniferplusplus That last one was Carl Sagan. I have @emilymbender's and @Katecrawford's books on my table to read in the abundant free time I never have now

jenniferplusplus@hachyderm.io
#18
                          @budududuroiu yes, I noticed when you included them the first time. The Linux Foundation is a clearing house for coordination between everyone else on that list. They don't even consider kernel maintenance or distribution to be within the scope of their interests. They don't do what most people imagine they do

jenniferplusplus@hachyderm.io
#19
                            A couple people seem very invested in me being wrong about this assessment. All I can say is that this would be the first time I have misclassified an AI claim as bullshit

androcat@toot.cat
#20
                              @jenniferplusplus Literally seconds ago I wrote elsewhere: "first rule of LLMs: If someone from an LLM company says their model can do x, it can't do x, but it includes some thoughts and prayers to please do x."

budududuroiu@hachyderm.io
#21
                                @jenniferplusplus Yes, of course, no true Scotsman.

We're getting off topic here: RHEL is saying it's a problem, major Linux kernel devs like Greg Kroah-Hartman say AI vuln reports have been getting real, and my own anecdotal experience trying to constrain Claude from leaking `.env` files into its context, and seeing the creative ways in which it still manages to, tells me it's a problem.

                                I get that cynicism is running high right now, but I think it's intellectually dishonest.

EDIT: you don't need super-intelligence, you only need a model that makes researching zero days en masse cheap enough. Exhaustive fuzzing is intractable, but LLMs are great optimisers (i.e. modify a code hyperparameter, rerun, select the most fit candidates from a population of algorithms).

                                Navigating the Mythos-haunted world of platform security

                                The preview release of Claude Mythos presents a massive challenge for IT security experts, as well as an opportunity. Mythos' capabilities to identify complex memory safety issues and logic flaws hidden in legacy code as well as exploit them in increasingly sophisticated ways dramatically compounds and expands the outsize role AI scanning plays in open source. As an industry, we cannot react to this seismic shift with panic; instead, we need to reinforce the need for system resilience through context, skill and, ultimately, using AI ourselves.

                                (www.redhat.com)
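The "LLMs as optimisers" claim above describes an evolutionary search loop: propose a mutated candidate, rerun, keep the fittest. A minimal sketch of that loop, with stand-in `mutate` and `score` functions (a real harness would have a model propose input variations and score them by coverage or crash signals; none of these names come from Anthropic's or Red Hat's tooling):

```python
import random

# Toy evolutionary loop: mutate candidates, score them, keep the fittest.
# "mutate" and "score" are hypothetical stand-ins for an LLM proposing
# input variations and a fuzzer-style fitness signal.

def mutate(candidate: str) -> str:
    """Stand-in for an LLM proposing a variation of a test input."""
    pos = random.randrange(len(candidate))
    return candidate[:pos] + random.choice("abc%\\0{}") + candidate[pos + 1:]

def score(candidate: str) -> int:
    """Stand-in fitness: pretend 'special' bytes exercise more parser
    edge cases than plain text does."""
    return sum(c in "%\\0{}" for c in candidate)

def evolve(seed: str, generations: int = 20, population: int = 8) -> str:
    best = [seed]
    for _ in range(generations):
        pool = [mutate(random.choice(best)) for _ in range(population)] + best
        pool.sort(key=score, reverse=True)
        best = pool[:4]  # selection: carry the top candidates forward
    return best[0]

print(score(evolve("hello world")))
```

The point of the sketch is the economics, not the algorithm: each iteration is cheap, so the search scales with token budget rather than human attention.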

androcat@toot.cat
#22
                                  @budududuroiu

                                  Keep chugging that flavor aid.

dazfuller@mstdn.social
#23
                                    @jenniferplusplus but what about when their models created a full C compiler… oh, right.

                                    But what about when they said software development would be dead in 6-12 months… oh, again.

You know, it’s almost like they have an overactive marketing team

dalias@hachyderm.io
#24
                                      @jenniferplusplus "But if you're wrong this time and we don't panic and trust the slop salesman that he has a super duper vuln finder, we're all gonna get pwned!!!!!111111"

                                      🤡 🤡 🤡

jedbrown@hachyderm.io
#25
                                        @jenniferplusplus It's also important that to whatever extent this product actually works (I'm as skeptical as you are), it fundamentally preferences the attacker. The product has way too many false positives to run in CI, so the defender can only use it as part of an occasional audit. The attacker doesn't care about CI or development friction, and wins by finding one exploit in an entire stack, even if they have to wade through many false positives to find it.
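The attacker/defender asymmetry described above can be made concrete with back-of-envelope arithmetic. The numbers here are illustrative assumptions, not measured rates for any real scanner:

```python
# Back-of-envelope sketch of the asymmetry: a scanner emitting mostly
# false positives is unusable as a CI gate for the defender, yet still
# pays off for an attacker who only needs one real exploit.

reports = 1000          # findings per scan of a large codebase (assumed)
precision = 0.02        # fraction of reports that are real bugs (assumed)
triage_minutes = 30     # human time to dismiss or confirm one report (assumed)

true_bugs = reports * precision
defender_hours = reports * triage_minutes / 60

print(f"real bugs per scan:    {true_bugs:.0f}")      # → 20
print(f"defender triage cost:  {defender_hours:.0f} hours per scan")  # → 500
```

The defender pays the full triage cost on every run to stay clean; the attacker can stop at the first confirmed exploit, so even 2% precision leaves ~20 candidate exploits per scan.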

rrb@infosec.exchange
#26
@jenniferplusplus my favorite is the recent demand to drop the PDF file format, because the genius LLMs can't parse it
