> The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum.

Uncategorized · 15 Posts · 13 Posters
  • davidgerard@circumstances.run

    > The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. An AI agent responded with a solution, which the employee implemented – causing a large amount of sensitive user and company data to be exposed to its engineers for two hours.

    lol and - furthermore - lmao

    Meta AI agent’s instruction causes large sensitive data leak to employees
    Artificial intelligence agent instructed engineer to take actions that exposed user and company data internally
    the Guardian (www.theguardian.com)

  • hipsterelectron@circumstances.run
    #3

    @davidgerard a friend of mine caused an incident at fb when he removed an incredible amount of duplicated vendored code ostensibly because they have an ML-based packaging tool that suddenly failed in response to a much smaller input. one issue with vendored code is that changes to it are not really detectable; the second issue is that you can't update it for security fixes.

    i mention this because facebook has very frequently spoken of how security needs to be the default and tooling built to make it easier to write secure code; sure, it's facebook, perhaps best to ignore that. but there should be no way a single change makes this possible in the first place. twitter was under a 10-year FTC consent decree for failing to sufficiently protect user data (they lied about this to their engineers). accessing user data is not something a single code change can achieve unless user data is already visible to insufficiently permissioned services.

    the point is this sounds like a great thing to leak to the press if you believe your sneaky code path is about to get burned by a whistleblower. it also serves as an explanation to their own employees. stochastic parrot can't generate a cryptographic key and any security engineer would know this. what this does say is that the regulatory environment is sufficiently dead in the water that they feel safe to leak criminal neglect to the press.
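
    [Editor's note: the vendored-code problem described above, that silent changes to a vendored tree are hard to detect, is commonly mitigated by pinning a content hash of the vendored directory and failing the build if it drifts. The sketch below is a minimal illustration of that idea, not Meta's or Google's actual tooling; the paths and the pinned digest are hypothetical.]

    ```python
    # Minimal sketch: make silent edits to vendored code detectable by hashing
    # the whole vendored tree (relative paths + file contents) into one digest
    # and comparing it against a digest recorded at vendoring time.
    import hashlib
    from pathlib import Path

    def tree_digest(root: str) -> str:
        """Return a single SHA-256 digest over every file under `root`."""
        h = hashlib.sha256()
        for path in sorted(Path(root).rglob("*")):  # sorted => deterministic
            if path.is_file():
                h.update(str(path.relative_to(root)).encode())
                h.update(path.read_bytes())
        return h.hexdigest()

    # A CI check might then pin the digest recorded when the code was vendored
    # (hypothetical path and value):
    # PINNED = "ab12..."  # recorded at vendoring time
    # if tree_digest("third_party/libfoo") != PINNED:
    #     raise SystemExit("vendored code changed without re-pinning")
    ```

    Any modification, including one hiding a vulnerability, changes the digest and forces an explicit re-pin that an auditor can review.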

  • hipsterelectron@circumstances.run
    #4

    @davidgerard i mention vendored code because google does the code vendoring too and it's an easy way for someone to hide vulnerabilities from auditors as well as their own employees, which is one plausible interpretation of this leak

  • europlus@social.europlus.zone
    #5

    @davidgerard and let me ask you, who wears the risk, liability, and consequences here given the corporate push to use AI?

    I hope the employee doesn’t suffer any consequences (above the background radiation of consequences any Meta employee should suffer).

  • justinmac84@mastodon.social
    #6

    @davidgerard I wonder if that $64 million to boost election candidates against the regulation of AI seems like such a good idea now, Mark. 🤔

  • clickhere@mastodon.ie
    #7

    @hipsterelectron I feel like you may have buried the lede in this post.

    @davidgerard

  • shwell@mastodon.au
    #8

    @davidgerard My work banned me from agentive AI because I know too much... they are scared something like this would happen and they are right.

  • alisonw@fedimon.uk
    #9

    @davidgerard
    One wonders whether the engineer knew in advance that the response was non-human?

  • saxnot@chaos.social
    #10

    @hipsterelectron @davidgerard well said!

  • soozcat@vmst.io
    #11

    @AlisonW @davidgerard If not, it seems very much like we've made a silicon version of The Thing. And are now trying to get it to run everything, with predictably disastrous results.

  • ghostonthehalfshell@masto.ai
    #12

    @davidgerard

    The *now how much will you pay for it* crowd that seems, at least within Microsoft, to be experiencing austerity because tokens cost too much

  • drchaos@sauropods.win
    #13

    @JustinMac84 @davidgerard sure, because now fuck ups have no consequences for them...

  • uint8_t@chaos.social
    #14

    @davidgerard wtf I love AI now

  • alisonw@fedimon.uk
    #15

    @Soozcat @davidgerard
    It seems to me that you have made an entirely accurate statement of fact. 😥
