> The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. An AI agent responded with a solution, which the employee implemented – causing a large amount of sensitive user and company data to be exposed to its engineers for two hours.
lol and - furthermore - lmao
Meta AI agent’s instruction causes large sensitive data leak to employees
Artificial intelligence agent instructed engineer to take actions that exposed user and company data internally
the Guardian (www.theguardian.com)
@davidgerard this is how The Facebook was made, business as usual
-
@davidgerard a friend of mine caused an incident at fb when he removed an incredible amount of duplicated vendored code ostensibly because they have an ML-based packaging tool that suddenly failed in response to a much smaller input. one issue with vendored code is that changes to it are not really detectable; the second issue is that you can't update it for security fixes.
i mention this because facebook has very frequently spoken of how security needs to be the default and tooling built to make it easier to write secure code; sure, it's facebook, perhaps best to ignore that. but there should be no way a single change makes this possible in the first place. twitter was under a 10-year FTC consent decree for failing to sufficiently protect user data (they lied about this to their engineers). accessing user data is not something a single code change can achieve unless user data is already visible to insufficiently permissioned services.
the point is this sounds like a great thing to leak to the press if you believe your sneaky code path is about to get burned by a whistleblower. it also serves as an explanation to their own employees. stochastic parrot can't generate a cryptographic key and any security engineer would know this. what this does say is that the regulatory environment is sufficiently dead in the water that they feel safe to leak criminal neglect to the press.
-
@davidgerard i mention vendored code because google does the code vendoring too and it's an easy way for someone to hide vulnerabilities from auditors as well as their own employees, which is one plausible interpretation of this leak
-
@davidgerard and let me ask you, who wears the risk, liability, and consequences here given the corporate push to use AI?
I hope the employee doesn’t suffer any consequences (above the background radiation of consequences any Meta employee should suffer).
-
@davidgerard I wonder if that $64 million to boost election candidates against the regulation of AI seems like such a good idea now, Mark.

-
@hipsterelectron I feel like you may have buried the lede in this post.
-
@davidgerard My work banned me from agentic AI because I know too much... they're scared something like this would happen, and they're right.
-
@davidgerard
One wonders whether the engineer knew in advance that the response was non-human?
-
@hipsterelectron @davidgerard well said!
-
@AlisonW @davidgerard If not, it seems very much like we've made a silicon version of The Thing. And are now trying to get it to run everything, with predictably disastrous results.
-
The *Now how much will you pay for* crowd, which at least within Microsoft seems to be experiencing austerity because tokens cost too much
-
@JustinMac84 @davidgerard sure, because now fuck ups have no consequences for them...
-
@davidgerard wtf I love AI now
-
@Soozcat @davidgerard
It seems to me that you have made an entirely accurate statement of fact.