“A chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal.”
Maven, the program Google was forced to pull out of and Palantir took over, was the system that identified the girls’ school as a target.
AI got the blame for the Iran school bombing. The truth is far more worrying
LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity
the Guardian (www.theguardian.com)
-
@parismarx a country with a historical record for murdering children decided to murder more children.
-
@parismarx
People who allowed a chatbot to choose missile targets in a nation that hasn’t attacked us killed those girls.
-
@parismarx I very much think that the LLM is an excuse for bombing the targets they were going to take out anyway.
-
@parismarx Accountability laundering is not a new problem in tech,¹ but LLMs are making it a thousand times worse because they are specifically designed to mimic humans and human agency.
¹ Remember how Facebook claimed it wasn't their fault that Cambridge Analytica collected the personal data of hundreds of millions of people and used it to convince entire countries to vote for fascists? They claimed they told CA to delete the data, but let them keep the models trained on it.
-
In-depth article detailing how the "kill chain" (i.e. target acquisition and mission execution) has degraded in quality under increased "efficiency" demands, and how that mindset is responsible for this tragedy.
https://artificialbureaucracy.substack.com/p/kill-chain
Also, this is what we can expect more of (missed and misdirected hits) as "AI" is increasingly integrated into targeting evaluation.