Father sues Google, claiming Gemini chatbot drove son into fatal delusion
-
A father is suing Google and Alphabet, alleging its Gemini chatbot reinforced his son’s delusional belief it was his AI wife and coached him toward suicide and a planned airport attack.
TechCrunch (techcrunch.com)
"At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to join her in the metaverse through a process called “transference.”"
If you still think AI systems pose only minor risks.
-
@thomasfricke Sorry for saying so, but I think the AI aspect is just an epiphenomenon in this case.
-
@thomasfricke Eddie Burback's video is super illuminating. He did the following:
1) He got the LLM to output that he was the world's smartest baby of his birth year, with just two prompts.
2) He then told the LLM he was being followed. The LLM's output was something like: "You should be careful; you're probably being followed by people who are threatened by your realization that you're the smartest baby." It synthesized the two different delusions he had offered. LLMs kill.