this study is actually brilliant, love it! https://www.theregister.com/2026/02/18/generating_passwords_with_llms/
-
the problem is that everyone prompting LLMs is doing it in isolation, so it **seems** like the results are unique, but they're not, especially for common prompts like "generate me a password." this is also why it's a terrible idea to ask ChatGPT to write you a cover letter: it's probably gonna sound suspiciously like everyone else's cover letter!
-
maybe we could call this the one-to-many-to-one problem: a common prompt scenario (one) leads to many similar results (many) that get exposed as LLM output when they all land in front of a common recipient (one), who is the only party in a position to notice the similarity.
I imagine school teachers have to deal with this scenario all the time.
-
@peter ...wow
-
That reminds me of YouTubers who have incorporated LLMs into their schtick to the point that they ask them for “a random number between 1 and 6” when performing the “I let the LLM choose what I order from the menu” bit or whatever. It doesn't work either, but at least it's only entertainment.
-
@peter @shafik 1. I think this is easily solved by just giving the LLM access to a function that returns real randomness when it wants one. (And I think it would be a good idea to do so, because clearly real users are asking LLMs for random passwords. Maybe even give the LLM zxcvbn for extra spiciness.)
2. (1) should not be taken to mean that the people saying "of course it's random, AI did it" deserve anything but derision.
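for what it's worth, here's a minimal sketch of what such a tool could look like in Python, using the standard `secrets` module (the function name and signature are made up for illustration, and this skips the zxcvbn strength check):

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Hypothetical tool the LLM could call instead of 'inventing' a password.

    Draws from the OS CSPRNG via the secrets module, so every call is
    independent, unlike sampling a language model, where common prompts
    converge on common outputs.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

the LLM would then just relay the tool's output verbatim rather than generate the password itself.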