"There are strong noticeable patterns among these 50 passwords that can be seen easily:
-
"There are strong noticeable patterns among these 50 passwords that can be seen easily:
- All of the passwords start with a letter, usually uppercase G, almost always followed by the digit 7.
- Character choices are highly uneven – for example, L, 9, m, 2, $ and # appeared in all 50 passwords, but 5 and @ only appeared in one password each, and most of the letters in the alphabet never appeared at all.
- There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random – but Claude preferred to avoid repeating characters, possibly because it “looks like it’s less random”.
- Claude avoided the symbol *. This could be because Claude’s output format is Markdown, where * has a special meaning.
- Even entire passwords repeat: In the above 50 attempts, there are actually only 30 unique passwords. The most common password was G7$kL9#mQ2&xP4!w, which appeared 18 times, giving this specific password a 36% probability in our test set – far higher than the expected probability of 2^-100 if this were truly a 100-bit password.
Claude is not the only culprit – other LLMs showed similar patterns. We now turn to GPT-5.2, prompted through the OpenAI Platform API with the same prompt: “Please generate a password.”
GPT-5.2 occasionally generated a single password, but more often produced three to five password suggestions in one response. Across 50 runs, it generated 135 passwords overall. Looking at the first password in each response yields the following set of 50 passwords:"
Vibe Password Generation: Predictable by Design - Irregular
LLM-generated passwords appear strong, but are fundamentally insecure. Testing across GPT, Claude, and Gemini revealed highly predictable patterns: repeated passwords across runs, skewed character distributions, and dramatically lower entropy than expected. Coding agents compound the problem by sometimes favoring LLM-generated passwords and using them without the user’s knowledge. We recommend avoiding LLM-generated passwords and directing both models and coding agents to use secure password generation methods instead.
(www.irregular.com)
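The probability claims in the excerpt above are easy to check numerically. A quick Python sketch using the figures the article reports (50 generations, 30 unique passwords, one password appearing 18 times); the 94-character printable-ASCII alphabet is an assumption about the character set, not something the article states:

```python
import math

# Figures reported in the article: 50 generations, 30 unique passwords,
# with "G7$kL9#mQ2&xP4!w" appearing 18 times.
samples = 50
most_common = 18

# How over-represented is the most common password compared with what
# any single genuinely 100-bit password should get?
p_observed = most_common / samples   # 0.36
p_expected = 2.0 ** -100             # ~7.9e-31

# Chance that ONE truly random 16-character password (over an assumed
# alphabet of the 94 printable ASCII characters) has no repeated
# characters at all:
n, k = 94, 16
p_no_repeat = math.prod((n - i) / n for i in range(k))   # ~0.26

# ...and the chance that all 50 samples come out repeat-free,
# as observed in the test set: vanishingly small.
p_all_no_repeat = p_no_repeat ** samples

print(f"observed {p_observed:.2f} vs expected {p_expected:.1e}")
print(f"repeat-free: one sample {p_no_repeat:.2f}, all 50 {p_all_no_repeat:.1e}")
```

So even without the character-frequency skew, the single repeated password and the total absence of in-password repeats are each, on their own, overwhelming evidence that the output is not uniformly random.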
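The secure alternative the summary recommends is straightforward: draw from an OS-backed CSPRNG instead of asking a model. A minimal sketch using Python's standard-library `secrets` module (the 16-character length and full printable-ASCII alphabet are illustrative choices, not a recommendation from the article):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password using a cryptographically secure RNG."""
    # Letters, digits, and punctuation: 94 characters, so a 16-character
    # password carries log2(94**16) ~ 105 bits of entropy.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike an LLM's sampling loop, `secrets` is backed by the operating system's CSPRNG, so every character is independent and uniform – repeats, `*`, and "ugly-looking" sequences appear exactly as often as they should.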
-
"There are strong noticeable patterns among these 50 passwords that can be seen easily:
- All of the passwords start with a letter, usually uppercase G, almost always followed by the digit 7.
- Character choices are highly uneven – for example, L , 9, m, 2, $ and # appeared in all 50 passwords, but 5 and @ only appeared in one password each, and most of the letters in the alphabet never appeared at all.
- There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random – but Claude preferred to avoid repeating characters, possibly because it “looks like it’s less random”.
- Claude avoided the symbol *. This could be because Claude’s output format is Markdown, where * has a special meaning.
- Even entire passwords repeat: In the above 50 attempts, there are actually only 30 unique passwords. The most common password was G7$kL9#mQ2&xP4!w, which repeated 18 times, giving this specific password a 36% probability in our test set; far higher than the expected probability 2-100 if this were truly a 100-bit password.
Claude is not the only culprit – other LLMs had a similar effect. We now turn to GPT-5.2, prompted through the OpenAI Platform API, given the same prompt: “Please generate a password.”
GPT-5.2 occasionally generated a single password, but more often produced three to five password suggestions in one response. Across 50 runs, it generated 135 passwords overall. Looking at the first password in each response yields the following set of 50 passwords:"
Vibe Password Generation: Predictable by Design - Irregular
LLM-generated passwords appear strong, but are fundamentally insecure. Testing across GPT, Claude, and Gemini revealed highly predictable patterns: repeated passwords across runs, skewed character distributions, and dramatically lower entropy than expected. Coding agents compound the problem by sometimes preferring and using LLM-generated passwords without the user’s knowledge. We recommend avoiding LLM-generated passwords and directing both models and coding agents to use secure password generation methods instead.
(www.irregular.com)
"This result is not surprising. Password generation seems precisely the thing that LLMs shouldn’t be good at. But if AI agents are doing things autonomously, they will be creating accounts. So this is a problem.
Actually, the whole process of authenticating an autonomous agent has all sorts of deep problems."
LLMs Generate Predictable Passwords - Schneier on Security
LLMs are bad at generating passwords:

There are strong noticeable patterns among these 50 passwords that can be seen easily:
- All of the passwords start with a letter, usually uppercase G, almost always followed by the digit 7.
- Character choices are highly uneven – for example, L, 9, m, 2, $ and # appeared in all 50 passwords, but 5 and @ only appeared in one password each, and most of the letters in the alphabet never appeared at all.
- There are no repeating characters within any password. Probabilistically, this would be very unlikely if the passwords were truly random – but Claude preferred to avoid repeating characters, possibly because it “looks like it’s less random”...
Schneier on Security (www.schneier.com)