This article has “why stabbing yourself with a screwdriver is bad” vibes.
Yes. It really makes no sense to take a screwdriver instead of a knife.
Had me wondering: if you ask an LLM for a random number from 1 to 100, what distribution do you get? Surely many have run this experiment. Here's a link that looks like a good example: https://sanand0.github.io/llmrandom/
That is interesting data. Just from looking at those graphs, it looks like AIs consistently avoid the number 69, likely because of safeguards against anything that could read as offensive. Otherwise its training would probably tell it that it's a really nice number.
I wonder what the human results would look like. If a friend asks you, maybe you say 69, but in a psych exam people might avoid it.
I imagine you'd get a distribution similar to the one you get when asking humans to come up with a random number on the spot.
If anyone is that desperate for a secure random password, here's a Perl one-liner I came up with that generates cryptographically secure passwords with all unique characters using /dev/urandom. No dependencies:
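The one-liner itself isn't shown above, so here's a hedged Python sketch of the same idea: read bytes straight from /dev/urandom, use rejection sampling to avoid modulo bias, and keep only characters not already used. The charset and length are illustrative choices, not anything from the original one-liner:

```python
# Illustrative sketch (not the original Perl one-liner): /dev/urandom
# bytes mapped into a printable charset, with rejection sampling so the
# distribution over characters stays uniform, and uniqueness enforced.
CHARSET = ("abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
           "0123456789!@#$%^&*")

def unique_char_password(length=20):
    # Largest byte value we accept; anything above is rejected so that
    # (byte % len(CHARSET)) is uniform over the charset.
    limit = 256 - (256 % len(CHARSET))
    picked = []
    with open("/dev/urandom", "rb") as urandom:
        while len(picked) < length:
            b = urandom.read(1)[0]
            if b >= limit:
                continue  # reject biased tail of the byte range
            c = CHARSET[b % len(CHARSET)]
            if c not in picked:
                picked.append(c)  # keep only unused characters
    return "".join(picked)

print(unique_char_password())
```

Requires a Unix-like system for /dev/urandom; the all-unique-characters constraint actually shrinks the search space slightly, which is one reason vetted generators usually don't impose it.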
This invites a dictionary attack: not of common words, but of tokens that carry some weight in the training data as parts of "good" passwords.
At least for "normal" text generation, if you instead tell the LLM to generate a Python script that produces a random password, and then run that script, the result may be of better quality.
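For example, the kind of script you'd want the LLM to emit looks roughly like this: the randomness comes from the OS CSPRNG via Python's secrets module, not from the model's token sampling. The alphabet and length here are arbitrary choices:

```python
import secrets
import string

# The security of the output depends only on the OS CSPRNG behind the
# secrets module, not on who or what wrote this script.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=24):
    # secrets.choice draws uniformly from ALPHABET using the CSPRNG.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

This is the crucial distinction: a model can safely write the generator even though it can't safely be the generator.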
> LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool)
This seems like kind of a pointless analysis to me? Humans also generate bad passwords. It's why we use crypto-hardened RNG tools.
It’s pointless if you believe no one is asking LLMs to generate passwords for them.
Humans will always smash a screw with the handle of a spoon and be proud of themselves when they manage to do it.
I mean, people are still rotating <month><year> passwords because they refuse to remember anything. I only know this because I am in a customer-facing position, and these customers rarely care about revealing their passwords when they need help...
huh, for me it just generates <username>123 when I ask it to generate a password lol, sometimes adds a !, more often it just forces changeme rather than having any password.
I only clicked on the article with no intention of reading it (no time), but rather out of morbid curiosity as to why on earth anybody would need to be told that LLMs should absolutely not be used to generate passwords.
> [...] Despite this, LLM-generated passwords appear in the real world – used by real users, and invisibly chosen by coding agents as part of code development tasks, instead of relying on traditional secure password generation methods.
Jesus F'ing Christ. I hope to have time to read the whole thing later.
The article is a bit of a strawman, and a bit of an advertisement for a security consultancy. If you ask someone else to pick a password for you, then it's a secret known by two people. So don't do that. That was true a thousand* years ago. It's still true today.
*I know, I know, hash functions didn't exist on Earth a thousand years ago. Still true.
I urge you to actually read the article, because it doesn't say anything about the risks of the LLM knowing your password (e.g., stored in server-side logs); it talks about LLMs generating predictable passwords because they are deterministic pattern-following machines.
While the loss of secrecy between you and the LLM provider is a legitimate risk, the point of the article was that you should only use vetted RNGs to generate passwords, because LLMs will frequently generate identical secure-looking passwords when asked to do so repeatedly, meaning that all a bad actor has to do is collect the most frequent ones and go hunting.
The loss of secrecy between you and the LLM only poses a risk if the LLM logs are compromised, exposing your generated passwords. The harvesting of commonly-generated passwords from LLMs poses a much broader attack surface for anyone who uses this method, because any attacker with access to publicly available LLMs can start mining commonly generated passwords and using them today without having to compromise anything first.
You're right; I could have phrased the issue better, though I certainly did read the article. Let me try again: letting someone else pick a password for you requires you to trust that they did it well, and you get no benefit in exchange for that trust. That's true for other humans, websites, and now LLMs.
The article reads like it was written by a machine.
Honest question: how much money would I make off an MCP service to generate passwords for claws and agents? Is there still gas left in the griftmobile, are prospectors still in need of shovels, will the gods bless my humble, shameless lunge for my slice of the pie?
There is a marketplace for free skills (in this case, a markdown file saying "run openssl rand -hex 32").
I do not think there is any money for something that trivial.
Even the irrationally exuberant VCs wouldn't put money in that.
No, but if those VCs let their AI agents purchase things on their behalf, you could maybe trick those agents into thinking your cloud service was the better option.
Not much because if you gain any traction, within a day somebody will make a clone and make it free/open source.
This is the default answer for all vibe coded slop business ideas for a while.
why would you have an LLM generate a password?!?
Obligatory https://xkcd.com/221/