>"A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, said the person, who asked not to be named for fear of reprisal. The group has been using Mythos regularly since then, though not for cybersecurity purposes, said the person, who corroborated the account with screenshots and a live demonstration of the model..."
In other news, the AI company recently found to have a poor cybersecurity posture still has poor cybersecurity.
How is this even possible? You'd think you would use a frontier-tier cybersecurity model to secure itself.
Mythos can secure all models that cannot secure themselves. Can Mythos secure itself?
So… the AI model deemed ‘too dangerous to release’, that the US government is using… had unauthorized users?
I’m starting to think the AI bubble won’t so much burst as explode.