3 points | by danaris 11 hours ago
2 comments
I think¹ we will never be able to secure LLMs against malicious inputs unless we drastically limit the kinds of inputs they are allowed to accept.
¹ https://matthodges.com/posts/2025-08-26-music-to-break-model...
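As a rough sketch of what "limiting the kinds of inputs" could look like, here is a minimal Python gatekeeper that only forwards plain, printable text to a model and rejects everything else. The MIME allowlist and the call_llm function are hypothetical illustrations, not anything from the linked post:

    import unicodedata

    # Hypothetical allowlist: only plain text ever reaches the model.
    ALLOWED_MIME_TYPES = {"text/plain"}

    def sanitize_text(text: str) -> str:
        """Drop control/format characters that can hide instructions in text."""
        return "".join(
            ch for ch in text
            if unicodedata.category(ch) not in {"Cc", "Cf"} or ch in "\n\t"
        )

    def gatekeep(mime_type: str, payload: bytes) -> str:
        """Reject non-text inputs outright; sanitize the rest."""
        if mime_type not in ALLOWED_MIME_TYPES:
            raise ValueError(f"Input type {mime_type!r} is not allowed")
        text = payload.decode("utf-8", errors="strict")  # refuse malformed encodings
        return sanitize_text(text)

    # Usage: only sanitized text is passed to the (hypothetical) model call.
    # prompt = gatekeep("text/plain", b"Summarize this document.")
    # response = call_llm(prompt)

This obviously throws away entire modalities (audio, images, rich documents), which is the trade-off the comment is pointing at: the attack in the linked post works precisely because the model accepts audio at all.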
[dead]