4 points | by anonhaven 5 hours ago ago
1 comment
It might be my inner Luddite talking, but LLM use in defense and intelligence terrifies me. What happens when built-in model biases or hallucinations affect human safety? Who is to blame and how will this be mitigated? Fascinating but scary.