LLMs are a liability issue waiting to happen. And this is an early example.
Results based on probability and statistics are not logic --- they are guesswork. And guesswork alone cannot be defended as legally responsible in many (if not most) cases --- certainly not in matters of life or death.
The message here is clear --- you can use AI for decision making, but you had better be able to logically justify the results. But producing and documenting that justification nullifies much of the incentive for using AI in the first place.