AI is just a bystander here. Someone you can blame if you don't care about accuracy or fairness.
"The DOT says a human being is tasked with reviewing every one of those machine-generated infractions, but the agency declined to say how many employees are dedicated to the human review process. The DOT also declined to say how human reviewers missed the hundreds of erroneous bus lane violations issued along the M79 and Bx35 routes."
So what we have is new software with a bug (perhaps due to something missing in the spec?) and a rollout where they started with people reviewing every decision in order to catch bugs. The reviewers who missed 800 errors were human, not AI.
The article is very clear that the A.I. did not know the rules which it was supposed to be enforcing. That's by-dictionary-definition incompetence.
In an all-human system, the fact that (say) Quinn in Q.C. kept failing to notice that Peter in Production was installing batteries backwards would not absolve Peter of incompetence. (Though it would point to some additional incompetence, further up the management ladder.)
Does it matter, though?

Consider what would happen if it were otherwise: suppose they used the GPS coordinates of the bus lanes instead of an AI-based image recognition thing and left the rest of the system unchanged. GPS is a little jittery, so it would locate some cars 2 m from where they really were, producing false positives.
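To put rough numbers on that (the lane width, jitter and distances below are made up for illustration, not taken from the article), here is a quick Python sketch of how a legally parked car near the lane edge gets flagged once you add a couple of metres of GPS noise:

    import random

    # Hypothetical numbers: a bus lane modelled as a 3 m wide strip; a car is
    # flagged if its *reported* position falls inside the strip.
    LANE_HALF_WIDTH_M = 1.5     # assumed half-width of the lane
    GPS_JITTER_SIGMA_M = 2.0    # "a little jittery": ~2 m standard error

    def reported_offset(true_offset_m):
        # True lateral distance from the lane centreline, plus GPS noise.
        return true_offset_m + random.gauss(0.0, GPS_JITTER_SIGMA_M)

    def flagged(true_offset_m):
        return abs(reported_offset(true_offset_m)) <= LANE_HALF_WIDTH_M

    # A car legally parked 2.5 m outside the lane edge (4 m from the centreline):
    trials = 100_000
    false_positives = sum(flagged(4.0) for _ in range(trials))
    print(f"false positive rate: {false_positives / trials:.1%}")

Run it and a noticeable fraction of perfectly legal parkers get flagged. Same failure mode, no AI involved.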
The bad effects came about through a chain of three mistakes: ① the AI misclassified about 2% of the cases, ② the human reviewers didn't catch it, and ③ they sent real tickets during the test phase. Breaking the chain at any point would have prevented the bad outcome. So where is that possible?
At ③, that's trivially possible. If the printer prints a warning letter instead of a ticket, no tickets are delivered.
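In code, breaking the chain there is a one-line policy decision (a sketch only, with made-up names; the article doesn't describe DOT's actual software):

    PILOT_PHASE = True  # assumption: the rollout is still in its test phase

    def dispatch(plate, location):
        # During the pilot, every confirmed violation is downgraded to a
        # warning letter, so an upstream bug can at worst produce confusing
        # mail, never a fine.
        if PILOT_PHASE:
            return f"warning letter to {plate} ({location}), no fine"
        return f"ticket issued to {plate} ({location})"

    print(dispatch("ABC-1234", "M79 bus lane"))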
At ②, it's also possible. A 2% misclassification rate is usually high enough for human reviewers to catch. Remember that the reviewers don't have to catch all 800 errors, only enough of them to discover that the case "cars parked legally in bus lanes" is mishandled.
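A back-of-the-envelope check (the 5% is an assumption, not a measured number): even if an individual reviewer only notices a given bad case one time in twenty, the odds that all 800 slip through unremarked are effectively zero.

    catch_prob = 0.05   # assumed chance a reviewer flags any single bad case
    errors = 800        # erroneous violations reported in the article

    p_all_missed = (1 - catch_prob) ** errors
    expected_caught = catch_prob * errors

    print(f"expected flagged cases: {expected_caught:.0f}")      # 40
    print(f"probability none get flagged: {p_all_missed:.2e}")   # effectively zero

A handful of flagged cases would have been enough to reveal that legally parked cars were being mishandled.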
At ① … I don't think so. Replacing the AI-based system with a GPS-based one would lead to the same outcome. Replacing it with software that has no bugs in the test phase would work, but that's pie in the sky.
Blaming the AI is unreasonable when replacing the AI with a realistic alternative wouldn't change the outcome.
Overdue addition needed for https://en.wiktionary.org/wiki/AI#English -