> According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.
Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."
Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").
> I feel like I'm going crazy with this narrative.
We're only getting warmed up. There are programmers on HN that will take the output of their favorite AI, paste it and run it. And we're supposed to be the ones that know better.
What do you think an ordinary person is going to do in the presence of something they cannot relate to anything else except an oracle, assuming they even know the term? You put anything in there and out pops an extremely polished-looking document, something that looks better than whatever you would put together yourself, full of information and all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those who are not.
They're going to fall for it, without a second thought.
And they're going to draw conclusions from it that you'd think could use a little skepticism. Too late now.
The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology?
Some police departments seem to actively reject candidates that have higher scores on IQ tests. Not that I think IQ test scores and actual intelligence are related, but it clearly shows their intended target candidate group.
Police get raises and recognition for closing cases. In general they don't care if you're guilty or not; that's someone else's problem. Same with the detective, same with the DA. The more cases they close, the "tougher they are on crime".
You're over-selling the minimum level of intelligence in homo sapiens.
What you're stating is your wishful thinking. Don't get me wrong. I'd also like what you say to be true. It very much is not. Quite the opposite, which is why salespeople "work".
The amount of AI bullshit Senior+ level developers just paste to me as truth is astonishing.
As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: Therac-25), I would hope that the conversation is about structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm.
Not all accountability need be punitive; we can certainly talk about systemic guardrails. What I find hard to believe is someone claiming that the Chief of Police saying "We are not going to talk about that today" is not the biggest scandal, but the AI is.
> someone claiming that the Chief of Police saying "We are not going to talk about that today" is not the biggest scandal, but the AI is.
Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general
institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened.
Yea but this feels like when a Waymo ran over a cat, and a Human driver ran over a toddler and both got the same level of coverage in the media (actually the cat got more follow-up coverage). And I'm supposed to believe both issues are equally important.
No. That's gaslighting, and totally misplaced political activation.
What do you propose we do in the latter situation? The news isn't the value of the life that was (presumably) lost. The news is the circumstances that made that loss possible. The human driver was maybe careless, or maybe didn't look. The child safety classes I took emphasized over and over again to look around your car and yard before backing your car out. This is a problem with a known solution that unfortunately still happens despite the best efforts to prevent it.
Waymo hitting a cat is obviously less tragic, but if it can hit a cat, what else can it hit? A toddler? A human? The wall of your kitchen? This is a problem that has no known solution; furthermore, it's a problem that the engineers at Waymo don't seem overly keen on solving quickly.
"Among his accomplishments has been establishing the department’s Real Time Crime Center that leverages technology and data to support officers in responding more effectively to incidents," the city's release said. "Zibolski also prioritized officer wellness initiatives to strengthen mental health resources and resilience within the department. He reinstituted the Traffic Safety Team to focus on roadway safety and proactive enforcement, and ... played an active role in statewide discussions on various issues affecting law enforcement."
From the same article... He spearheaded a push to "leverage technology and data to support officers in responding more effectively to incidents", then that same technology mistakenly ruined a woman's life by passing along a hit to an officer who compared it with her FB photos and said "sure, seems right".
The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second guessing whatever it spits out.
You are right IMO to question why North Dakota police were able to obtain this Tennessean woman in the first place; you’d think something like that should require far stronger evidence than a facial recognition match.
But then, what is facial recognition good for? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard: if a computer thinks you look like a suspect, you can be harassed by police in a state you’ve never even been in?
I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.
Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.
It is absolutely not reasonable to use low-quality photos to decide someone halfway across the country with no history of even leaving their local area is 'a suspect'.
You are exactly correct. Cops cannot be trusted. We spent a lot of time pointing that out in 2020. AI is the least of our problems with policing.
Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020.
You’re on the right track here but I don’t think it should be hand-waved away as “the least of your problems” - it’s yet another weapon that police in the USA can use against the population with impunity. They’re going to have to reckon with all of this in the coming years - cops having guns and armored cars, “qualified immunity”, the “stop resisting” workaround for brutality, and now this AI.
You can hold someone responsible only after they've actually fucked up. And with the way things move in the criminal justice system, that can take months to discover. Holding them responsible doesn't really fix anything, it's purely reactive.
Cops are already susceptible to confirmation bias, and for "efficiencies" they are delegating part of their job to apparently magical tools that will only increase their confirmation bias. And because it is for efficiency you can bet they won't be given extra time to validate the results.
What or who is at fault isn't either/or, it's a bunch of compounding factors.
You’re going crazy because up until this exact moment you’ve never had to confront the reality that these tools, placed into the hands of the common man, are viewed as authoritative and lack any accountability or consequence for misuse.
For anyone who has been victimized by law enforcement or governments before, we’ve been warning about this shit for decades. About the lack of consequence for police brutality. The lack of consequence for LPR abuse. The lack of consequence for facial recognition failures and AI mismatches.
You need to understand that by using these systems correctly and holding yourself accountable, you are in the minority. Most people do not think that critically, and are all too happy to finger the computer when things go badly.
And until you accept that, and work to actually hold folks accountable instead of deflecting blame away from the tool, then this won’t actually change.
Do you mean hypothetically could society hold law enforcement personnel accountable for mistakes, bad judgement, flagrant criminal conduct, horrendous abuse of any and everyone? Certainly, a large scale and comprehensive restructuring of America’s law enforcement and prosecutorial system is legally possible.
However, I hold to the opinion that if you are discussing actual reality, based on decades (if not the entire period post civil war, for near certainty) of historical examples and the current “majority” position of the US electorate: there is a nearly unqualified NO. We cannot, or will not, hold law enforcement accountable for even intentional, planned, and malicious conduct in a vast majority of cases. There is practically no accountability at all, and that’s just for thoroughly proven intentional conduct. Bad judgement, alleged mistakes, etc are even less able to result in any action.
The reality of the legislation and precedent ensure it. It’s not a bug, it’s a feature.
It's called qualified immunity. Many support its repeal. I hope you join them, and convey the same to your local representatives and candidates. Until it is reformed few if any officers or administrators of criminal justice in the United States will ever feel any type of accountability.
Short of video evidence of a blatant gun-to-the-back-of-the-head-style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. Criminal charges against officers are exceedingly rare. She should be able to sue this detective directly. Of course she can sue the government too, and should. But without any personal consequences for the people carrying out these acts, taxpayers will continue to bail out these practices without ever noticing. Your own government should not be a shield for a police officer who has violated you or your neighbors.
There's nothing to repeal. Qualified immunity is a doctrine that the judicial branch made up out of thin air, with no legislative backing.
But agreed, we need legislatures to write laws that expressly hold police accountable, and declare that they are not shielded from liability when things go wrong due to their own failures and negligence.
While the origins of qualified immunity are judicial, at least one state loved the idea so much they went and made it statutory too. Louisiana’s 2024 bill explicitly removes negligence as an exception (which is a valid method to circumvent qualified immunity based on jurisprudence at the federal and most state levels). Louisiana requires intentional violations or criminal actions to even be able to bring a claim.
> Short of video evidence of a blatant gun-to-the-back-of-the-head-style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice.
I mean, this is the USA we're talking about. Cops are given huge authority over everyone else, with poor accountability. AI just lets them pretend to be even less accountable. And by "pretend" I of course mean "get away with it".
You should tell that to Angela Lipps, I'm sure she told every cop she came in contact with she had never been to Fargo. Cops have a responsibility to do their job, part of that job is listening and relying on proof. ALL those cops were either too lazy or were afraid of their superiors. This is unacceptable for the amount of power and information they have access to. We should either de-fund the police system or reform the hell out of it. BTW, where was her state representative during this fiasco?!?
The belief by a juror that law enforcement personnel cannot be trusted, especially phrased as a belief that applies to law enforcement personnel as a generic group, is a well-established basis for a challenge for cause leading to exclusion of that person from the jury. The US jury system is built explicitly on excluding these types of beliefs from juries in order to ensure fairness, impartiality, and individual and case/witness specificity of “triers-of-fact”.
I could understand someone who disagrees with it, but your position would be antithetical to current and historical thought on what defines a fair jury.
It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it.
It's also why people start parroting the phrase "the purpose of a system is what it does". Look at where we are right now: a precipice before this becomes widely used in all forms of policing. We still have a chance to police the police's use of the AI.
The purpose of using AI to identify suspects in criminal cases is to ease the burden of manual searching for a suspect (or insert whatever the purpose of statement you want). Ok, but we're getting false positives that are damaging people's lives already in the early stages. And I don't want to hear "trust me bro, it will get more accurate" as an excuse to not regulate it.
At a minimum, we should enshrine the right to appeal AI and have limits on how it can be used for probable cause.
This isn't even the only recent case of this happening. There was another case of mistaken identity due to AI. [0] Sure 4 hours isn't the same as 5 months, but still this guy wanted to show multiple forms of ID to prove who he was! The bodycam footage was posted a few months back but never got traction here.
Like if the police officer can't read numbers, they can't do breathalyzer tests on people. If the AI can't be used responsibly, then it can't be used at all.
So what? There were false arrests and convictions made by misuse of line-ups, DNA, eye-witnesses, photos, bloodstains, fingerprints, etc. since forever. You must also blame all those other technologies, so what do you think the police should use to find suspects? In your view, the more help police have, the worse a job they'll do. Is that actually the trend?
> With all other proof you mentioned, there was always a human putting his signature.
There was a human doing that in this case; AI doesn’t initiate charges. “In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.”
This woman lost most of her material possessions and was terrorised by "goons"... The police do this stuff regularly, as black people, immigrants, "white trash" etcetera know well. Another opportunity, presented by AI models, for more routine police oppression.
AI is, in this case, a tool enabling it, because trawling large databases with AI allows finding people with a degree of similarity to a suspect that would reasonably constitute probable cause in the context of what was until fairly recently the norm for police work, because that work relied on proximity and connections to the crime. The understanding of probable cause, and of what is necessary for it given the actual investigative process in a case, including the use of large databases unconnected with the events and locality of the crime, needs to adapt.
The point that you're missing is that, in a system where such abuses are possible, many of us really don't want one more tool in their box for them to fuck us with.
Like, they already prove themselves incompetent- giving the power to track anyone in the US via a distributed ALPR system just makes them more dangerous. Giving them all these "AI" based tools does the same.
This particular "AI bogeyman" isn't just AI; it's cops with AI and in particular cops with facial recognition tools, dragnet LPR surveillance tools, and all this other new technology that essentially picks somebody's name out of a hat to have their life temporarily (or [semi-]permanently) ruined by shithead cops who won't ever face any real accountability.
This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools.
Yes! This is about why mass surveillance and dragnets and the like are horrible. These all suffer from people not being able to understand the base rate fallacy (https://en.wikipedia.org/wiki/Base_rate_fallacy)
Even if AI facial recognition gets really really good, and is 99.999% accurate, if you use it in this way you are going to arrest more innocent people than guilty people.
If you find a suspect, who has a lot of evidence pointing to them being the criminal and you run a test that is 99.999% accurate and it tells you they are guilty, they are probably guilty.
But if you take that same test and run it against the entire population of the country, it is going to find about 3,500 people who match with "99.999% certainty". That gives roughly a 0.03% chance that any given match is the guilty person.
People don't think like this, though, so they think the person must be guilty.
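A minimal sketch of that arithmetic in Python (the population size, single true offender, and error rate are illustrative assumptions, not claims about any real system):

```python
# Base rate fallacy: even a "99.999% accurate" matcher, run against an
# entire population, produces far more false matches than true ones.
population = 350_000_000        # assumed US-scale search database
true_offenders = 1              # assume exactly one actual culprit
false_positive_rate = 1 - 0.99999

false_matches = population * false_positive_rate   # ~3,500 people
total_matches = false_matches + true_offenders

# Chance that any single flagged person is actually the culprit.
p_guilty = true_offenders / total_matches
print(f"false matches: {false_matches:,.0f}")      # 3,500
print(f"P(guilty | flagged): {p_guilty:.4%}")      # ~0.0286%
```

The probability flips depending on whether you test one well-evidenced suspect or dragnet an entire population; that flip is the whole fallacy.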
They don't seem to give a single iota of a fuck about that when a private regular person has their money stolen or their car totaled by a hit-and-run driver. Finding some innocent person to arrest would indicate they are at least pretending to give a fuck, yet they seem to only be bothered to even keep up appearances when it is the bank being robbed.
Sorry, I disagree. This is an example of the corruption inside the American legal system. The cops are at the level of us regulars, and their judgement and actions seem to have no supervision or accountability.
It's not just the shithead cops, it's the voters. All the "Blue Lives Matter", "thin blue line", "back the blue" propaganda works towards giving police infinite powers with zero accountability. This is what voters want and they've said so loudly over and over again.
Let me help you out with this comprehension issue. The point of my comment is that I disagree with the apparent premise of the comment I replied to, which is that "AI" is some generic investigative tool that we can neatly snip out of the picture to blame this incident on human factors at the individual level ("the professional human-in-the-loop who shirked all responsibility"). Said comment also implies that people are fixating on the AI aspect of this issue while ignoring the human factors, which IMO is a strawman. To me, the existence of AI in its current incarnations and the ways in which law enforcement will inevitably abuse it are, together, inseparably, the problem. AI (in the most general sense) opens up entire new dimensions for potential abuse.
As a concrete example:
> And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. So saying that it has "nothing to do with AI" is totally ridiculous.
> Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all.
How do you arrive at that conclusion? Because it happened, and it wasn't an AI overseeing (the lack of) due process. The police identifying suspects is part of their job. So are arrest warrants and all the rest of it. I honestly don't see what AI had to do with anything here. All I see is a gaping systemic issue that could have happened regardless of AI if the wrong person got the wrong idea or had a personal vendetta.
Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list. We blame the systemic practices and legal apparatus that permitted it all to happen in the first place.
You might as well blame the SUV manufacturer because without vehicles the police wouldn't have been able to drive over to make the arrest, right?
Because it's beyond obvious? How would this woman have ended up in jail if she hadn't been misidentified by the facial recognition software in use by the Fargo police? She lives 3 states over; would be a hell of a coincidence if some other avenue of investigation led them to her.
> I honestly don't see what AI had to do with anything here.
You honestly don't see what facial recognition software had to do with a woman being misidentified by facial recognition software?
> Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list.
I actually am completely willing to blame any entity that supplies ICE with the names of people it can reasonably assume will be targeted for "enforcement action" due to said entity representing said names as being legitimate targets for said enforcement action, without taking reasonable care to ensure said representation is correct in each and every case.
What you don't seem to understand is that these abuses of law enforcement authority are predicated on at least an appearance of legitimacy, which can be provided by (e.g.) an app with (presumably) a very official looking logo that agents can point at somebody to get a 'CITIZEN' or 'NOT CITIZEN' classification. It is upon this kind of basis that they perform illegal arrests. All parties—the app vendor and ICE, as well as the people who are meant to be overseeing ICE and providing accountability—are complicit enablers in these crimes. To absolve the vendors who provide the software knowing full well what it will be used for, what its limitations are, and how unlikely it is that ICE personnel will understand those limitations and work around them to keep everything legal, is totally absurd.
It isn't obvious, no. If I drop a hammer on my foot and break my toe I can't then blame the hardware store or the manufacturer. If the store didn't carry hammers I wouldn't have been able to purchase it, I think to myself. Then I couldn't possibly have dropped it on my foot, thus my toe wouldn't be broken right now. It's a specious line of reasoning.
It doesn't matter in the slightest by what means she was selected to "win" this particular lottery. The tool rolling the dice isn't to blame. Tools (and people!) will occasionally return spurious results. Any system needs to be set up to deal with that.
So no, I honestly don't see what facial recognition software has to do with gross negligence and process failure on the part of multiple government agencies.
> without taking reasonable care to ensure said representation is correct in each and every case.
Only if that was part of the contract. Was the product delivered according to specification or not?
What if ICE used FOSS tools to put together the list themselves? Are the tools still to blame? That would obviously be absurd.
The only way the provider (never the tool) could be at fault would be something such as willful negligence or knowingly and intentionally attempting to manipulate the user's actions to some end.
What you don't seem to understand is that human negligence can't be foisted off on tools. Of course an abuser will try to play his actions off as legitimate. That isn't the fault of the tool, it's the fault of the abuser. It isn't up to an app to determine the legitimacy of LEO agent actions. Neither is it the responsibility of an arbitrary, fungible government contractor to oversee ICE.
I think you're confusing the morality of participating in a broader ecosystem with moral culpability for the process failure associated with a specific event. You can advance a reasonable argument that AI companies that choose to do business with ICE are making an at least moderately immoral decision. However that doesn't place them at fault for the specific process failures of any particular event that happens.
If you don't agree that facial recognition software is involved in a case of a woman being misidentified by facial recognition software then there is no point in me spending any more time/effort in conversation with you. Goodbye.
You seem to be intentionally ignoring the point I made. I never disputed that facial recognition software was used (i.e., involved).
The facial recognition tool didn't arrest her. It holds no authority, has no will of its own, and does not possess a corporeal form with which to enact change in the world. The only parties that could possibly be at fault here are various government agents who clearly acted with negligence, failing to uphold their duty to the law and the people.
If you're unable to rebut my point then perhaps you should consider that you might be in the wrong? If you're unwilling to entertain such a possibility then I wonder why you're posting here to begin with. What is your goal?
> This particular "AI bogeyman" isn't just AI; it's cops with AI
You can’t separate the thing from how it will be used. It’s like arguing that cars on their own aren’t particularly dangerous, but the point of buying a car is to use it thus risking the general public.
But you can in fact argue exactly that. If (arbitrary example) pedestrians are being killed due to poor road engineering practices it isn't reasonable to point at cars and say "see those are the root problem" when in fact it's due to a willful lack of sidewalks or marked crossings or whatever. Being adjacent to something bad doesn't equate to being the root cause.
History shows the timeline of dependence here. Before the introduction of cars, “poor road engineering practices” wouldn’t result in those deaths. So clearly it’s cars that are necessitating sidewalks, etc.
Same deal here: if something “becomes a problem” because of the introduction of AI, it’s AI that is the root cause of the resulting issues. Many people are tempted to argue that flawed humans can’t implement the perfect system that is Anarchy, Communism, Recycling programs, or whatever, but treating systems as needing to operate on the real world is productive where complaining about humans isn’t.
Well, (I thought it was obvious that) I was referring to roads constructed relatively recently. If cars necessitate sidewalks and the city chooses to cut costs by not putting those in, that isn't the fault of automobile designers or manufacturers or dealers or private owners or whoever.
To your example, technology changes and that necessitates infrastructure changing. That doesn't mean that fault for mishaps in the meantime can be attributed to the new technology. A user operating the new technology in an obviously unsafe manner is solely at fault for his own negligence.
The safest street designs still result in automobile fatalities. You can at best mitigate the issue with better street designs but not address the underlying issue.
Failing to acknowledge cars as the root cause may be comforting, but it blinds you to viable solutions.
Indoor shopping malls for example solve many of the issues with cars by forcing people to move around on foot in a little island surrounded by a sea of very low density parking. They aren’t perfect solutions, but they still saved a lot of lives and time.
Saying people are misusing a new technology is just another way of saying that technology is flawed. This doesn’t mean you can’t utilize it, but pretending flaws don’t exist has no value.
At this point I think that AI will perform human duties better than humans. So probably it's better to let AI autonomously jail people, of course with all the necessary procedures as required by law.
Devils advocate: what if a facial recognition system with a large enough database can always find an unrelated/innocent person that looks similar enough to convince the human?
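Under simple independence assumptions, that isn't just possible but near-certain at scale. A quick sketch (the per-comparison false positive rate is an assumed illustrative figure, not a measured one):

```python
# Probability that a gallery of N faces contains at least one innocent
# person who clears the match threshold, given a per-comparison false
# positive rate fpr. Numbers are purely illustrative.
def p_any_false_match(n: int, fpr: float = 1e-5) -> float:
    return 1 - (1 - fpr) ** n

for n in (10_000, 1_000_000, 100_000_000):
    print(f"N = {n:>11,}: {p_any_false_match(n):.2%}")
# N =      10,000: ~9.5%
# N =   1,000,000: ~99.995%  (a convincing lookalike is all but guaranteed)
# N = 100,000,000: ~100%
```

So with any nationwide gallery, the devil's-advocate scenario is simply the expected behavior of the system.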
Reminds me of a case that just popped up in my neck of the woods.
Man gets pulled over on an expired plate. They search based on this fact, find a pill bottle (for Irritable Bowel Syndrome) and magically find he’s trafficking cocaine and fentanyl.
I've always maintained one of the worst things that can happen to you is sitting in court before a jury of your peers, because most can't comprehend the meaning of the law outside of their feelings. NOW the worst thing is having yourself in the hands of cops who just don't give a damn or became a cop for the use of power.
AI is being used by bureaucrats and enforcers to justify lazy, harmful conclusions. You don't live in the real world if you think "just punish the bureaucrats, don't make it about AI" is going to remotely rectify this toxic feedback loop and ecosystem.
No, we definitely should punish bureaucrats and enforcers who act negligently. If someone in a position of authority flagrantly fails to do his job and it directly harms someone he should be held accountable. That would provide a strong incentive for future actors to take their responsibilities seriously.
If an engineer signs off on an obviously faulty building plan and people die as a result we hold him accountable. This is no different.
It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.
Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it.
Computers are wildly dangerous, not because of anything innate but because of how humans act around them.
> It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.
This is literally the plot of most of those books and the way they differ is in how everything falls apart. In some of them the AI supplants us entirely and kills us all. In others it gets taught to kill us all. In others it gets really good at giving us what we ask for until everything falls apart. But it’s taken as a given that unless we change something innate in our culture AI will be our downfall.
> If a person had made this facial match, another human would have relentlessly jeered him.
The glaringly obvious problem here is that our justice system should not be constructed in such a way so as to be reliant on someone's coworker shaming him. That is not a sensible check against a systemic failure. We're supposed to have due process. If someone skips or otherwise subverts due process the justifications don't matter. The root issue is that due process was skipped. Why was that even possible to begin with?
Automation has a strong tendency to degrade diligence.
I see this all the time in operational / production settings. Having a loop with automation reviewed and approved by a human degrades very fast. I only approve automation that has a quick path to unsupervised operation.
100% 100% 100%
humanity is so obsessed with ai that we're losing...our humanity. "blame the mindless, soulless robots! how could we have possibly known that they need to be supervised?! aren't they basically just humans that don't need to rest or eat?"
It isn't, the article doesn't claim (or even imply) that it is "the fault" of AI, only that AI was part of the chain of events, and nothing is the fault of AI until AI is sufficiently advanced to constitute a moral actor. “At the source of every error which is blamed on the computer, you will find at least two human errors, one of which is the error of blaming it on the computer” remains true.
OTOH, it is potentially the fault of the reliance human actors put on an AI determination.
It's the fault of the tool because our society treats the tool's judgement as superior to a human's and as something to be trusted completely, as a means of deflecting accountability - something any and every minority group has been warning about for fucking decades.
The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does).
If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of office of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like.
>rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skill
The magical past where people had critical thinking skills never existed. We put a lot of trust in tools because people are un-fucking-reliable. Hence why in most cases actual physical evidence does a far better job than witness testimony.
This said, people are lazy. It is one of our greatest and worst traits. When we are allowed to be lazy, especially with tools bad things happen.
This was not a series of errors, this is (as a statistical inference) the system working as designed. This is not uncommon, it is not unplanned. The extradition of suspects from State to State is designed legislatively to function this way.
I also think there is more nuance to this situation than AI bad // Human Bad :: choose one. But while a tragedy, the ‘correct’ functioning of a system that produces tragedy doesn’t make that functioning an error.
I agree, but our system doesn’t value things that way. Texas, which is one of the highest-paying states for cases where intentional, fraudulent, or grossly negligent actions result in wrongful incarceration, pays $80,000 per year a person is locked up. But the caveat is that time only starts counting after you are sentenced, so it wouldn’t even apply in TFA’s case.
It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0]
> There's a reason why we don't let AI autonomously jail people.
Yes we do. [1]
> and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Her guilt was assessed. That's why she had no bail. It assessed it incorrectly, but the error is more complicated than your reaction implies.
To clarify one point, her not having a bail is a function of the way interstate ‘fugitive’ warrants are designed. The Court in Tennessee had no ability to set bail, and until she entered the physical custody of North Dakota she can not have bail set.
Also, her guilt was not assessed in any common meaning of the term. The requirement for holding a person in custody, with or without bail, is probable cause. The only thing assessed was whether law enforcement presented a statement to a judge that could be believed in the light most favorable to the prosecution.
Humans being human. Getting lazy, being incompetent, getting incompetent with AI use, or simply being biased. The wrongfully arrested person doesn't even resemble the perpetrator.
Maybe if they were held accountable for these actions, they would act responsibly?
There's no way this isn't a slam dunk case to sue the piss out of the Fargo Police, probably the US Marshals and maybe other orgs. The woman in the surveillance photo clearly looks way younger, among the many other obvious signs this woman didn't do it. I hope she wrings at least several million dollars out of the government.
It literally doesn't matter -- you're focused on the wrong thing. She could be that woman's exact twin and it wouldn't matter. Spending six months in jail and losing your house, your car, and your dog with the flimsiest of evidence is ridiculous.
'You can beat the rap but not the ride' has been a pop culture reference in the US since the 1940s. Our society wants/supports the ability for this to be inflicted at police/court whim on people.
A lawsuit is exactly what matters. They learn only the hard way, and no other way. If you want them to not be ridiculous, a lawsuit with large punitive damages is the only practical way to get there.
I disagree. The city or state gets sued and they pay the result from the taxpayer funds and literally nobody learns anything, especially not the hard way. Everyone is so completely divorced, and in some cases immune, from consequences that this will change nothing.
After a couple million dollar lawsuits the city or state will learn to be more careful with their methods. It's the taxpayer funds, but it's not an endless supply of money. Cities and states have their own budgets.
I haven't lived there in years, nor do I have exact numbers, but they make national news enough for the same problem nearly every year. I'll drop you some links if you care.
> The region's GDP is 100 billion dollars, so these are tiny amounts, although they may seem large to some.
It's a fair point and easy to handwave away "it's only $100 per resident." But it's a lot of money still. And yet that city is shutting down schools and selling off school properties to make budget this year. I bet they'd love to have those wasted millions.
> You think they can safely 10x that?
I have no idea the reason for this question. The OP said cities learn after a couple million dollar suits. I'm showing that no, they do not. If anything suits are increasing.
You can be arrested, indicted, and held in jail on pretrial, and there is literally no recourse. There are many other ways jail can happen without due process. Where I live:
* Civil contempt. Absolute immunity. No due process. Record is about 16 years. Having a bad day? Judge can toss you in jail.
* "Dangerous." Half a year. No due process. He-said she-said.
* "Insane." Psychiatric hold. Three days. Due process on paper, not in practice. Police in my town can and do use this if they don't like you.
Absolutely no recourse. You come out with a gap in income, employment, and, if you missed rent/mortgage, no home. Landlords will simply throw your stuff away too.
You're also basically damned if things do move forward, since from jail, you have no access to evidence, to internet (for legal research), and no reasonable way to recruit a lawyer (and, for most people, pay for one).
Can happen to anyone. Less common if you're rich and can afford a good lawyer, but far from uncommon.
The GP seems to be suggesting that there's no recourse at all, usually. You might bring suit against a police department or LE agency, but you'll fail to find any relief there. True that qualified immunity only protects individuals, but there's a raft of other things that makes it hard to get a judgement against a police department, too.
I think there's probably one major exception: civil rights violation investigations. But even then, the people doing the investigating seem to be biased toward the LEOs.
The GP's linked article doesn't seem to even talk about this, so not sure why that's there.
> You might bring suit against a police department or LE agency, but you'll fail to find any relief there.
I don't know if I'd go so far to say she won't find any relief, but it probably still could be a pretty tough Monell claim against the department (although it's hard to tell from the sparse details in the article):
"[A] local government may not be sued under [42 U.S.C.] § 1983 for an injury inflicted solely by its employees or agents. Instead, it is when execution of a government's policy or custom, whether made by its lawmakers or by those whose edicts or acts may fairly be said to represent official policy, inflicts the injury that the government, as an entity, is responsible under § 1983." [1]
I could see a problem if there was a policy/custom of relying on AI facial recognition alone without any other corroborating evidence (would be a really stupid practice, but I'm sure stupider things have become part of a police department's systemic practices). Or if there was a failure to sufficiently train detectives about the erroneous tendencies of this technology. Maybe the needlessly prolonged detention without bail could be an issue if there was a lack of adequate protocols to expedite in a reasonable amount of time.
Either way, still seems hard to say this a slam dunk case for her, unfortunately. But also seems too risky for the city of Fargo to not settle, at least nominally.
>* "Insane." Psychiatric hold. Three days. Due process on paper, not in practice. Police in my town can and do use this if they don't like you.
A friend of mine was committed longer than 3 days without counsel or the ability to represent themselves in the hearing. Apparently the whole process of being committed is ex parte in practice in some states.
This is a bit hyperbolic and the exaggerations really undermine what I think is your broader point (that there is rarely recourse when you're held for short to moderate amounts of time). It is hard for me to believe that someone was held for 16 years on civil contempt without due process or that someone was held for half a year without due process after being deemed dangerous. The reason that is hard for me to believe is that the due process is implicit in the action you describe. Civil contempt is from a judge which implies that you're already in court - that's due process. Someone being labeled "dangerous" implies that a finding was made by a neutral party - that's due process.
Just because you disagree with the outcome doesn't mean that due process wasn't given.
Yeah it's "due process." In civil contempt the judge is a witness and prosecutor in the very "process" they're judging. That's the most perverted form of due process imaginable.
A judge should have to recuse themselves if they are acting as witness to the supposed infraction.
Civil contempt isn't some roving criminal charge that jumps out of the jury box randomly. It's meant to make somebody comply with a court order. Anybody in civil contempt holds the keys to the jailhouse door in their own hands, all they have to do is comply.
This statement should make you uncomfortable. It makes me uncomfortable because it is a pure expression of the power of the state. But it's still due process.
In criminal contempt, the maximum duration of imprisonment is limited. In civil contempt it is not, until somebody decides that one will never comply. You may call it due process. I call it what it is: torture and a fucking crime against humanity. A judge that holds a person for years for being stubborn deserves nothing more than to walk the plank.
Qualified immunity doesn't apply to criminal cases. It is used to defend against civil suits. It is unfortunately very easy to find many cases where it leads to injustice. For example:
>...Abby Tiscareno, a licensed daycare provider in Utah, was wrongfully convicted of felony child abuse when a child under her care suffered brain hemorrhaging. After calling emergency services, subsequent medical tests supported these findings. However, during her trial, requested medical records from the Utah Division of Child and Family Services (DCFS) were not provided. It wasn’t until a civil suit that Ms. Tiscareno saw pathology reports suggesting the injury could have occurred outside of her care. She was granted a new trial and acquitted. Her subsequent lawsuit for due process violations, alleging that DCFS failed to provide exculpatory evidence, was dismissed due to lack of precedent indicating DCFS’s obligation to produce such evidence.
Off of taxpayer money sadly. Imo we really need a fix for this. When cops are grossly negligent the money should come out of their aggregate pension fund (or at least partially).
> we really need a fix for this. When cops are grossly negligent the money should come out of their aggregate pension fund
This is on us as voters. If we didn’t piss our pants every time a police union sneezed, we’d realize wholesale restarting police departments has precedent in even our largest cities.
Yes, this is the key point. Tax payers get a nice big bill while the people who caused the problem get a nice paid vacation while they conduct an internal "investigation" that typically finds they did nothing wrong.
Yeah, of course they need to held accountable, and we need to vote in people who will do so. What I'm suggesting is an alignment of incentives that will ensure that police will try to do their best to not be negligent.
Of course there's a balance that has to be struck so that police are empowered enough to act. So perhaps something like settlements against the police being 30% borne by the police pension fund and 70% by taxpayers is sufficient. I think this will also make police very enthusiastic about bodycams and holding each other accountable.
I'm usually a big supporter of labor unions, but police unions in the US generally have an outsized amount of power, and even when mayors etc. want to hold police accountable, the union ends up bending the mayor over a barrel.
I'm not sure what the solution is here. Forbid police from unionizing? That would probably have some bad consequences too.
despite this being something practically everybody wants, the fact that it hasn't happened is not a coincidence and speaks to the power of police unions/guilds and their lobbying arms. outside a few toothless instances, those groups are extremely good at reframing these attempts and mobilizing their bases to vote against the broader public interest.
> despite this being something practically everybody wants,
No, everybody does not want police accountability. Half the population will fall on a grenade to prevent that. They know that the purpose of the police is to keep the undesirables in line, and they never envision that they will ever fall in that category.
oh, i generally don't disagree with you on that point; i specifically meant that when presented with the question "do you want your tax dollars to pay for police liabilities?" the answer is probably almost always "no".
Sure. But when you ask "Do you want the police to be unable to do their job and live in a lawless hellscape ran by gangbangers and ISIS cartels?, the answer is also 'No.'
The problem is that the mass media sets the framing of acceptable discourse, and that mass media is in large part an ideological monoculture. And even when it's not, it is happy to present absolutely insane batshit lunacy as 'one of the two sides' of an issue.
Almost all taxpayer funded pension funds are already underfunded. It makes no difference if the funding decreases or increases, the government employee will still get their benefit. The government would have to go through bankruptcy to get the benefit amount reduced.
imho the US Marshals are the only innocent party here, as my understanding is they don't do investigations and just serve warrants without any knowledge of the underlying case.
People famously do not learn from the experiences of others. It's a big reason why life is so hard when you'd expect it to be pretty easy based on our collective experience.
It's pretty common when a dog is abandoned. Likely her children couldn't afford to care for it. I suppose there is a chance they put it up for adoption (same outcome is likely).
No large scale orgs that I know of. Our local bar has an attorney who does work against it; she has her number at the jail, where other inmates will pass it around if someone mentions their dog, and intake officers will often suggest to inmates that if they have pets they call her. She is absolutely the most hard core lover of dogs I have ever met, and she will literally drive/run into danger to get a canine to a local no-kill shelter.
I can almost guarantee that the Fargo DA’s office had zero idea this happened and had never heard about this investigation before the news story. At this stage in a “case” it’s completely on law enforcement and there is no involvement by a DA’s office for the arrest warrant or the extradition order and warrant that led to this situation.
Not a fan of DA’s offices in general (they are the “evil twin” to my particular line of work after all), but realistically this one isn’t on them.
It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff (who is responsible for the jail inmates). I hope everyone involved in this travesty is sued into oblivion and unable to hide behind their immunity defenses. Facial recognition should never be the sole basis for a warrant.
> It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff
Yes, it's critical to remember that multiple parties can be at fault. In a case like this, it is true that
a) law enforcement misused a tool and demonstrated extreme negligence
b) the judiciary didn't catch this, which suggests systemic negligence there too when it comes to their oversight responsibilities
c) the company selling/providing this AI tool should have known it was likely to be misused and is responsible for damages caused by such predictable usage
We cannot have a just world until our laws and norms result in loss of jobs and legitimacy as punishment for this sort of normalized failure, from all three parties. Immunity is a failed experiment.
Even if she were a dead ringer (clearly not the same person to any human who glances at the image), common sense should tell you that among 340,000,000 Americans there are a lot of lookalikes. Clearly there's a kind of stupid belief in the mystic powers of AI and a callous disregard for the well-being of suspects. No one should be dragged 1000 miles and held for months based on a facial match, especially when exculpatory evidence was easily available.
To be specific (and it is a lot of the reason why this 5-month delay happened): she was not dragged then held, she was arrested, then held, then dragged. She was released 5 days after finally getting to North Dakota; if they had actually gone and gotten her promptly, the hold would have been ~30 days plus the 5 prior to interview and charges being dropped.
It isn’t much of a salve, but the particulars do matter when trying to assign fault to the proper parties (who are still clearly the Fargo cops in this particular tragedy).
Doesn't look like it. I've come across this account a few times now. Engages and makes reasonable comments except for certain politicized issues, where he acts like an indoctrinated zealot.
The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo.
In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
The software worked exactly as intended. It's a filtering tool that sifts through data for common patterns to provide leads, not matches. It raises a flag on persons of interest. You can be a "match" anywhere between 0 and 100%, and only relative to some specific input (like that overhead picture of the woman at the teller). In that sense, mismatches are within acceptable parameters and have been known to happen.
A "match" is a pronouncement ultimately made by the humans that uses the tool, after they've checked out the leads. Someone slept at the wheel here.
Wow, so many failures of the legal system. While the incompetent/malicious/lazy investigators that used the facial recognition and only that are obviously at major fault, I'd actually put larger blame on the judge that signed the arrest warrant. They are supposed to be a check on such incompetent/malicious/lazy-ness not just a rubber stamp. Unfortunately there's really no recourse against incompetent/malicious/lazy judges.
Of course this would have been bad enough if it had happened where she lived, but the holding for 5 months adds a whole 'nother level of insight into the brokenness of the legal system. I'd be interested in hearing more about why that happened. Is that just something that happens sometimes if you have a public defender?
I followed the inquiry when it was ongoing — all of the depositions were live on YouTube. The level of both hubris and incompetence involved in that case was breathtaking.
John Bryan, aka The Civil Rights Lawyer, recently did a piece about a similar case of mistaken identity. The consequences weren't as severe, but the willingness to trust the AI over any other evidence was the same:
The video shows a police officer blindly trusting a casino's AI software, even when a cursory investigation should have given any reasonable person enough of a reason to question whether the man he arrested was the same man accused of a crime. (And then, even after it was confirmed he was not, the prosecutor continued to charge him with trespassing!)
Out of curiosity, was the guy known for being fast and loose with the rules? Put more simply, was he a good cop, or did he have a history of going rogue?
Lazy, stupid pigs should be held accountable for misusing AI like this and hauling people into a system like that based on some AI's whim and a Facebook peek, having done no actual investigative work.
Let's see the pig that called for her arrest and wasted 4 months of her life spend 4 months in jail.
I really, really need folks to understand that deflecting blame away from the tool and trying to hold the human accountable feeds right into the marketing playbook of these companies in the first place.
The cops cannot be held accountable because the laws basically give them immunity. The politicians cannot be held accountable beyond being tossed out at the next election, because the laws otherwise give them immunity. The people operating the system cannot be held accountable, because the systems are marketed as authoritative despite being black boxes and lacking in transparency; they trusted the system just as they were told to, and thus cannot be held accountable.
And so when every human in the chain cannot be held accountable for these things, and the law prevents victims from receiving apologies, let alone recourse, then the tool and its maker is the only thing we can hold accountable. By deflecting blame away from the tools ("it wasn't AI, it was facial recognition"; "the human had to sign off on it"; "humans made the arrest, not machines"), you're protecting quite literally the only possible entity that could still potentially be held accountable: the dipshits making these stupid things and marketing them as superior and authoritative when compared to humans.
You want accountability? Start holding capital to account, and this shit falls away real fucking fast. Don't get lost in technical nuance over very real human issues.
I disagree. If you focus on holding the software creators to account in lieu of the humans in the loop, then we only reinforce the behavior of offloading thinking to the system.
If I am a cop in another jurisdiction and I see that in this case of error, the facial recognition company was held to account but not the police or municipality, I will be more likely to blindly trust the software assuming that they either patched it or will take responsibility.
You forgot one: capital cannot be held accountable for making a tool used in a crime. It is a simple generalization of the Protection of Lawful Commerce in Arms Act (PLCAA), passed in 2005, which largely bars civil lawsuits against gun makers and sellers when their products are later used in crime.
Is there anything to suggest this sort of injustice isn't happening in low-tech all the time, constantly, all over the country, and the only reason it's getting attention here is because AI is involved?
>Unable to pay her bills from jail, she lost her home, her car and even her dog.
Fargo police say the bank fraud case is still under investigation and no arrests have been made.
Except in "Brazil" it was a mechanical error in a deterministic machine caused by an invasive outside actor. It would be reasonable to trust that the autotypewriter/printer would faithfully output the correct text.
Modern AI seems incapable of any respectable amount of accuracy or precision. Trusting that to destroy somebody's life is even more farcical than the oppressive police in "Brazil".
Gofundme? This woman needs some $$ and a lawyer. She may not know it yet, but if she makes some smart moves, she's about to be rich and Fargo is about to learn a very hard lesson.
This problem predates modern AI. https://en.wikipedia.org/wiki/Computer_says_no is built upon the deliberate abdication of responsibility to processes that cannot be held accountable. AI is just letting them do it at scale.
That doesn't mean we should accept it from AI. We should fight the blind yielding to the facade of authority regardless of whether the decision was made by an AI or an insect landing on a teleprinter at the wrong time.
Just reading the headline I said to myself: bet this is in America.
Every time I see something like this I can never quite believe this sort of stuff happens. Complete, life ruining incompetence, with no consequences for the idiots that caused this to happen. Ignoring the AI input, which to me has nothing to do with this (it was used as a tool to identify a potential suspect), this woman went to jail for 5 months on the opinion of someone with no other evidence. Only in America.
I wish we saw more invocations of speedy trial rights. Trials MUST begin for felony charges in ND within 90 days of a defendant invoking those rights (must be invoked within 14 days of arraignment)[0].
Defendants don't invoke that because in most states and federally, they build the case against you slowly over a long period of time before arrest, then stall as long as possible on discovery, then when they finally fulfill discovery they overwhelm you with a bunch of useless stuff so that it takes forever to get the useful information. Invoking the right to a speedy trial hands the prosecution a very strong advantage over the defense.
I don't think it's a sensible interpretation of the constitution given the massive asymmetry of the situation. The state should be obligated without exception to either provide for a speedy trial or to release the defendant while the state figures its shit out. It should not be a right that can be waived. Meanwhile a defendant who's been arrested should generally be given as much time as he'd like to put together his defense.
Something big is missing from this story. How did face ID in ND pick up a match on a little old grandma in TN, and why would a TN judge hold her without bail for 5 months?
It’s obvious from the one photo they posted of the actual suspect that the lady they arrested is about 20-30 years older than the woman in the bank photo. The woman in the photo is maybe 25-30 years old, this grandma looks like she’s 65-70 (actual age of 50).
Absolutely ridiculous, I hope she wins her civil case.
People will defend this, too, saying “well, she was eventually exonerated, right? So the system works!” Ignoring how she’ll never be fully reimbursed for the time, money, and grief of going through the system.
We also need to question how many people might go through the same process without eventual exoneration, and how much going through this process costs individuals. Being falsely prosecuted usually imparts a permanent black mark in search results about the person (outside of places with sane laws like the EU), as well as causing stress or permanent injury.
Wrongly arrested individuals with mental disabilities have a history of physical abuse in jail potentially to the point of death.
You must have been reading something else, because this article includes all of that information.
> In Tennessee, she was given a court appointed lawyer for the extradition process. To fight the charges, she was told she would have to go to North Dakota.
> Officers from North Dakota did not pick up Lipps from her jail cell in Tennessee until Oct. 30 — 108 days after her arrest. The next day she made her first appearance in a North Dakota courtroom to fight the charges.
> "If the only thing you have is facial recognition, I might want to dig a little deeper," said Jay Greenwood, the lawyer representing Lipps in North Dakota.
I read the article and I don’t really understand… she was held in a jail in Tennessee but the article states they flew her to North Dakota? And somehow she’s a fugitive so that’s why she doesn’t get bail? but she’s a fugitive held in her own state in a holding facility? But then when they release her, she’s in North Dakota? So if some state says you’re a fugitive your home state will just hold you in jail until they come and put you on an airplane? Is that correct?
I think you have the interpretation correct. It seems like any state can say you're a fugitive from their state and now you have even fewer rights. Every day I learn some new fact about "justice" in the United States.
I believe each state has its own extradition process. In this scenario think of them more like the countries in the EU. Apparently Tennessee doesn't adequately protect its residents.
As a Tennessee resident I don't love learning that some dumb fuck state I want nothing to do with can call me a fugitive and my state will hold me prisoner without trial when said dumb fuck state finally decides is ready to deal with me.
Wait - what was the AI tool and how did it have her face to begin with? If small-town police are doing face-matching searches across national databases then nobody is safe because the number of false positives is going to be MASSIVE by sheer number of people being searched every day.
Pretend the tool is 99.999999% specific. If it searches every face in the USA you're still getting about 3 false positives PER SEARCH.
You will never have a criminal AI tool safe enough to apply at a national scale.
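The arithmetic is easy to verify (a back-of-the-envelope sketch; the 99.999999% specificity is this hypothetical, not any real system's measured figure):

    population = 340_000_000      # approximate US population
    specificity = 0.99999999      # hypothetical: one false positive per 100M comparisons
    false_positives = population * (1 - specificity)
    print(false_positives)        # ~3.4 innocent "hits" on every single search

And no real facial recognition system comes anywhere near eight nines of specificity on low-quality surveillance stills.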
It's not an AI error. It's a human error in mis-using AI in this way. Saying it's an AI error is like saying a hole in your drywall is a hammer error.
Unfortunately we'll probably see a trend of people using AI and then blaming AI for cases where they mis-used AI in roles it's not good for or failed to review or monitor the AI.
It's both. It's good to acknowledge that AI is easy to misuse in this manner but it doesn't detract from the fact that the ultimate responsibility lies in those that should be verifying the tool output.
There is far too little skepticism around the magic box that solves all problems which is causing issues like this. It's not the fault of the AI (as if it could be assigned liability) for being misused, but this kind of misuse is far too common right now so scare stories like this are helpful and we should highlight the use of AI in mistakes like this.
That is a huge danger. Legally speaking it's not an issue since misusing a tool doesn't relieve liability (in most circumstances - all the trivial ones at least)... but that's a more significant political issue as evidenced by the Anthropic vs. DoD interactions since the DoD's actions are largely immune to oversight by the justice system.
Of course, that depends on sane non-politicized courts which you may rightfully doubt exist right now - but assuming the system works anywhere near as designed outsourcing a decision to AI wouldn't change liability.
For DC fans: Harvey Dent would similarly not be free from liability for actions taken after a coin flip, even if that coin could be viewed in a certain light to have the power to potentially force or prevent certain actions. An AI box that tells Harvey whether to shoot or spare would be similarly irrelevant to his liability, and a scenario in which Harvey points the gun at someone and then walks away, giving the AI control over the trigger, is essentially no different. Harvey in all cases is responsible for constructing the scenario that (potentially) leads to someone's death and, moreover, even if the gun wasn't fired because the AI decided to spare the person, Harvey would be on the hook for attempted murder.
How many more articles are we going to see with the headline AI facial recognition leads to innocent person jailed? A grandmother no less.
Some tech company illegally scanned people's photos on social media and now is using them with our complicit legal system to randomly put people behind bars. Now I need to worry that any day now, due to a dice roll, I will be sent away to the middle of f'ing nowhere for months or years. Now the government wants to use these same dumb systems to make automated killing machines. FML!
I see a lot of comments trying to attribute blame to the cops, the lawyers, the police chief, the marshals, the tech bros, etc, but it is all of them and all of us that are guilty. We are so complicit in this sick system we live in. We are stuck in a collective action deadlock.
That fear you have in the back of your mind that says next time it might be you is counteracted by the thought "well thank goodness it wasn't me or a loved one," so you don't act. We are all doing this, that is why nothing changes.
The only people able to act these days are the most insane. The narcissistic corrupt power-hungry politician, the psychopathic tech bro billionaire, and the Jacobins are the only ones with the energy to wade through this cesspool, and that is why everything is so dystopian.
It's annoying that both articles are calling this AI error. This was human error, the police did the wrong thing and the people of Fargo will end up paying for this fuckup.
I would argue it was both. No doubt this company was marketing it in a way to make it seem very reliable. And all of the procedural things afterwards made the error so much more damaging.
But imo this is why local police departments should not have access to this kind of tool. It is too powerful, and the statistical interpretation is too complicated for random North Dakota cops to use responsibly. Neither the company nor the PD have an incentive to be careful.
It's not an AI error. The face recognition AI simply said that it's a "potential match", which is correct. It's the humans' job to confirm that a potential match is in fact a match, especially when the suspect is 1,900 kms away.
I live in Fargo. The police chief announced his retirement yesterday. Done by the end of the month. And then today this article comes out. So now we pretty much know why the sudden retirement announcement.
We are rapidly becoming a world where every person is one inscrutable LLM decision from having their life ruined with no recourse.
This type of incident isn't new and is only going to get worse. The problem is our governments are doing absolutely nothing about it. I'll give two examples:
1. Hertz implemented a system where they falsely reported cars as being stolen. People were arrested and went to jail for rental cars that were sitting in the Hertz lot. Hertz ultimately had to pay $168 million in a settlement [1]. That's insufficient. If I, as an ordinary citizen, make a false police report that somebody stole my car I can be criminally charged. And rightly so. People should go to jail for this and it will continue until they do. These fines and settlements are just the cost of doing business; and
2. The UK government contracted Fujitsu to produce a new system for their post offices. That system was allowed to produce criminal charges for fraud that were completely false. People committed suicide over this. This went on for what, a decade or more? It eventually resulted in a parliamentary inquiry and settlements. It's known as the British Post Office scandal [2]. Again, people should go to jail for this.
The choice we as a society face is whether to have automation improve all of our lives by raising everyone's standard of living and allowing us to do less work and less menial work or do we allow automation to further suppress wages so the Epstein class can be slightly more wealthy.
I'm banned from Amazon KDP publishing for life because a fraud detection bot hallucinated that my e-book was plagiarizing my paperback (it didn't realize they're the same book). A bunch of email appeals that I'm pretty sure were also bots went nowhere. With each appeal, the reasons for my ban got progressively more vague, until they didn't mention the plagiarism part at all, just something nonsensical about creating a negative customer experience. Evil company.
What’s remarkable to me, beyond the total incompetence and stupidity of all the police people involved, is how incredibly aggressive the intervention was.
This is a bank fraud case, for god's sake, not an armed robbery. I don't know the scale of it, but still, no one said she was a danger to anyone. She was a suspect, not a convict, and she was held at gunpoint while babysitting young children. What in the fucking world?
The US is so fucked up lately. People should chill the fuck out.
Completely infuriating, but more of a commentary on the sad state of incompetent power-hungry law enforcement with tools they don't know how to use than the tools themselves.
Though, the question remains: are the tools built in such a way as to deceive the user into a false sense of trust or certainty?
But it’s glaringly obvious that if you build tools like this and give them to the US police this is the outcome you will get. The toolmakers deserve blame too.
> are the tools built in such a way as to deceive the user into a false sense of trust or certainty?

_Some_ of the blame lies on the UX here. It must.
Are AI code assist tools built in such a way as to deceive the user into a false sense of trust or certainty? Very much so (even if that isn't a primary objective).
Does any part of the blame lie on the UX if a dev submits a bad change? No, none.
You are ultimately, solely responsible for your work output, regardless of which tool you choose to use. If using your tool wrong means you make someone homeless, car-less, and also you kill their dog, then you should be a lot more cautious and perform a lot more verification than the average senior engineer.
I agree with all that. Maybe the word isn't "blame," then. Surely there must be some code, perhaps moral or ethical, but ideally more rigorously enforceable, which ought to prevent the development of intentionally deceiving tools. Sure you could say this about all software, but that which can cause actual physical harm ought to be held to a higher standard.
Yes, unfortunately technology is advancing faster than the average human brain evolves more neurons, so it will only become less comprehensible to the average person.
That's setting aside the tendency for police to hire from the left side of the bell curve to avoid independent thinkers that might question authority, refuse to do bad shit, etc.
> they don't know how to use than the tools themselves.
No, the tools work perfectly as they were designed to work. The problem is that the tools are flawed.
Ultimately, every single one of these decisions should be approved by a human, who should be held responsible for the fuck-up no matter what the consequences are.
> _Some_ of the blame lies on the UX here. It must.
No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
>> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.
> No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
The person who approved the tools might've understood, but that doesn't mean the user understands. _Some_ of the reason why the user doesn't understand the shortcomings of the tool might be because of misleading UX.
I don’t know what tool they used, but it was very likely not an LLM. They probably have some database of drivers’ licenses and they ran a similarity search against the surveillance footage. This poor lady happened to be the top match.
Even if it also output a score, that score depends on how the model was trained. And the cops might ignore it anyways.
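If that guess is right, the pipeline is essentially a nearest-neighbor search over face embeddings, and its crucial property is that it always returns somebody: the best match in the database, however weak in absolute terms. A toy sketch (the database size, embedding dimension, and random vectors are all made up):

    import numpy as np

    rng = np.random.default_rng(0)
    db = rng.normal(size=(100_000, 128))   # 100k hypothetical license-photo embeddings
    db /= np.linalg.norm(db, axis=1, keepdims=True)

    probe = rng.normal(size=128)           # embedding of the surveillance still
    probe /= np.linalg.norm(probe)

    sims = db @ probe                      # cosine similarity against every record
    best = int(np.argmax(sims))
    # argmax always hands you a name, even when the best score is pure noise
    print(best, round(float(sims[best]), 3))

Even with pure random noise, someone comes out on top; nothing in the math distinguishes "top of the list" from "actually the same person."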
There's a lot of talk about how the cops just misused the tool and it's their fault, not the AI's.
That's missing the point here. The point is that these tools provide crazy leverage, and that can be good or bad. If used carefully they can definitely catch criminals faster, but when misused (or abused) they can let the authorities unjustly ruin lives faster.
The question isn't whether AI is perfect or not. It's whether you trust the authorities with it, to use and abuse as they can. Think about the average cop. Think about the way Trump treats people. Think about the way Israel keeps an ongoing genocide going. Think about the cases of police brutality that happen in the US, the cases of racial profiling. Think about ICE and their behavior, going around kidnapping and killing people. Do you want these people to have more leverage?
I hate this headline (not blaming submitter). Police incompetence and negligence jailed her for months and left her stranded in a North Dakota winter. The AI is no more responsible than the cars and airplanes they used.
Edit: this is in reference to the original headline "AI error jails innocent grandmother for months in North Dakota fraud case" not the revised title that it was changed to.
If it didn't erase accountability, how would it create any value?
Many people are treating this as a matter of philosophy, which it isn't.
At a primitive, physiological level if you delegate to AI and most of the time you don't get in trouble for it, the resulting relationship you have with the AI could only be called "trust".
If you're expected to be 40% more productive at your job, your employer is making it crystal clear that you will trust the AI or you will be fired. Even if nobody ever said it, the sales pitch is that AI does the work and people are mostly there to be their servants whose role is to keep them fed with decisions we want made but don't want to be responsible for making.
The value it creates is obvious: finding a needle in a haystack. Is accountability laundering another potential benefit? Sure. Can we stop pretending we don't understand the other side of it? Cynicism is nice and all, but after a certain point it wraps around and makes us look naive.
Your picking apart the words doesn't matter if police are more incompetent with AI than without it. AI being the catalyst to a worse society is a more interesting and worthwhile topic than whether "AI is responsible" is the right way to phrase it.
If you make the AI software, then your software malfunctioned.
If the laser printer screws up a page in the middle of the document, and the user doesn't catch it and includes it in the board of directors binder, the laser printer still malfunctioned.
As much as we try to reward the first person to submit the story, we also have to give credit to the person who submits the best URL and the best version of the story. It looks like your submission was killed due to being an archive.is link, which is not allowed as a URL for a submission (we need the canonical URL submitted to prevent people from using archive services or shorteners to mask domains that may be malicious).
Sometimes it's just a matter of luck as to who gets the submission right and gets the karma. Sorry it wasn't you this time, but keep submitting good stuff and you'll get your turn.
Because it has an updating-feed-like structure, in which new items can appear.
Knowing that there are (N) new items is so useful (to some people), that as far back as the 1990s, we developed technology called "RSS" to give you this superpower over a website that doesn't provide anything of the sort. One that simply updates with new stuff when you hit refresh, with no UI to indicate what is new/changed.
https://archive.ph/2026.03.12-183903/https://www.grandforksh...
> According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.
Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."
Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").
I'm sorry, but this is a piss-poor excuse. When I use Claude Code and it produces broken features, I'm 100% responsible.
Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.
If the point is "cops can't be trusted". Why do they have GUNS?! AI is the least of your problems.
I feel like I'm going crazy with this narrative.
> I feel like I'm going crazy with this narrative.
We're only getting warmed up. There are programmers on HN that will take the output of their favorite AI, paste it and run it. And we're supposed to be the ones that know better.
What do you think an ordinary person is going to do in the presence of something that they can not relate to anything else except for an oracle, assuming they know the term? You put anything in there and out pops this extremely polished looking document, something that looks better than whatever you would put together yourself with a bunch of information on it that contains all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those that are not.
They're going to fall for it, without a second thought.
And they're going to draw consequences from it that you thought could use a little skepticism. Too late now.
When you foster a culture of impunity and passing the buck, don't be surprised when they pass the buck to the inscrutable black box they bought.
You might even argue that's the purpose of the inscrutable black box.
AI is the new "it's policy."
The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology?
Because they are supposed to possess minimum levels of intelligence found in homo sapiens, which includes not believing anything a salesperson says.
Also, their whole job is dealing with people who constantly lie to them.
Some police departments seem to actively reject candidates that have higher scores on IQ tests. Not that I think IQ test scores and actual intelligence are related but it clearly shows their intended target candidate group.
https://abcnews.com/US/court-oks-barring-high-iqs-cops/story...
There are two things occurring here.
Police get raises and recognition for closing cases. In general they don't care if you're guilty or not, that's someone else's problem. Same with the detective, same with the DA. The more cases they close, the 'tougher they are on crime'.
The next thing occurring is
https://en.wikipedia.org/wiki/Computer_says_no
Similarly: https://en.wikipedia.org/wiki/Automation_bias
You're over-selling the minimum level of intelligence in homo sapiens.
What you're stating is your wishful thinking. Don't get me wrong. I'd also like what you say to be true. It very much is not. Quite the opposite, which is why salespeople "work".
The amount of AI bullshit Senior+ level developers just paste to me as truth is astonishing.
The AI is the authority with so much knowledge that we hear a reassuring "Please continue" [0].
https://en.wikipedia.org/wiki/Milgram_experiment
As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: Therac-25), I would hope that the conversation is about structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm.
All accountability need not be punitive; we can certainly talk about systemic guardrails. What I find hard to believe is someone claiming that the Chief of Police saying "We are not going to talk about that today" is not the biggest scandal, but the AI is.
> someone claiming that the Chief of Police saying "We are not going to talk about that today" is not the biggest scandal, but the AI is.
Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened.
Yea, but this feels like when a Waymo ran over a cat and a human driver ran over a toddler, and both got the same level of coverage in the media (actually the cat got more follow-up coverage). And I'm supposed to believe both issues are equally important.
No. That's gaslighting, and totally misplaced political activation.
What do you propose we do in the latter situation? The news isn't the value of the life that was (presumably) lost. The news is the circumstances that made that loss possible. The human driver was maybe careless, or maybe didn't look. The child safety classes I took emphasized over and over again to look around your car and yard before backing your car out. This is a problem with a known solution that unfortunately still happens despite the best efforts to prevent it.
Waymo hitting a cat is obviously less tragic, but if it can hit a cat, what else can it hit? A toddler? A human? The wall of your kitchen? This is a problem that has no known solution; furthermore, it's a problem that the engineers at Waymo don't seem overly keen on solving quickly.
The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second guessing whatever it spits out.
You are right, IMO, to question why North Dakota police were able to obtain this Tennessean woman in the first place; you'd think something like that should require far more substantial evidence than facial recognition.
But, then what good is facial recognition for? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard, if a computer thinks you look like a suspect you can be harassed by police in a state you’ve never even been in?
I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.
Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.
It is absolutely not reasonable to use low-quality photos to decide someone halfway across the country with no history of even leaving their local area is 'a suspect'.
You wouldn't know they had no history of leaving their local area unless you interviewed them.
You are exactly correct. Cops cannot be trusted. We spent a lot of time pointing that out in 2020. AI is the least of our problems with policing.
Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020.
You’re on the right track here but I don’t think it should be hand-waved away as “the least of your problems” - it’s yet another weapon that police in the USA can use against the population with impunity. They’re going to have to reckon with all of this in the coming years - cops having guns and armored cars, “qualified immunity”, the “stop resisting” workaround for brutality and now this AI
You can hold someone responsible only after they've actually fucked up. And with the way things move in the criminal justice system, that can take months to discover. Holding them responsible doesn't really fix anything, it's purely reactive.
But it's not totally irrelevant in this story.
Cops are already susceptible to confirmation bias, and for "efficiencies" they are delegating part of their job to apparently magical tools that will only increase their confirmation bias. And because it is for efficiency you can bet they won't be given extra time to validate the results.
What or who is at fault isn't either/or, it's a bunch of compounding factors.
You’re going crazy because up until this exact moment you’ve never had to confront the reality that these tools, placed into the hands of the common man, are viewed as authoritative and lack any accountability or consequence for misuse.
For anyone who has been victimized by law enforcement or governments before, we’ve been warning about this shit for decades. About the lack of consequence for police brutality. The lack of consequence for LPR abuse. The lack of consequence for facial recognition failures and AI mismatches.
You need to understand that by using these systems correctly and holding yourself accountable, you are in the minority. Most people do not think that critically, and are all too happy to finger the computer when things go badly.
And until you accept that, and work to actually hold folks accountable instead of deflecting blame away from the tool, then this won’t actually change.
Your answer presumes we cannot hold people accountable. I think that is incorrect.
Do you mean hypothetically could society hold law enforcement personnel accountable for mistakes, bad judgement, flagrant criminal conduct, horrendous abuse of any and everyone? Certainly, a large scale and comprehensive restructuring of America’s law enforcement and prosecutorial system is legally possible.
However, I hold to the opinion that if you are discussing actual reality, based on decades (if not the entire period post civil war, for near certainty) of historical examples and the current “majority” position of the US electorate: there is a nearly unqualified NO. We cannot, or will not, hold law enforcement accountable for even intentional, planned, and malicious conduct in a vast majority of cases. There is practically no accountability at all, and that’s just for thoroughly proven intentional conduct. Bad judgement, alleged mistakes, etc are even less able to result in any action.
The reality of the legislation and precedent ensure it. It’s not a bug, it’s a feature.
It's called qualified immunity. Many support its repeal. I hope you join them, and convey the same to your local representatives and candidates. Until it is reformed, few if any officers or administrators of criminal justice in the United States will ever feel any type of accountability.
Short of video evidence of a blatant gun-to-the-back-of-the-head-style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. Criminal charges against officers are exceedingly rare. She should be able to sue this detective directly. Of course she can sue the government too, and should. But without any personal consequences for the people carrying out these acts, taxpayers will continue to bail out these practices without ever noticing. Your own government should not be a shield for a police officer who has violated you or your neighbors.
> Many support its repeal.
There's nothing to repeal. Qualified immunity is a doctrine that the judicial branch made up out of thin air, with no legislative backing.
But agreed, we need legislatures to write laws that expressly hold police accountable, and declare that they are not shielded from liability when things go wrong due to their own failures and negligence.
Not that it changes your point, but, um actually:
While the origins of qualified immunity are judicial, some states loved the idea so much they went and made it statutory too. Louisiana's 2024 bill explicitly removes negligence as an exception (which is a valid method to circumvent qualified immunity based on jurisprudence at the federal and most state levels). Louisiana requires intentional violations or criminal actions to even be able to bring a claim.
> Short of video evidence of a blatant gun-to-the-back-of-the-head-style homicide, qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice.
And frequently not even then.
I mean, this is the USA we're talking about. Cops are given huge authority over everyone else, with poor accountability. AI just lets them pretend to be even less accountable. And by "pretend" I of course mean "get away with it".
When are cops ever treated the same way as the rest of us?
Well, in most cases I would prefer a cop's word to outweigh the word of an average Joe.
You should tell that to Angela Lipps, I'm sure she told every cop she came in contact with she had never been to Fargo. Cops have a responsibility to do their job, part of that job is listening and relying on proof. ALL those cops were either too lazy or were afraid of their superiors. This is unacceptable for the amount of power and information they have access to. We should either de-fund the police system or reform the hell out of it. BTW, where was her state representative during this fiasco?!?
The belief by a juror that law enforcement personnel are inherently more credible, especially phrased as a belief that applies to law enforcement personnel as a generic group, is a well-established basis for a challenge for cause leading to exclusion of that person from being a juror. The US jury system is built explicitly on excluding these types of belief in juries in order to ensure fairness, impartiality, and individual and case/witness specificity of "triers-of-fact".
I could understand someone who disagrees with it, but your position would be antithetical to current and historical thought on what defines a fair jury.
Do you think police are inherently more honest than everybody else? Why would you think that?
Why should having that particular job give you that privilege? All should be equal before the law.
It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it.
It's also why people start parroting the phrase "the purpose of a system is what it does". Look at where we are right now: a precipice before this becomes widely used in all forms of policing. We still have a chance to police the police's use of the AI.
The purpose of using AI to identify suspects in criminal cases is to ease the burden of manual searching for a suspect (or insert whatever the purpose of statement you want). Ok, but we're getting false positives that are damaging people's lives already in the early stages. And I don't want to hear "trust me bro, it will get more accurate" as an excuse to not regulate it.
At a minimum, we should enshrine the right to appeal AI and have limits on how it can be used for probable cause.
This isn't even the only recent case of this happening. There was another case of mistaken identity due to AI. [0] Sure 4 hours isn't the same as 5 months, but still this guy wanted to show multiple forms of ID to prove who he was! The bodycam footage was posted a few months back but never got traction here.
Like if the police officer can't read numbers, they can't do breathalyzer tests on people. If the AI can't be used responsibly, then it can't be used at all.
[0]: https://www.youtube.com/watch?v=lPUBXN2Fd_E
So what? There were false arrests and convictions made by misuse of line-ups, DNA, eye-witnesses, photos, bloodstains, fingerprints, etc. since forever. You must also blame all those other technologies, so what do you think the police should use to find suspects? In your view, the more help police have, the worse a job they'll do. Is that actually the trend?
With all other proof you mentioned, there was always a human putting his signature.
Now that they can blame "AI" no specific officer(s) will take the blame, ever. If no one is responsible there will be many more false positives.
And false positives destroy lives
> With all other proof you mentioned, there was always a human putting his signature.
There was a human doing that in this case; AI doesn't initiate charges. "In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color."
So what???
This woman lost most of her material possessions, was terrorised by "goons"... The police do this stuff regularly, as black people, immigrants, "white trash" etcetera know well. Another opportunity, presented BY AI models for more routine police oppression
As the wise singer said: "Fuck the police!"
Exactly, it's the police's fault, as well as the wider system they operate in that enables that kind of abuse, and they do it anyway even with out AI.
AI is, in this case, a tool enabling it: trawling large databases with AI lets police find people with a degree of similarity to a suspect that would, under what was until fairly recently the norm for police work, reasonably have constituted probable cause, because that work relied on proximity and connections to the crime. The understanding of probable cause, and what is necessary for it, needs to adapt to an investigative process that uses large databases unconnected with the events and locality of the crime.
You're right that they often do a lot of harm.
The point that you're missing is that, in a system where such abuses are possible, many of us really don't want one more tool in their box for them to fuck us with.
Like, they already prove themselves incompetent- giving the power to track anyone in the US via a distributed ALPR system just makes them more dangerous. Giving them all these "AI" based tools does the same.
This particular "AI bogeyman" isn't just AI; it's cops with AI and in particular cops with facial recognition tools, dragnet LPR surveillance tools, and all this other new technology that essentially picks somebody's name out of a hat to have their life temporarily (or [semi-]permanently) ruined by shithead cops who won't ever face any real accountability.
This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools.
Yes! This is about why mass surveillance and dragnets and the like are horrible. These all suffer from people not being able to understand the base rate fallacy (https://en.wikipedia.org/wiki/Base_rate_fallacy)
Even if AI facial recognition gets really really good, and is 99.999% accurate, if you use it in this way you are going to arrest more innocent people than guilty people.
If you find a suspect, who has a lot of evidence pointing to them being the criminal and you run a test that is 99.999% accurate and it tells you they are guilty, they are probably guilty.
But if you take that same test and run it against the entire population of the country, it is going to find about 3,500 people that match with "99.999% certainty." That gives you roughly a 0.03% chance that any given match is guilty.
People don't think like this, though, so they think the person must be guilty.
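The numbers above roughly check out; here's the base-rate calculation spelled out (assuming ~350M people scanned and exactly one actual culprit in the database, both simplifying assumptions):

    population = 350_000_000
    accuracy = 0.99999                    # the hypothetical 99.999% accurate test
    false_matches = population * (1 - accuracy)   # ~3,500 innocent matches
    true_matches = 1                      # assume the culprit is even in the database

    p_guilty = true_matches / (true_matches + false_matches)
    print(f"{false_matches:,.0f} false matches; "
          f"P(guilty | match) = {p_guilty:.2%}")  # ~0.03%

So the same 99.999% tool that is near-certain evidence against an already-identified suspect is nearly worthless as a dragnet.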
It’s also cops Making the Numbers Go Up by marking down a case file as having progressed because someone is in custody. Which isn’t about justice.
They don't seem to give a single iota of a fuck about that when a private regular person has their money stolen or their car totaled by hit and run driver. Finding some innocent person to arrest would indicate they are at least pretending to give a fuck, yet they seem to only be bothered to even keep up appearances when it is the bank being robbed.
mate Capitalism 101
Sorry, I disagree. This is an example of the corruption inside the American legal system. The cops are at the level of us regulars, and their judgement and actions seem to have no supervision or accountability.
It's not just the shithead cops, it's the voters. All the "Blue Lives Matter", "thin blue line", "back the blue" propaganda works towards giving police infinite powers with zero accountability. This is what voters want and they've said so loudly over and over again.
There’s nothing wrong with your comment per se, but it’s almost as if you didn’t even read the comment you’re responding to.
Let me help you out with this comprehension issue. The point of my comment is that I disagree with the apparent premise of the comment I replied to, which is that "AI" is some generic investigative tool that we can neatly snip out of the picture to blame this incident on human factors at the individual level ("the professional human-in-the-loop who shirked all responsibility"). Said comment also implies that people are fixating on the AI aspect of this issue while ignoring the human factors, which IMO is a strawman. To me, the existence of AI in its current incarnations and the ways in which law enforcement will inevitably abuse it are, together, inseparably, the problem. AI (in the most general sense) opens up entire new dimensions for potential abuse.
As a concrete example:
> And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before doing even interviewing her or doing any due diligence.
Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. So saying that it has "nothing to do with AI" is totally ridiculous.
> Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all.
How do you arrive at that conclusion? Because it happened, and it wasn't an AI overseeing (the lack of) due process. The police identifying suspects is part of their job. So are arrest warrants and all the rest of it. I honestly don't see what AI had to do with anything here. All I see is a gaping systemic issue that could have happened regardless of AI if the wrong person got the wrong idea or had a personal vendetta.
Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list. We blame the systemic practices and legal apparatus that permitted it all to happen in the first place.
You might as well blame the SUV manufacturer because without vehicles the police wouldn't hav been able to drive over to make the arrest, right?
> How do you arrive at that conclusion?
Because it's beyond obvious? How would this woman have ended up in jail if she hadn't been misidentified by the facial recognition software in use by the Fargo police? She lives 3 states over; would be a hell of a coincidence if some other avenue of investigation led them to her.
> I honestly don't see what AI had to do with anything here.
You honestly don't see what facial recognition software had to do with a woman being misidentified by facial recognition software?
> Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list.
I actually am completely willing to blame any entity that supplies ICE with the names of people it can reasonably assume will be targeted for "enforcement action" due to said entity representing said names as being legitimate targets for said enforcement action, without taking reasonable care to ensure said representation is correct in each and every case.
What you don't seem to understand is that these abuses of law enforcement authority are predicated on at least an appearance of legitimacy, which can be provided by (e.g.) an app with (presumably) a very official looking logo that agents can point at somebody to get a 'CITIZEN' or 'NOT CITIZEN' classification. It is upon this kind of basis that they perform illegal arrests. All parties—the app vendor and ICE, as well as the people who are meant to be overseeing ICE and providing accountability—are complicit enablers in these crimes. To absolve the vendors who provide the software knowing full well what it will be used for, what its limitations are, and how unlikely it is that ICE personnel will understand those limitations and work around them to keep everything legal, is totally absurd.
It isn't obvious, no. If I drop a hammer on my foot and break my toe I can't then blame the hardware store or the manufacturer. If the store didn't carry hammers I wouldn't have been able to purchase it, I think to myself. Then I couldn't possibly have dropped it on my foot, thus my toe wouldn't be broken right now. It's a specious line of reasoning.
It doesn't matter in the slightest by what means she was selected to "win" this particular lottery. The tool rolling the dice isn't to blame. Tools (and people!) will occasionally return spurious results. Any system needs to be set up to deal with that.
So no, I honestly don't see what facial recognition software has to do with gross negligence and process failure on the part of multiple government agencies.
> without taking reasonable care to ensure said representation is correct in each and every case.
Only if that was part of the contract. Was the product delivered according to specification or not?
What if ICE used FOSS tools to put together the list themselves? Are the tools still to blame? That would obviously be absurd.
The only way the provider (never the tool) could be at fault would be something such as willful negligence or knowingly and intentionally attempting to manipulate the user's actions to some end.
What you don't seem to understand is that human negligence can't be foisted off on tools. Of course an abuser will try to play his actions off as legitimate. That isn't the fault of the tool, it's the fault of the abuser. It isn't up to an app to determine the legitimacy of LEO agent actions. Neither is it the responsibility of an arbitrary, fungible government contractor to oversee ICE.
I think you're confusing the morality of participating in a broader ecosystem with moral culpability for the process failure associated with a specific event. You can advance a reasonable argument that AI companies that choose to do business with ICE are making an at least moderately immoral decision. However that doesn't place them at fault for the specific process failures of any particular event that happens.
If you don't agree that facial recognition software is involved in a case of a woman being misidentified by facial recognition software then there is no point in me spending any more time/effort in conversation with you. Goodbye.
You seem to be intentionally ignoring the point I made. I never disputed that facial recognition software was used (ie involved).
The facial recognition tool didn't arrest her. It holds no authority, has no will of its own, and does not possess a corporeal form with which to enact change in the world. The only parties that could possibly be at fault here are various government agents who clearly acted with negligence, failing to uphold their duty to the law and the people.
If you're unable to rebut my point then perhaps you should consider that you might be in the wrong? If you're unwilling to entertain such a possibility then I wonder why you're posting here to begin with. What is your goal?
Like I said, there wasn’t anything wrong with your comment. It just didn’t seem to directly address the parent comment. This does, thanks.
Seems like a direct response to me.
>> How is this the fault of AI?
> This particular "AI bogeyman" isn't just AI; it's cops with AI
You can’t separate the thing from how it will be used. It’s like arguing that cars on their own aren’t particularly dangerous, but the point of buying a car is to use it thus risking the general public.
But you can in fact argue exactly that. If (arbitrary example) pedestrians are being killed due to poor road engineering practices it isn't reasonable to point at cars and say "see those are the root problem" when in fact it's due to a willful lack of sidewalks or marked crossings or whatever. Being adjacent to something bad doesn't equate to being the root cause.
History shows the timeline of dependence here. Before the introduction of cars, “poor road engineering practices” wouldn’t result in those deaths. So clearly it’s cars that are necessitating sidewalks, etc.
Same deal here: if something "becomes a problem" because of the introduction of AI, it's AI that is the root cause of the resulting issues. Many people are tempted to argue that flawed humans can't implement the perfect system that is Anarchy, Communism, Recycling programs, or whatever, but treating systems as needing to operate in the real world is productive where complaining about humans isn't.
Well, I thought it was obvious that I was referring to roads constructed relatively recently. If cars necessitate sidewalks and the city chooses to cut costs by not putting those in, that isn't the fault of automobile designers or manufacturers or dealers or private owners or whoever.
To your example, technology changes and that necessitates infrastructure changing. That doesn't mean that fault for mishaps in the meantime can be attributed to the new technology. A user operating the new technology in an obviously unsafe manner is solely at fault for his own negligence.
The safest street designs still result in automobile fatalities. You can at best mitigate the issue with better street designs but not address the underlying issue.
Failing to acknowledge cars as the root cause may be comforting, but it blinds you to viable solutions.
Indoor shopping malls, for example, solve many of the issues with cars by forcing people to move around on foot in a little island surrounded by a sea of very low density parking. They aren't perfect solutions, but they still saved a lot of lives and time.
Saying people are misusing a new technology is just another way of saying that technology is flawed. This doesn’t mean you can’t utilize it, but pretending flaws don’t exist has no value.
At this point I think that AI will perform human duties better than humans. So probably it's better to let AI autonomously jail people, of course with all the necessary procedures as required by law.
Devils advocate: what if a facial recognition system with a large enough database can always find an unrelated/innocent person that looks similar enough to convince the human?
Reminds me of a case that just popped up in my neck of the woods.
Man gets pulled over on an expired plate. They search based on this fact, find a pill bottle (for Irritable Bowel Syndrome) and magically find he’s trafficking cocaine and fentanyl.
Months later a lab test exonerates the poor guy.
https://www.wyff4.com/article/deputies-falsely-identify-ibs-...
I've always maintained one of the worst things that can happen to you is sitting in court before a jury of your peers, because most can't comprehend the meaning of the law outside of their feelings. NOW the worst thing is having yourself in the hands of cops who just don't give a damn or became a cop for the use of power.
> How is this the fault of AI?
AI is being used by bureaucrats and enforcers to justify lazy, harmful conclusions. You don't live in the real world if you think "just punish the bureaucrats, don't make it about AI" is going to remotely rectify this toxic feedback loop and ecosystem.
No, we definitely should punish bureaucrats and enforcers who act negligently. If someone in a position of authority flagrantly fails to do his job and it directly harms someone he should be held accountable. That would provide a strong incentive for future actors to take their responsibilities seriously.
If an engineer signs off on an obviously faulty building plan and people die as a result we hold him accountable. This is no different.
It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.
Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it.
Computers are wildly dangerous, not because of anything innate but because of how humans act around them.
> It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.
This is literally the plot of most of those books and the way they differ is in how everything falls apart. In some of them the AI supplants us entirely and kills us all. In others it gets taught to kill us all. In others it gets really good at giving us what we ask for until everything falls apart. But it’s taken as a given that unless we change something innate in our culture AI will be our downfall.
> If a person had made this facial match, another human would have relentlessly jeered him.
The glaringly obvious problem here is that our justice system should not be constructed in such a way so as to be reliant on someone's coworker shaming him. That is not a sensible check against a systemic failure. We're supposed to have due process. If someone skips or otherwise subverts due process the justifications don't matter. The root issue is that due process was skipped. Why was that even possible to begin with?
Automation has a strong tendency to degrade diligence.
I see this all the time in operational / production settings. Having a loop with automation reviewed and approved by a human degrades very fast. I only approve automation that has a quick path to unsupervised operation.
> How is this the fault of AI?
The false positive rate combined with scanning millions of pictures might make the chance of arresting the wrong person really high.
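To put rough numbers on that (all figures below are made-up assumptions for illustration; vendors rarely publish real rates): multiply even an excellent per-comparison false-positive rate by a large gallery and by the volume of searches run nationwide, and you get a steady stream of innocent "hits".

    # Illustrative assumptions, not measurements of any real system.
    fpr = 1e-7                  # 1 false match per 10M comparisons
    gallery_size = 50_000_000   # photos searched per query
    searches_per_day = 1_000    # queries run by agencies nationwide

    false_hits = fpr * gallery_size * searches_per_day
    print(round(false_hits))    # 5000 innocent "matches" per day

Every one of those hits then depends entirely on a human doing real corroboration before anyone is arrested.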
100% 100% 100% humanity is so obsessed with ai that we're losing...our humanity. "blame the mindless, soulless robots! how could we have possibly known that they need to be supervised?! aren't they basically just humans that don't need to rest or eat?"
> How is this the fault of AI
It isn't, the article doesn't claim (or even imply) that it is "the fault" of AI, only that AI was part of the chain of events, and nothing is the fault of AI until AI is sufficiently advanced to constitute a moral actor. “At the source of every error which is blamed on the computer, you will find at least two human errors, one of which is the error of blaming it on the computer” remains true.
OTOH, it is potentially the fault of the reliance human actors put on an AI determination.
It's the fault of the tool because our society treats the tool's judgments as superior to humans' and trusts them completely as a means of deflecting accountability - something any and every minority group has been warning about for fucking decades.
The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does).
If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of office of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like.
>rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skill
The magical past where people had critical thinking skills never existed. We put a lot of trust in tools because people are un-fucking-reliable. Hence why in most cases actual physical evidence does a far better job than witness testimony.
This said, people are lazy. It is one of our greatest and worst traits. When we are allowed to be lazy, especially with tools, bad things happen.
Study after study has shown a very strong and consistent bias of humans to trust "automated systems" in the face of any ambiguity.
> Instead of scapegoating an AI bogeyman
One big reason for AI adoption everywhere is that you can use it as a scapegoat
I think the biggest problem is that the popular narratives about AI enable this kind of accountability sink.
I think it's more nuanced; it is one error in a Tragedy of Errors.
This was not a series of errors, this is (as a statistical inference) the system working as designed. This is not uncommon, it is not unplanned. The extradition of suspects from State to State is designed legislatively to function this way.
I also think there is more nuance to this situation than AI bad // Human Bad :: choose one. But while a tragedy, the ‘correct’ functioning of a system that produces tragedy doesn’t make that functioning an error.
Someone from the government should be in jail for this kind of oversight.
I think the taxpayers owe this lady at least a couple million if not more for the inconvenience they chose to put her through.
I agree, but our system doesn’t value things that way. Texas, one of the highest-paying states for cases where intentional, fraudulent, or grossly negligent actions result in wrongful incarceration, pays $80,000 per year a person is locked up. But the caveat is that the time only starts counting after you are sentenced, so it wouldn’t even apply in TFA’s case.
It's the only way this stops happening.
> How is this the fault of AI?
It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0]
> There's a reason why we don't let AI autonomously jail people.
Yes we do. [1]
> and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Her guilt was assessed. That's why she had no bail. The system assessed it incorrectly, but the error is more complicated than your reaction implies.
[0]: https://thisisreno.com/2026/03/lawsuit-reno-police-ai-polici...
[1]: https://projects.tampabay.com/projects/2020/investigations/p...
To clarify one point, her not having bail is a function of the way interstate ‘fugitive’ warrants are designed. The Court in Tennessee had no ability to set bail, and until she entered the physical custody of North Dakota she could not have bail set.
Also, her guilt was not assessed in any common meaning of the term. The requirement for holding a person in custody, with or without bail, is probable cause. The only thing assessed was whether law enforcement presented a statement to a Judge that could be believed in the light most favorable to the prosecution.
computer said yes
lgtm
Where does it say that AI is blamed?
It says she was misidentified using facial recognition.
That’s exactly what happened.
> How is this the fault of AI?
Humans being human. Getting lazy, being incompetent, getting incompetent with AI use, or simply being biased. The wrongfully arrested person doesn't even resemble the perpetrator.
Maybe if they were held accountable for these actions, they would act responsibly?
> How is this the fault of AI?
It is not. It is the fault of the police.
AI models are tools. When mistakes are made, they are the mistakes of the operator of said tool.
This AI model was badly misused, and this woman should get a metric shit tonne of compensation, but it was the fault of the police.
I hope you take this as a teaching/learning opportunity
There's no way this isn't a slam dunk case to sue the piss out of the Fargo Police, probably the US Marshals and maybe other orgs. The woman in the surveillance photo clearly looks way younger, among the many other obvious signs this woman didn't do it. I hope she wrings at least several million dollars out of the government.
It literally doesn't matter -- you're focused on the wrong thing. She could be that woman's exact twin and it wouldn't matter. Spending six months in jail and losing your house, your car, and your dog with the flimsiest of evidence is ridiculous.
'You can beat the rap but not the ride' has been a pop culture reference in the US since the 1940s. Our society wants/supports the ability for this to be inflicted at police/court whim on people.
A lawsuit is exactly what matters. They learn only the hard way, and no other way. If you want them to not be ridiculous, a lawsuit with large punitive damages is the only practical way to get there.
I disagree. The city or state gets sued and they pay the result from the taxpayer funds and literally nobody learns anything, especially not the hard way. Everyone is so completely divorced, and in some cases immune, from consequences that this will change nothing.
After a couple million dollar lawsuits the city or state will learn to be more careful with their methods. It's the taxpayer funds, but it's not an endless supply of money. Cities and states have their own budgets.
More than $1.5 billion has been spent to settle claims of police misconduct involving thousands of officers repeatedly accused of wrongdoing.
https://www.washingtonpost.com/investigations/interactive/20...
Good point! It shows that the settlements are far too low and that the victims should get a lot more.
If a few cities/states were to default due to debts coming from such cases, the others would start to take notice...
Do we have any evidence that these lawsuits have no effect on the number of wrongdoings?
Did you see the word "repeatedly"?
> After a couple million dollar lawsuits the city or state will learn to be more careful with their methods
You'd think, but watching how many millions my local police department and city paid out every single year leads me to believe they just don't care.
How many, exactly? Anyone can wave vagueness around. Do you have numbers or no?
I haven't lived there in years, nor do I have exact numbers, but they make national news enough for the same problem nearly every year. I'll drop you some links if you care.
1 - 38 million between 2017 and 2022.
2 - 29 million in 2023.
3 - 12 million in settlements in 2025.
Dare I keep going?
[1]https://www.wdrb.com/in-depth/louisville-payouts-for-police-...
[2]https://www.aol.com/louisville-paid-least-29m-settle-1030450...
[3]https://www.courier-journal.com/story/news/local/2026/02/04/...
The region's GDP is 100 billion dollars, so these are tiny amounts, although they may seem large to some.
And the first article you link proves that people are already worried about it. You think they can safely 10x that?
> The region's GDP is 100 billion dollars, so these are tiny amounts, although they may seem large to some.
It's a fair point and easy to handwave away "it's only $100 per resident." But it's a lot of money still. And yet that city is shutting down schools and selling off school properties to make budget this year. I bet they'd love to have those wasted millions.
> You think they can safely 10x that?
I have no idea the reason for this question. The OP said cities learn after a couple million dollar suits. I'm showing that no, they do not. If anything suits are increasing.
There’s a heck of a lot of individual cities and states. Their ability to remain solvent is greater than your ability to stay out of jail.
The cities and states make laws to better govern police behavior. You can look back on a century of history of this.
With all the lovely qualified immunity doctrine? That's wishful thinking.
That may protect them personally, but not the city and the department itself from being sued.
Nope.
https://abovethelaw.com/2016/02/criminally-yours-indicting-a...
You can be arrested, indicted, and held in jail pretrial, and there is literally no recourse. There are many other ways jail can happen without due process. Where I live:
* Civil contempt. Absolute immunity. No due process. Record is about 16 years. Having a bad day? Judge can toss you in jail.
* "Dangerous." Half a year. No due process. He-said she-said.
* "Insane." Psychiatric hold. Three days. Due process on paper, not in practice. Police in my town can and do use this if they don't like you.
Absolutely no recourse. You come out with a gap in income, employment, and, if you missed rent/mortgage, no home. Landlords will simply throw your stuff away too.
You're also basically damned if things do move forward, since from jail, you have no access to evidence, to internet (for legal research), and no reasonable way to recruit a lawyer (and, for most people, pay for one).
Can happen to anyone. Less common if you're rich and can afford a good lawyer, but far from uncommon.
I don't know what you're responding to, but I don't think it's my comment.
Qualified immunity protects individuals, not departments, from liability.
The particular thread (in this thread) that I was responding to:
>> I hope she wrings at least several million dollars out of the government.
> With all the lovely qualified immunity doctrine? That's wishful thinking.
I was responding to the claim that qualified immunity protected the government, it does not.
The GP seems to be suggesting that there's no recourse at all, usually. You might bring suit against a police department or LE agency, but you'll fail to find any relief there. True that qualified immunity only protects individuals, but there's a raft of other things that makes it hard to get a judgement against a police department, too.
I think there's probably one major exception: civil rights violation investigations. But even then, the people doing the investigating seem to be biased toward the LEOs.
The GP's linked article doesn't seem to even talk about this, so not sure why that's there.
> You might bring suit against a police department or LE agency, but you'll fail to find any relief there.
I don't know if I'd go so far as to say she won't find any relief, but it probably still could be a pretty tough Monell claim against the department (although it's hard to tell from the sparse details in the article):
"[A] local government may not be sued under [42 U.S.C.] § 1983 for an injury inflicted solely by its employees or agents. Instead, it is when execution of a government's policy or custom, whether made by its lawmakers or by those whose edicts or acts may fairly be said to represent official policy, inflicts the injury that the government, as an entity, is responsible under § 1983." [1]
I could see a problem if there was a policy/custom of relying on AI facial recognition alone without any other corroborating evidence (would be a really stupid practice, but I'm sure stupider things have become part of a police department's systemic practices). Or if there was a failure to sufficiently train detectives about the erroneous tendencies of this technology. Maybe the needlessly prolonged detention without bail could be an issue if there was a lack of adequate protocols to expedite in a reasonable amount of time.
Either way, it still seems hard to say this is a slam dunk case for her, unfortunately. But it also seems too risky for the city of Fargo not to settle, at least nominally.
[1] Monell v. Department of Soc. Svcs., 436 U.S. 658 (1978), https://supreme.justia.com/cases/federal/us/436/658/
>* "Insane." Psychiatric hold. Three days. Due process on paper, not in practice. Police in my town can and do use this if they don't like you.
A friend of mine was committed longer than 3 days without counsel or the ability to represent themselves in the hearing. Apparently the whole process of being committed is ex parte in practice in some states.
This is a bit hyperbolic and the exaggerations really undermine what I think is your broader point (that there is rarely recourse when you're held for short to moderate amounts of time). It is hard for me to believe that someone was held for 16 years on civil contempt without due process or that someone was held for half a year without due process after being deemed dangerous. The reason that is hard for me to believe is that the due process is implicit in the action you describe. Civil contempt is from a judge which implies that you're already in court - that's due process. Someone being labeled "dangerous" implies that a finding was made by a neutral party - that's due process.
Just because you disagree with the outcome doesn't mean that due process wasn't given.
Yeah it's "due process." In civil contempt the judge is a witness and prosecutor in the very "process" they're judging. That's the most perverted form of due process imaginable.
A judge should have to recuse themselves if they are acting as witness to the supposed infraction.
Civil contempt isn't some roving criminal charge that jumps out of the jury box randomly. It's meant to make somebody comply with a court order. Anybody in civil contempt holds the keys to the jailhouse door in their own hands, all they have to do is comply.
This statement should make you uncomfortable. It makes me uncomfortable because it is a pure expression of the power of the state. But it's still due process.
In criminal contempt the maximum duration of imprisonment is limited. In civil contempt it is not, until somebody decides that one will never comply. You may call it due process. I call it what it is: torture and a fucking crime against humanity. A judge that holds a person for years for being stubborn deserves nothing more than to walk the plank.
Criminal immunity? Sure. Civil immunity? Nope! She could definitely make a nice buck.
Qualified immunity doesn't apply to criminal cases. It is used to defend against civil suits. It is unfortunately very easy to find many cases where it leads to injustice. For example:
>...Abby Tiscareno, a licensed daycare provider in Utah, was wrongfully convicted of felony child abuse when a child under her care suffered brain hemorrhaging. After calling emergency services, subsequent medical tests supported these findings. However, during her trial, requested medical records from the Utah Division of Child and Family Services (DCFS) were not provided. It wasn’t until a civil suit that Ms. Tiscareno saw pathology reports suggesting the injury could have occurred outside of her care. She was granted a new trial and acquitted. Her subsequent lawsuit for due process violations, alleging that DCFS failed to provide exculpatory evidence, was dismissed due to lack of precedent indicating DCFS’s obligation to produce such evidence.
https://innocenceproject.org/news/what-you-need-to-know-abou...
Off of taxpayer money sadly. Imo we really need a fix for this. When cops are grossly negligent the money should come out of their aggregate pension fund (or at least partially).
> we really need a fix for this. When cops are grossly negligent the money should come out of their aggregate pension fund
This is on us as voters. If we didn’t piss our pants every time a police union sneezed, we’d realize wholesale restarting of police departments has precedent in even our largest cities.
Yes, this is the key point. Tax payers get a nice big bill while the people who caused the problem get a nice paid vacation while they conduct an internal "investigation" that typically finds they did nothing wrong.
There is a fix to it. Elect people who will hold them accountable.
As long as you keep electing clowns that let the police do whatever they want, the police will... Do whatever they want.
Yeah, of course they need to held accountable, and we need to vote in people who will do so. What I'm suggesting is an alignment of incentives that will ensure that police will try to do their best to not be negligent.
Of course there's a balance that has to be struck so that police are empowered enough to act. So perhaps something like settlements against the police being 30% borne by the police pension fund and 70% by taxpayers is sufficient. I think this will also make police very enthusiastic about bodycams and holding each other accountable.
I'm usually a big supporter of labor unions, but police unions in the US generally have an outsized amount of power, and even when mayors etc. want to hold police accountable, the union ends up bending the mayor over a barrel.
I'm not sure what the solution is here. Forbid police from unionizing? That would probably have some bad consequences too.
“Tough on crime” -> lenient on police -> innocent grandmas in jail.
despite this being something practically everybody wants, the fact that it hasn't happened is not a coincidence and speaks to the power of police unions/guilds and their lobbying arms. outside a few toothless instances, those groups are extremely good at reframing these attempts and mobilizing their bases to vote against the broader public interest.
it sucks.
> despite this being something practically everybody wants,
No, everybody does not want police accountability. Half the population will fall on a grenade to prevent that. They know that the purpose of the police is to keep the undesirables in line, and they never envision that they will ever fall in that category.
The brutality is the point for them.
oh, i generally don't disagree with you on that point; i specifically meant that when presented with the question "do you want your tax dollars to pay for police liabilities?" the answer is probably almost always "no".
Sure. But when you ask "Do you want the police to be unable to do their job and live in a lawless hellscape ran by gangbangers and ISIS cartels?, the answer is also 'No.'
The problem is that the mass media sets the framing of acceptable discourse, and that mass media is in large part an ideological monoculture. And even when it's not, it is happy to present absolutely insane batshit lunacy as 'one of the two sides' of an issue.
Almost all taxpayer funded pension funds are already underfunded. It makes no difference if the funding decreases or increases, the government employee will still get their benefit. The government would have to go through bankruptcy to get the benefit amount reduced.
imho the US Marshals are the only innocent party here, as my understanding is they don't do investigations and just serve warrants without any knowledge of the underlying case.
>I hope she wrings at least several million dollars out of the government.
which the citizens end up footing the bill for. yay.
Maybe the citizens will learn to elect better leaders.
Thanks, I needed a good laugh this evening.
Maybe they'll realize votes have consequences.
People famously do not learn from the experiences of others. It's a big reason why life is so hard when you'd expect it to be pretty easy based on our collective experience.
“Unable to pay her bills from jail, she lost her home, her car and even her dog.”
Who stole her dog?!
Probably picked up by animal control as abandoned and euthanized.
That’s really horrible. I’d prefer to know rather than guess at that.
It's pretty common when a dog is abandoned. Likely her children couldn't afford to care for it. I suppose there is a chance they put it up for adoption (same outcome is likely).
Sorry. I really fucking hate this. I don’t imagine there is a charity somewhere that works against this?
No large scale orgs that I know of. Our local bar has an attorney who does work against it; she has her number at the jail, where other inmates will pass it around if someone mentions their dog, and intake officers will often suggest to inmates with pets that they call her. She is absolutely the most hard core lover of dogs I have ever met, and she will literally drive/run into danger to get to a canine and get it to a local no-kill shelter.
I imagine if there had been anyone intervening in her favor it would have been to resolve her case faster than 5 months.
> facial recognition showed she was the main suspect in what Fargo police called an organized bank fraud case.
> Her bank records showed she was more than 1,200 miles away, at home in Tennessee at the same time police claimed she was in Fargo committing fraud.
> Unable to pay her bills from jail, she lost her home, her car and even her dog
The Fargo DA should be fired, at a minimum.
I can almost guarantee that the Fargo DA’s office had zero idea this happened and had never heard about this investigation before the news story. At this stage in a “case” it’s completely on law enforcement and there is no involvement by a DA’s office for the arrest warrant or the extradition order and warrant that led to this situation.
Not a fan of DA’s offices in general (they are the “evil twin” to my particular line of work after all), but realistically this one isn’t on them.
The prosecutors and judges around here are incredibly lenient for the worst crimes and anything involving reckless driving.
https://www.inforum.com/news/north-dakota/no-jail-time-for-m...
https://www.kvrr.com/2025/11/03/no-jail-time-for-man-accused...
https://apnews.com/general-news-fff59b609215476a9251edb91923...
https://www.valleynewslive.com/2021/05/21/moorhead-man-sente...
https://en.wikipedia.org/wiki/Ray_Holmberg
https://www.valleynewslive.com/2025/11/18/former-clay-county...
It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff (who is responsible for the jail inmates). I hope everyone involved in this travesty is sued into oblivion and unable to hide behind their immunity defenses. Facial recognition should never be the sole basis for a warrant.
> It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff
Yes, it's critical to remember that multiple parties can be at fault. In a case like this, it is true that
a) law enforcement misused a tool and demonstrated extreme negligence
b) the judiciary didn't catch this, which suggests systemic negligence there too when it comes to their oversight responsibilities
c) the company selling/providing this AI tool should have known it was likely to be misused and is responsible for damages caused by such predictable usage
We cannot have a just world until our laws and norms result in loss of jobs and legitimacy as punishment for this sort of normalized failure, from all three parties. Immunity is a failed experiment.
Even if she was a dead ringer (clearly not the same person to any human who glances at the image), common sense should tell you that among 340,000,000 Americans there are a lot of lookalikes. Clearly there's a kind of stupid belief in the mystic powers of an AI and a callous disregard for the well-being of suspects. No one should be dragged 1,000 miles and held for months based on a facial match, especially when exculpatory evidence was easily available.
To be specific, and it is a lot of the reason why this 5-month delay happened: she was not dragged then held, she was arrested, then held, then dragged. She was released 5 days after finally getting to North Dakota; if they had actually gone and gotten her promptly, the hold would have been ~30 days, plus the 5 before the interview and the charges being dropped.
It isn't much of a salve, but the particulars do matter when trying to assign fault to the proper parties (who are still clearly the Fargo cops in this particular tragedy).
This x1000. We need to suspend this shared fiction that AI has any agency. Only humans can be responsible. Full stop.
ICE detains innocent woman 1200 miles away based on AI
Same comment?
This question doesn't even make sense. Why wouldn't humans still be the ones responsible? Bot account?
Doesn't look like it. I've come across this account a few times now. Engages and makes reasonable comments except for certain politicized issues, where he acts like an indoctrinated zealot.
respectfully, can you elaborate on why the answer would not be yes? or am i just misreading your comment?
> It is an AI error
The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo.
In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
The software worked exactly as intended. It's a filtering tool that sifts through data for common patterns to provide leads, not matches. It raises a flag on persons of interest. You can be a "match" anywhere between 0 and 100%, and only relative to some specific input (like the overhead picture of the woman at the teller). In that sense mismatches are within acceptable parameters and have been known to happen.
A "match" is a pronouncement ultimately made by the humans that use the tool, after they've checked out the leads. Someone was asleep at the wheel here.
Wow, so many failures of the legal system. While the incompetent/malicious/lazy investigators who used the facial recognition and only that are obviously at major fault, I'd actually put larger blame on the judge that signed the arrest warrant. Judges are supposed to be a check on such incompetence/malice/laziness, not just a rubber stamp. Unfortunately there's really no recourse against incompetent/malicious/lazy judges.
Of course this would have been bad enough if it had happened where she lived, but the holding for 5 months adds a whole 'nother level of insight into the brokenness of the legal system. I'd be interested in hearing more about why that happened. Is that just something that happens if you have a public defender?
This reminds me of the British Post Office Scandal: https://en.wikipedia.org/wiki/British_Post_Office_scandal
I followed the inquiry when it was ongoing — all of the depositions were live on YouTube. The level of both hubris and incompetence involved in that case was breathtaking.
If you can get your hands on it, I recommend the 4 episode BAFTA-winning mini-series about it: https://en.wikipedia.org/wiki/Mr_Bates_vs_The_Post_Office
Now I'm in a blind rage all over again.
Yeah but they haven’t made the Fargo police chief an honorary senator yet, which is basically what they did with Vennells.
John Bryan, aka The Civil Rights Lawyer, recently did a piece about a similar case of mistaken identity. The consequences weren't as severe, but the willingness to trust the AI over any other evidence was the same:
https://thecivilrightslawyer.com/2026/03/11/ai-software-tell...
In the video, it shows a police officer blindly trusting a casino's AI software, even when a cursory investigation should have given any reasonable person enough of a reason to question whether the man he arrested was the same man accused of a crime. (And then even after it was confirmed he was not, the prosecutor continued to charge him for trespassing!)
posting the video directly for those who prefer that format
https://www.youtube.com/watch?v=lPUBXN2Fd_E
as an aside how small the world is: I know-a-guy who knows-that-guy.
Me: Whoa, cool, my hometown is atop Hacker News!
Also me, reading further: Uh-oh.
The chief of police also resigned today; wouldn't be shocked if this was part of the reasoning.
I am from a town that gets national news coverage only for Shenanigans like this.
> chief of police also resigned today
Source?
Googling "fargo police chief resigns": https://www.inforum.com/news/fargo/zibolski-announces-his-re... among other results.
That said, it's portrayed as a retirement, and doesn't seem to give any hints that it's connected.
Out of curiosity, was the guy known for being fast and loose with the rules? Put more simply, was he a good cop? Or did he have a history of being a rogue?
Are authoritarians good? That's basically what you are asking.
There are no good cops
lazy stupid pigs should be accountable for misusing AI like this and calling people into a system like that based on some AI's whim and a facebook peek, but having done no actual investigative work.
Let's see the pig that called for her arrest and wasted 4 months of her life spend 4 months in jail.
I really, really need folks to understand that deflecting blame away from the tool and trying to hold the human accountable feeds right into the marketing playbook of these companies in the first place.
The cops cannot be held accountable because the laws basically give them immunity. The politicians cannot be held accountable beyond being tossed out at the next election, because the laws otherwise give them immunity. The people operating the system cannot be held accountable, because the systems are marketed as authoritative despite being black boxes and lacking in transparency; they trusted the system just as they were told to, and thus cannot be held accountable.
And so when every human in the chain cannot be held accountable for these things, and the law prevents victims from receiving apologies, let alone recourse, then the tool and its maker is the only thing we can hold accountable. By deflecting blame away from the tools ("it wasn't AI, it was facial recognition"; "the human had to sign off on it"; "humans made the arrest, not machines"), you're protecting quite literally the only possible entity that could still potentially be held accountable: the dipshits making these stupid things and marketing them as superior and authoritative when compared to humans.
You want accountability? Start holding capital to account, and this shit falls away real fucking fast. Don't get lost in technical nuance over very real human issues.
I disagree. If you focus on holding the software creators to account in lieu of the humans in the loop, then we only reinforce the behavior of offloading thinking to the system.
If I am a cop in another jurisdiction and I see that in this case of error, the facial recognition company was held to account but not the police or municipality, I will be more likely to blindly trust the software assuming that they either patched it or will take responsibility.
We should demand accountability for both.
>Start holding capital to account
You forgot one: capital cannot be held accountable for making a tool used in a crime. It is a simple generalization of the Protection of Lawful Commerce in Arms Act (PLCAA), passed in 2005, which largely bars civil lawsuits against gun makers and sellers when their products are later used in crime.
Is there anything to suggest this sort of injustice isn't happening in low-tech all the time, constantly, all over the country, and the only reason it's getting attention here is because AI is involved?
Strongly agree here. This is an extremely predictable outcome of selling AI facial recognition software to American police forces.
“Computers don’t argue” seemed charmingly wrong about how computers work until a few short years ago.
https://nob.cs.ucdavis.edu/classes/ecs153-2019-04/readings/c...
This quote from a 1979 IBM training manual remains applicable:
“A computer can never be held accountable, therefore a computer must never make a management decision.”
(https://www.ibm.com/think/insights/ai-decision-making-where-...)
>Unable to pay her bills from jail, she lost her home, her car and even her dog. Fargo police say the bank fraud case is still under investigation and no arrests have been made.
I smell a lawsuit
Yes, Fargo will be lucky to survive the lawsuit.
The movie "Brazil" was right!
Mistake? Haha. We don't make mistakes.
https://www.youtube.com/watch?v=wzFmPFLIH5s
We do the work, you do the pleasure!
Except in "Brazil" it was a mechanical error in a deterministic machine caused by an invasive outside actor. It would be reasonable to trust that the autotypewriter/printer would faithfully output the correct text.
Modern AI seems incapable of any respectable amount of accuracy or precision. Trusting that to destroy somebody's life is even more farcical than the oppressive police in "Brazil".
>Except in "Brazil" it was a mechanical error in a deterministic machine caused by an invasive outside actor.
It was a literal bug in the computer. Metaphor as humor!
They do not care.
End qualified immunity and see how fast cops start to do their jobs with care.
Winning a lawsuit literally ends in your own community members (not the cops) paying the bill.
Gofundme? This woman needs some $$ and a lawyer. She may not know it yet, but if she makes some smart moves, she's about to be rich and Fargo is about to learn a very hard lesson.
This problem predates modern AI. https://en.wikipedia.org/wiki/Computer_says_no is built upon the deliberate abdication of responsibility to processes that cannot be held accountable. AI is just letting them do it at scale.
That doesn't mean we should accept it from AI. We should fight the blind yielding to the facade of authority regardless of whether the decision was made by an AI or an insect landing on a teleprinter at the wrong time.
Just reading the headline I said to myself: bet this is in America.
Every time I see something like this I can never quite believe this sort of stuff happens. Complete, life ruining incompetence, with no consequences for the idiots that caused this to happen. Ignoring the AI input, which to me has nothing to do with this (it was used as a tool to identify a potential suspect), this woman went to jail for 5 months on the opinion of someone with no other evidence. Only in America.
Indeed. Something like the Post Office Scandal would never happen anywhere but in the US.
I wish we saw more invocations of speedy trial rights. Trials MUST begin for felony charges in ND within 90 days of a defendant invoking those rights (must be invoked within 14 days of arraignment)[0].
[0] https://ndlegis.gov/cencode/t29c19.pdf
There are a bunch of ways they get people to sign away their right to speedy trials.
Defendants don't invoke that because in most states and federally, they build the case against you slowly over a long period of time before arrest, then stall as long as possible on discovery, and when they finally fulfill discovery they overwhelm you with a bunch of useless stuff so that it takes forever to get to the useful information. Invoking the right to a speedy trial hands the prosecution a very strong advantage over the defense.
I don't think it's a sensible interpretation of the constitution given the massive asymmetry of the situation. The state should be obligated without exception to either provide for a speedy trial or to release the defendant while the state figures its shit out. It should not be a right that can be waived. Meanwhile a defendant who's been arrested should generally be given as much time as he'd like to put together his defense.
There's an opportunity for an "AI" app here. Takes your photo, compares with mugshots on police databases, quotes you for requisite cosmetic surgery.
/i
Something big is missing from this story. How did face ID in ND pick out a matching little old grandma in TN, and why would a TN judge hold her without bail for 5 months?
Yeah, there is a whole lot more to this story.
It’s obvious from the one photo they posted of the actual suspect that the lady they arrested is about 20-30 years older than the woman in the bank photo. The woman in the photo is maybe 25-30 years old, this grandma looks like she’s 65-70 (actual age of 50).
Absolutely ridiculous, I hope she wins her civil case.
She will be enjoying a tidy compensation payout. And the number better have seven digits in it.
Facial recognition? *looks at photo* I've probably seen a dozen different people who look exactly like this woman just this week.
AI or not, it's unconscionable that victims of compulsory legal processes by way of mistaken identity are not made whole.
People will defend this, too, saying “well, she was eventually exonerated, right? So the system works!” Ignoring how she’ll never be fully reimbursed for the time, money, and grief of going through the system.
We also need to question how many people might go through the same process without eventual exoneration, and how much going through this process costs individuals. Being falsely prosecuted usually imparts a permanent black mark in search results about the person (outside of places with sane laws like the EU), as well as causing stress or permanent injury.
Wrongly arrested individuals with mental disabilities have a history of physical abuse in jail potentially to the point of death.
Not to mention:
> Unable to pay her bills from jail, she lost her home, her car and even her dog.
If this is the system "working", then the system is broken.
> In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial
This is from the Sixth Amendment. Where the rubber hits the road is what “speedy” means.
Even in Idiocracy they didn't have this problem
This is a badly written story. It should explain if she saw a judge or had a lawyer.
You must have been reading something else, because this article includes all of that information.
> In Tennessee, she was given a court appointed lawyer for the extradition process. To fight the charges, she was told she would have to go to North Dakota.
> Officers from North Dakota did not pick up Lipps from her jail cell in Tennessee until Oct. 30 — 108 days after her arrest. The next day she made her first appearance in a North Dakota courtroom to fight the charges.
> "If the only thing you have is facial recognition, I might want to dig a little deeper," said Jay Greenwood, the lawyer representing Lipps in North Dakota.
Seems odd that the extradition process apparently doesn't require more than vibes.
I read the article and I don’t really understand… she was held in a jail in Tennessee but the article states they flew her to North Dakota? And somehow she’s a fugitive so that’s why she doesn’t get bail? but she’s a fugitive held in her own state in a holding facility? But then when they release her, she’s in North Dakota? So if some state says you’re a fugitive your home state will just hold you in jail until they come and put you on an airplane? Is that correct?
I think you have the interpretation correct. It seems like any state can say you're a fugitive from their state and now you have even fewer rights. Every day I learn some new fact about "justice" in the United States.
I believe each state has its own extradition process. In this scenario think of them more like the countries in the EU. Apparently Tennessee doesn't adequately protect its residents.
As a Tennessee resident I don't love learning that some dumb fuck state I want nothing to do with can call me a fugitive and my state will hold me prisoner without trial when said dumb fuck state finally decides is ready to deal with me.
I read it as her arrested and held in Tennessee temporarily then flown to North Dakota.
“Lipps would sit in that Tennessee jail cell for nearly four months. As a fugitive, she was held without bail”
Wait - what was the AI tool and how did it have her face to begin with? If small-town police are doing face-matching searches across national databases then nobody is safe because the number of false positives is going to be MASSIVE by sheer number of people being searched every day.
Pretend the tool is 99.999999% specific. If it searches every face in the USA you're still getting about 3 false positives PER SEARCH.
You will never have a criminal AI tool safe enough to apply at a national scale.
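For what it's worth, that arithmetic checks out, and the base-rate problem makes it worse: even if the real perpetrator is in the database and always ranked first, most flags at that scale still point at innocent people. A quick sanity check, treating the quoted specificity as a per-comparison figure (an assumption):

    population = 340_000_000
    specificity = 0.99999999               # the hypothetical figure above
    false_hits = population * (1 - specificity)
    print(round(false_hits, 1))            # 3.4 innocent "matches" per search

    # Assume one true perpetrator who is always found (perfect recall).
    # Probability that a given flagged person is actually the culprit:
    ppv = 1 / (1 + false_hits)
    print(round(ppv, 2))                   # ~0.23

Under these generous assumptions, roughly three out of four flags are wrong before a human ever looks at them.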
Probable cause? What's that?
Judge/magistrate who signed off on the arrest warrant fucked up.
It's not an AI error. It's a human error in mis-using AI in this way. Saying it's an AI error is like saying a hole in your drywall is a hammer error.
Unfortunately we'll probably see a trend of people using AI and then blaming AI for cases where they mis-used AI in roles it's not good for or failed to review or monitor the AI.
It's both. It's good to acknowledge that AI is easy to misuse in this manner but it doesn't detract from the fact that the ultimate responsibility lies in those that should be verifying the tool output.
There is far too little skepticism around the magic box that solves all problems which is causing issues like this. It's not the fault of the AI (as if it could be assigned liability) for being misused, but this kind of misuse is far too common right now so scare stories like this are helpful and we should highlight the use of AI in mistakes like this.
I worry that blaming AI at all actually incentivizes humans to offload things to AI that should not be offloaded, since it lets them escape blame.
That is a huge danger. Legally speaking it's not an issue since misusing a tool doesn't relieve liability (in most circumstances - all the trivial ones at least)... but that's a more significant political issue as evidenced by the Anthropic vs. DoD interactions since the DoD's actions are largely immune to oversight by the justice system.
Of course, that depends on sane non-politicized courts which you may rightfully doubt exist right now - but assuming the system works anywhere near as designed outsourcing a decision to AI wouldn't change liability.
For DC fans: Harvey Dent would similarly not be free from liability for actions taken after a coin flip, even if that coin could be viewed in a certain light to have the power to force or prevent certain actions. An AI box that tells Harvey whether to shoot or spare would be similarly irrelevant to his liability - and a scenario in which Harvey points the gun at someone and then walks away, giving the AI control over the trigger, is essentially no different. Harvey in all cases is responsible for constructing the scenario that (potentially) leads to someone's death and, moreover, even if the gun wasn't fired because the AI decided to spare the person, Harvey would be on the hook for attempted murder.
We should probably stop telling the cops that this hammer is great for drywall.
How many more articles are we going to see with the headline "AI facial recognition leads to innocent person jailed"? A grandmother, no less.
Some tech company illegally scanned people's photos on social media and is now using them, with our complicit legal system, to randomly put people behind bars. Now I need to worry that any day, due to a dice roll, I will be sent away to the middle of f'ing nowhere for months or years. Now the government wants to use these same dumb systems to make automated killing machines. FML!
I see a lot of comments trying to attribute blame to the cops, the lawyers, the police chief, the marshals, the tech bros, etc, but it is all of them and all of us that are guilty. We are so complicit in this sick system we live in. We are stuck in a collective action deadlock.
That fear you have in the back of your mind that says next time it might be you is counteracted by the thought "well thank goodness it wasn't me or a loved one," so you don't act. We are all doing this, that is why nothing changes.
The only people able to act these days are the most insane. The narcissistic corrupt power-hungry politician, the psychopathic tech bro billionaire, and the jacobins are the only ones with the energy to wade through this cesspool, and that is why everything is so dystopian.
This is exactly what I would expect from the great state of ND.
https://archive.is/yCaVV - Archive link to get around the paywall.
https://www.theguardian.com/us-news/2026/mar/12/tennessee-gr... - Another article on this without a paywall.
It's annoying that both articles are calling this an AI error. This was human error; the police did the wrong thing, and the people of Fargo will end up paying for this fuckup.
I would argue it was both. No doubt this company was marketing it in a way to make it seem very reliable. And all of the procedural things afterwards made the error so much more damaging.
But imo this is why local police departments should not have access to this kind of tool. It is too powerful, and the statistical interpretation is too complicated for random North Dakota cops to use responsibly. Neither the company nor the PD have an incentive to be careful.
It's not an AI error. The face recognition AI simply said that it's a "potential match", which is correct. It's the humans' job to confirm that a potential match is in fact a match, especially when the suspect is 1,900 kms away.
They're slapping AI in the title of any article that vaguely relates to get more clicks. This unfortunately works extremely well (see this thread)
Happens with a lot of topics of interest.
Human police errors are so routine that they're not news worthy.
> https://archive.is/yCaVV
When I load this URL I get "One more step Please complete the security check to access" and I cannot get past the archive.is computational paywall.
But the guardian article actually has text! Thanks.
That's a common issue if you use cloudflare dns.
I live in Fargo. The police chief announced his retirement yesterday. Done by the end of the month. And then today this article comes out. So now we pretty much know why the sudden retirement announcement.
We are rapidly becoming a world where every person is one inscrutable LLM decision from having their life ruined with no recourse.
This type of incident isn't new and is only going to get worse. The problem is our governments are doing absolutely nothing about it. I'll give two examples:
1. Hertz implemented a system where they falsely reported cars as being stolen. People were arrested and went to jail for rental cars that were sitting in the Hertz lot. Hertz ultimately had to pay $168 million in a settlement [1]. That's insufficient. If I, as an ordinary citizen, make a false police report that somebody stole my car I can be criminally charged. And rightly so. People should go to jail for this and it will continue until they do. These fines and settlements are just the cost of doing business; and
2. The UK government contracted Fujitsu to produce a new system for their post offices. That system was allowed to produce criminal charges for fraud that were completely false. People committed suicide over this. This went on for what, a decade or more? It eventually resulted in a parliamentary inquiry and settlements. It's known as the British Post Office scandal [2]. Again, people should go to jail for this.
The choice we as a society face is whether to have automation improve all of our lives by raising everyone's standard of living and allowing us to do less work and less menial work or do we allow automation to further suppress wages so the Epstein class can be slightly more wealthy.
[1]: https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...
[2]: https://en.wikipedia.org/wiki/British_Post_Office_scandal
I'm banned from Amazon KDP publishing for life because a fraud detection bot hallucinated that my e-book was plagiarizing my paperback (it didn't realize they're the same book). A bunch of email appeals that I'm pretty sure were also bots went nowhere. With each appeal, the reasons for my ban got progressively more vague, until they didn't mention the plagiarism part at all, just something nonsensical about creating a negative customer experience. Evil company.
> The problem is our governments are doing absolutely nothing about it
Huh. I thought they were actively accelerating the process. Hoping you are right and I am wrong.
What’s remarkable to me, beyond the total incompetence and stupidity of all the police people involved, is how incredibly aggressive the intervention was.
This is bank fraud case, for god’s sake, not an armed robbery. I don’t know the scale of it, but still, no one said she was a danger to anyone. She was a suspect, not a convict, and she was held at gunpoint while babysitting young children. What in the fucking world?
The US is so fucked up lately. People should chill the fuck out.
Completely infuriating, but more of a commentary on the sad state of incompetent power-hungry law enforcement with tools they don't know how to use than the tools themselves.
Though, the question remains: are the tools built in such a way as to deceive the user into a false sense of trust or certainty?
_Some_ of the blame lies on the UX here. It must.
It must land as the humans' fault, or this will become more and more of a pattern for avoiding accountability.
It’s both.
The cops need to be held accountable.
But it’s glaringly obvious that if you build tools like this and give them to the US police this is the outcome you will get. The toolmakers deserve blame too.
> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.
Are AI code assist tools built in such a way as to deceive the user into a false sense of trust or certainty? Very much so (even if that isn't a primary objective).
Does any part of the blame lie on the UX if a dev submits a bad change? No, none.
You are ultimately, solely responsible for your work output, regardless of which tool you choose to use. If using your tool wrong means you make someone homeless, car-less, and also you kill their dog, then you should be a lot more cautious and perform a lot more verification than the average senior engineer.
I agree with all that. Maybe the word isn't "blame," then. Surely there must be some code, perhaps moral or ethical, but ideally more rigorously enforceable, which ought to prevent the development of intentionally deceiving tools. Sure, you could say this about all software, but that which can cause actual physical harm ought to be held to a higher standard.
Yes, unfortunately technology is advancing faster than the average human brain evolves more neurons, so it will only become less comprehensible to the average person.
That's setting aside the tendency for police to hire from the left side of the bell curve to avoid independent thinkers that might question authority, refuse to do bad shit, etc.
> they don't know how to use than the tools themselves.
No, the tools work perfectly as they were designed to work. The problem is that the tools are flawed.
Ultimately, every single one of these decisions should be approved by a human, who should be responsible for the fuck-up no matter what the consequences are.
> _Some_ of the blame lies on the UX here. It must.
No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
>> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.
> No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
The person who approved the tools might've understood, but that doesn't mean the user understands. _Some_ of the reason why the user doesn't understand the shortcomings of the tool might be because of misleading UX.
I miss the days of earlier AI image-recognition software that would emit a confidence percentage.
New LLM-related AIs are all supremely confident in every assertion, no matter how wrong.
I don’t know what tool they used, but it was very likely not an LLM. They probably have some database of drivers’ licenses and they ran a similarity search against the surveillance footage. This poor lady happened to be the top match.
Even if it also output a score, that score depends on how the model was trained. And the cops might ignore it anyways.
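For the curious, here's a minimal sketch of how such a pipeline plausibly works; the embedding dimensions, gallery, and scoring are stand-ins, since we don't know what vendor or settings were actually used:

    import numpy as np

    # Hypothetical gallery: each enrolled face (e.g. a license photo)
    # reduced by some face-recognition model to a unit-length embedding.
    rng = np.random.default_rng(0)
    gallery = rng.standard_normal((100_000, 128))
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

    def top_matches(probe: np.ndarray, k: int = 5):
        """Return the k gallery rows most similar to the probe embedding."""
        probe = probe / np.linalg.norm(probe)
        scores = gallery @ probe              # cosine similarity, -1..1
        idx = np.argsort(scores)[-k:][::-1]   # indices of top-k scores
        return list(zip(idx.tolist(), scores[idx].tolist()))

    # Note what this does NOT do: it always returns k "best" candidates,
    # ranked by score, whether or not the person in the probe photo is
    # in the gallery at all. Deciding that a candidate IS the suspect
    # is entirely a human judgment.

Whether the UI surfaces the raw scores, and where the "match" threshold sits, is exactly the kind of detail that determines how misleading the tool is in practice.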
Spoken like someone who isn’t built for a sales role at said company.
Sales will sell the dream, who cares if the real world outcomes don’t align?
America’s repulsive classism at work: indifference to her rights like this.
There's a lot of talk about how the cops just misused the tool and it's their fault, not the AI's.
That's missing the point here. The point is that these tools provide crazy leverage, and that can be good or bad. If used carefully they can definitely catch criminals faster, but when misused (or abused) they can let the authorities unjustly ruin lives faster.
The question isn't whether AI is perfect or not. It's whether you trust the authorities with it, to use and abuse as they can. Think about the average cop. Think about the way Trump treats people. Think about the way Israel keeps an ongoing genocide going. Think about the cases of police brutality that happen in the US, the cases of racial profiling. Think about ICE and their behavior, going around kidnapping and killing people. Do you want these people to have more leverage?
I hate this headline (not blaming submitter). Police incompetence and negligence jailed her for months and left her stranded in a North Dakota winter. The AI is no more responsible than the cars and airplanes they used.
Edit: this is in reference to the original headline "AI error jails innocent grandmother for months in North Dakota fraud case" not the revised title that it was changed to.
I disagree. Clearly the police felt the AI was "responsible enough" to be the only thing they needed to trust.
The AI made the call and humans licked its butthole
And that is a complete failure of the police and authorities. They made the decision to extradite her with such flimsy evidence.
If it didn't erase accountability, how would it create any value?
Many people are treating this as a matter of philosophy, which it isn't.
At a primitive, physiological level, if you delegate to AI and most of the time you don't get in trouble for it, the resulting relationship you have with the AI can only be called "trust".
If you're expected to be 40% more productive at your job, your employer is making it crystal clear that you will trust the AI or you will be fired. Even if nobody ever said it, the sales pitch is that AI does the work and people are mostly there to be their servants whose role is to keep them fed with decisions we want made but don't want to be responsible for making.
The value it creates is obvious: finding a needle in a haystack. Is accountability laundering another potential benefit? Sure. Can we stop pretending we don't understand the other side of it? Cynicism is nice and all, but after a certain point it wraps around and makes us look naive.
Even if she was guilty, they shouldn't have imprisoned her for 3+ months without interviewing her. The AI didn't tell them to do that.
And the police were wrong, which is why they're the culpable ones.
I think you actually agree with the GP? As I understand them, they're saying that it's not the AI tool that takes the most blame, it's the police.
Even if the id was correct, why would they leave her in jail for 5 months before the first interview and/or court appearance?
No indication that the licking was consensual.
> Clearly the police felt the AI was "responsible enough" to be the only thing they needed to trust.
Yes, that's what the OPs "incompetence and negligence" referred to.
A jury will probably decide the AI company's level of responsibility at trial. It is an open question til then!
Picking apart the words doesn't matter if police are more incompetent with AI than without it. AI as a catalyst for a worse society is a more interesting and worthwhile topic than whether "AI is responsible" is the right way to phrase it.
If you make the AI software, then your software malfunctioned.
If the laser printer screws up a page in the middle of the document, and the user doesn't catch it and includes it in the board of directors binder, the laser printer still malfunctioned.
Brave police officers wanted to show us all the dangers of AI slop.
I posted this 9 hours ago. Can I get the karma transferred to my account?
As much as we try to reward the first person to submit the story, we also have to give credit to the person who submits the best URL and the best version of the story. It looks like your submission was killed due to being an archive.is link, which is not allowed as a URL for a submission (we need the canonical URL submitted to prevent people from using archive services or shorteners to mask domains that may be malicious).
Sometimes it's just a matter of luck as to who gets the submission right and gets the karma. Sorry it wasn't you this time, but keep submitting good stuff and you'll get your turn.
I don't like the local newspaper and posted the archive link so they wouldn't get the clicks. I didn't know that wasn't allowed. Thanks for the info
Yep, totally understand. It takes a lot of trial and error to know all the ways of HN.
Why the fuck does a newspaper need a ‘notifications’ icon in the top right hand corner?
Because it has an updating-feed-like structure, in which new items can appear.
Knowing that there are (N) new items is so useful (to some people), that as far back as the 1990s, we developed technology called "RSS" to give you this superpower over a website that doesn't provide anything of the sort. One that simply updates with new stuff when you hit refresh, with no UI to indicate what is new/changed.
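If you've never seen it, the whole "superpower" fits in a few lines. A minimal sketch, assuming the site exposes a standard RSS 2.0 feed (the URL below is hypothetical):

```python
# Minimal sketch: poll a site's RSS feed and report how many items are new
# since the last check. The feed URL is a hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example-newspaper.com/rss.xml"  # hypothetical

def fetch_titles(url):
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    # RSS 2.0 puts each article in an <item> element under <channel>
    return [item.findtext("title") for item in root.iter("item")]

seen = set()

def check_for_new():
    titles = fetch_titles(FEED_URL)
    new = [t for t in titles if t not in seen]
    seen.update(titles)
    return new  # the "(N) new items" the bell icon reimplements

if __name__ == "__main__":
    print(f"{len(check_for_new())} new items")
```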
How else can they report on BREAKING NEWS if it doesn't at least break your concentration?