This is really the scary part of automation: a number used to categorize inmates for treatment and rehabilitation plans is now misapplied, possibly keeping them in prison longer than necessary. The number, combined with a word like "risk", can be twisted into whatever anyone wants.
Do people not read Kafka in high school anymore?
https://en.wikipedia.org/wiki/The_Trial
Did people not understand the historical context of his work when they did read him?
Is this what the movie Brazil is based on?
It doesn't really change things that much if the system was already corrupt. Human boards can just as easily cite "risk" or anything else in their reviews to deny parole. Algorithms can make corruption and rights-denial more efficient, and they can then be used to replace any remaining human boards that were genuinely trying to do a good job of determining fitness for parole. It's actually pretty dystopian when I put it that way, though.
I don't disagree, but I think implementing this sort of review with software is dangerous in a different way than making bad behavior more efficient.
A parole board can have dissent and differing personal experience in its ranks. There can be disagreement, but that discussion might lead to a more right or more wrong outcome.
When you have a number spit out by some case-management system, there's no arguing with it. We've turned the programmers who wrote that bit of code into unassailable gods of the process, with whom we can only communicate through Jira tickets. The output isn't more right or more wrong, but it is consistent with the variables provided. That mindless consistency, applied to human situations like this, quickly becomes horrific.
Yes, you hit the nail on the head. This kind of behavior is bad because of the way it shifts accountability. Previously, a human could ultimately be held responsible. Now, neither machine nor human can be held responsible for abhorrent behavior.
“When you have a new, abusive technology, you can't just aim it at rich, powerful people, because when they complain, they get results. To successfully deploy that abusive tech, you need to work your way up the privilege gradient, starting with people with no power, like prisoners, refugees, and mental patients. This starts the process of normalization, even as it sands down some of the technology's rough edges against their tender bodies. Once that's done, you can move on to people with more social power – immigrants, blue collar workers, school children. Step by step, you normalize and smooth out the abusive tech, until you can apply it to everyone – even rich and powerful people. Think of the deployment of CCTV, facial recognition, location tracking, and web surveillance.”
Cory Doctorow - The future of Amazon coders is the present of Amazon warehouse workers
https://pluralistic.net/2025/03/13/electronic-whipping/#your...
> no power, like prisoners, refugees, and mental patients. This starts the process of normalization,
It omits the most important one: children. Will children raised on computers with school-board spyware on them, testing applications that monitor their faces, etc., flinch when the state attempts to employ similar tools on them as adults? Will they shy away from these tools themselves when they take the reins of power?
It's in the quote.
"Once that's done, you can move on to people with more social power – immigrants, blue collar workers, school children."
And you can invert the formula: solve it for the free-labor workforce needed, then derive the risk-assessment parameters that give you that outcome. My guess is that during a recession, when demand for prison labour ("slavery with fewer extra steps") shrinks, the expected-risk parameters shift accordingly. Which makes the evaluation parameter just another ticker.
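A minimal sketch of that inversion in Python, with entirely made-up scores and a hypothetical setup: instead of fixing a risk threshold and seeing who gets out, you fix the headcount you want and solve for the threshold.

    # Hypothetical: pick the parole cutoff that retains a target workforce.
    # Scores, names, and numbers are invented for illustration.
    def threshold_for_quota(risk_scores, required_workers):
        ranked = sorted(risk_scores, reverse=True)  # highest "risk" first
        if required_workers >= len(ranked):
            return 0.0  # keep everyone: any score clears the bar
        # Everyone scoring at or above the cutoff is denied parole.
        return ranked[required_workers - 1]

    scores = [0.12, 0.35, 0.41, 0.55, 0.63, 0.72, 0.88, 0.91]
    print(threshold_for_quota(scores, required_workers=3))  # 0.72

When demand shrinks, `required_workers` drops and the cutoff quietly moves; the "risk" number never changes meaning, only the quota behind it does.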
If you can use an algorithm, and you do not have to share the logic, you might as well just make the algorithm return "denied" every time.
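And since the outside world only ever sees the output, here is a sketch (hypothetical, of course) of what "no obligation to share the logic" permits:

    # From the outside, this is indistinguishable from a genuine assessment.
    def assess_parole_risk(inmate_record):
        return "denied"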
https://law.justia.com/cases/louisiana/first-circuit-court-o...
He was sentenced as a fourth-felony habitual offender, having had five previous convictions for cocaine possession and a previous conviction for possession with intent to distribute.
In addition, an appeals court found the 20-year sentence illegally lenient (he should have been sentenced to 30 years) but, because neither the state nor the defendant raised the issue, declined to correct it.
Abolish prisons.
What should we do with people who murder or repeatedly steal?
An algorithm is just a method saved in a computer. It is neither good nor bad; or rather, it can be good or bad depending on how well it works.
You’re going to need to do better than that for this crowd.
I'm sorry that you feel that way. We applied an algorithm, and since 40 of the other 100 people who share that opinion with you committed drug-related crimes, we have decided to put you in jail without chance of parole. I hope you understand that while this may seem unfair, the decision is not good or bad; it's just the outcome of the algorithm saved in a computer.
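In case the satire is too dry, here is the rule being parodied, spelled out as a hypothetical Python sketch; note that the inmate's own conduct never enters the function:

    # Illustrative only: deny parole whenever the base rate of offenses
    # among people sharing some attribute crosses a threshold.
    def deny_parole(people_sharing_attribute, threshold=0.40):
        offenders = sum(p["committed_crime"] for p in people_sharing_attribute)
        return offenders / len(people_sharing_attribute) >= threshold

    group = [{"committed_crime": i < 40} for i in range(100)]  # 40 of 100
    print(deny_parole(group))  # True -- denied, whoever you actually are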
The problem in your example is threefold:
1) What are my rights, i.e., against what constraints was the algorithm implemented?
2) "We have decided": who is "we"? It's actually good that a particular entity is responsible for the output of the algorithm; computers can't be responsible for things.
3) Is there an appeal or human review?
If a bad algorithm is part of some process that affects me, I'm going to ask what my rights are, who's responsible, and where I can appeal, not the specifics of the algorithm.
Your original argument was that an algorithm cannot be bad, and now you are countering my point by arguing that the algorithm is bad if it ignores your rights, is solely responsible for the implemented outcome and doesn’t include any input for human review or appeal.
Funnily enough, I read not that long ago about a case where an algorithm broke all three of those points! Can't remember where off the top of my head; I think it was the Louisiana parole board or something like that.
But never mind that: you are arguing against yourself, so I'll kindly step aside and let you at you.
> But never mind that, you are arguing against yourself so I’ll kindly step aside and let you at you.
They are not OP. Even if they are, why not just engage with the points they raise?
Thank you for creating a textbook strawman argument to share.
An algorithm does not require a computer.
A set of rules followed by anything (including humans) is an algorithm.
Stories like this blame computers because that's easier on everyone's reputations than blaming the people who made the rules.
I think the problem here isn't so much that there is a formal process... but that (A) the person affected has no agency and (B) it has been implemented in such a way that nobody will notice or care when injustice-bugs occur.
You mean it's been implemented in such a way that no politician can be blamed when things inevitably go wrong, because of what this really is: savings on one organization's budget, pushing the costs somewhere else, as opposed to actually improving society or the economy.
I've always wondered this about the many algorithms that keep getting used. They're being kept out of democratic decision-making. It's always a committee somewhere in the executive that decides on the algorithm, not parliament or any kind of elected board. They are implemented as a cost control.
Eh. This crowd isn't where this article is aimed. Remember the British minister who asked when Microsoft was going to 'get rid' of algorithms?
https://www.windowscentral.com/british-government-reported-a... (previously discussed here: https://news.ycombinator.com/item?id=30736887)
No, sir. Algos are bad: they inherently carry a huge risk of accentuating the biases of the datasets from which they were created. I always get mad when people suggest algos are neutral or objective. They are neither.
They are often just tools to perpetuate inequities in plain sight.
> Algos are bad: they inherently carry a huge risk of accentuating the biases of the datasets from which they were created.
They do not do this “inherently.” They don’t do it at all if they don’t have or create a feedback mechanism.
On the contrary, algorithms should rely on feedback and adapt when the underlying assumptions no longer hold.
The problem is that we have not automated the process of identifying and updating those assumptions for a given problem.
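A minimal sketch of that missing step, assuming "assumptions" means the input distribution the model was built on (the drift statistic and threshold here are illustrative stand-ins):

    import statistics

    # Flag when today's inputs no longer resemble the data the model's
    # assumptions came from, instead of silently trusting the score.
    def assumptions_still_hold(training_values, current_values, max_shift=0.5):
        mu = statistics.mean(training_values)
        sigma = statistics.stdev(training_values)
        shift = abs(statistics.mean(current_values) - mu) / sigma
        return shift <= max_shift

    if not assumptions_still_hold([2.1, 2.4, 2.2, 2.6], [3.9, 4.1, 4.0]):
        print("Re-examine the model before acting on its output.")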
Not to say that the journalists and editors involved with putting out clickbait like this don't deserve to be harangued (they do), but:
> An algorithm is just a method saved in a computer
No, it isn't "just" that. It's quite a bit more specific than that.
For a sequence of steps to be an algorithm, it has to:
1. provably terminate, with—
2. the correct answer
(Every time, that is. A canonical example is sketched below.)
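By that strict definition, the textbook case is something like Euclid's GCD, sketched here in Python: guaranteed to stop, guaranteed correct.

    def gcd(a, b):
        # Euclid's algorithm: b strictly decreases each iteration, so it
        # provably terminates, and it provably returns the greatest
        # common divisor of non-negative integers a and b.
        while b:
            a, b = b, a % b
        return a

    assert gcd(48, 18) == 6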
Recommender systems and other programs created to help with things that have no correct answer belong to a class of procedures that are the exact opposite of what the word "algorithm" is supposed to communicate.
Anyone who has gotten paid to write something that involved the word "algorithm" in the last 10 years should give serious consideration to jumping off a bridge.
Well, no. A Monte Carlo algorithm is still an algorithm.
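For reference, the kind of thing meant, as a minimal Python sketch: it terminates every time, but it produces "the correct answer" only in expectation, to within sampling error.

    import random

    def estimate_pi(samples=1_000_000):
        # Monte Carlo: the fraction of random points in the unit square
        # that land inside the quarter circle approximates pi/4.
        inside = sum(1 for _ in range(samples)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4 * inside / samples

    print(estimate_pi())  # ~3.14, varies from run to run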
Those aren't algorithms, either. "Monte Carlo algorithm" is just the product of sloppy language use by people inside the field rather than outsiders.
Do you think probabilistic algorithms are not "real" algorithms? I guess this page is total bullshit then: https://wikipedia.org/wiki/Randomized_algorithm
Newton's Method?
Exactly. Not an algorithm.
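For the record, a minimal Python sketch of why Newton's method gets brought up in this definitional quarrel: as usually written, it halts at a chosen tolerance rather than at the exact answer.

    def newton_sqrt(x, tolerance=1e-12):
        # Newton's update for f(g) = g^2 - x. Each step improves the guess,
        # but the loop exits on "close enough", not on "correct".
        guess = x if x > 1 else 1.0
        while abs(guess * guess - x) > tolerance:
            guess = (guess + x / guess) / 2
        return guess

    print(newton_sqrt(2.0))  # 1.4142135623730951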