With the entry of algorithms and AI-assisted automation into the American prison system, which houses the world's largest population of prisoners, questions have arisen about the biases that may be present in these applications. The one raising the most eyebrows is the use of criminal risk assessment algorithms to produce a 'recidivism' score.
The recidivism score estimates the likelihood that an individual will reoffend: the higher the score, the more likely, according to the system, that the individual will commit an offense again. This score is then passed on to the judge, who takes it into account when determining the severity of punishment, the chances of bail, and so on (source).
Though the stated goal of such programs is to reduce inefficiency and ensure speedy trials, the algorithms are trained on historical criminal records, i.e., on data in which certain populations are disproportionately represented for historical and social reasons. The algorithms therefore learn the 'what' of the data but not the 'why': they absorb the patterns of past policing and prosecution without any understanding of the conditions that produced those patterns.
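To make this concrete, here is a minimal, hypothetical sketch (not the actual workings of any deployed risk tool) of how a naive score learned from skewed historical records can rank one group as riskier even when the underlying reoffense rates are identical. In this toy data, group B is policed more heavily, so it is simply over-represented in the records:

```python
# Hypothetical synthetic records: (group, reoffended) pairs.
# Both groups reoffend at the same 20% rate, but group B is policed
# more heavily, so it contributes four times as many records.
historical_records = (
    [("A", True)] * 10 + [("A", False)] * 40 +   # group A: 50 records
    [("B", True)] * 40 + [("B", False)] * 160    # group B: 200 records
)

def true_reoffense_rate(group, records):
    """The 'why'-aware ground truth: per-person reoffense rate in a group."""
    outcomes = [reoffended for g, reoffended in records if g == group]
    return sum(outcomes) / len(outcomes)

def learned_risk_score(group, records):
    """A naive score learned from the 'what' of the data: how large a share
    of all recorded reoffenders belongs to this group. Over-policing inflates
    this share for group B without any difference in behavior."""
    reoffenders = [g for g, reoffended in records if reoffended]
    return sum(g == group for g in reoffenders) / len(reoffenders)

# Both groups reoffend at the same rate...
print(true_reoffense_rate("A", historical_records))   # 0.2
print(true_reoffense_rate("B", historical_records))   # 0.2
# ...yet the naive learned score rates group B as far riskier.
print(learned_risk_score("A", historical_records))    # 0.2
print(learned_risk_score("B", historical_records))    # 0.8
```

Real risk assessment models are of course more sophisticated than this, but the core concern is the same: a model fit to records shaped by unequal enforcement will reproduce that inequality in its scores.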
The fear is that this bias produces a supposedly 'unbiased' algorithm (as opposed to a judge, who is openly subject to human bias) that will in fact perpetuate the biases already present in the justice and law enforcement systems. Civil rights organizations have begun raising the matter in various circles, sparking an important debate.