The US justice system is a big beast, with 350,000 cases passing through the courts each year. Roughly 6.7 million adults are under some form of criminal justice supervision. Of these, 2.2 million are in local jails or state and federal prisons - around 716 inmates for every 100,000 citizens - while the rest are on probation or parole. According to a 2012 Vera Institute of Justice study, the number of those incarcerated has increased by more than 700% over the last four decades.
This situation is untenable. Incarceration is expensive, and there is little evidence that it actually reduces crime; indeed, an estimated 70% of inmates have been imprisoned before. The level of crime in the US is similar to that of other stable industrialized nations, yet its prison system costs $74 billion a year to run - eclipsing the GDP of 133 nations.
The reasons for the US’s high level of incarceration are complex, and there is no simple solution. The problem is, however, at least now being recognized, albeit slowly. Incarceration rates have dipped slightly in recent years, primarily due to the release of thousands of nonviolent drug offenders from the federal prison system in 2015. A number of states, including California, have enacted legislation and policies to cut prison populations: retroactively reducing some drug and property crimes from felonies to misdemeanors, expanding substance abuse treatment programs, and increasing investment in re-entry programs.
Another solution being touted is the use of machine learning and predictive analytics during the sentencing process. Police forces already use such technology to predict crime hot spots and target potential recidivists, but using it to influence sentencing is new. Here, machine learning algorithms are applied to historical data to identify the characteristics of those likely to offend again, and to measure the degree to which the convicted party exhibits them. This removes - or so the thinking goes - human bias from the equation, which should make the US court system both fairer and more effective. However, there are a number of problems with this.
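To make the idea concrete, here is a minimal sketch of what a generic risk-assessment model of this kind might look like. Everything in it is hypothetical: the features, the synthetic data, and the choice of logistic regression are stand-ins, since the inputs and algorithms of real tools like COMPAS are proprietary - which, as we will see, is precisely the problem.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical defendant features: age, number of prior offenses,
# and age at first offense. Real tools use dozens of such inputs.
n = 1000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
age_first = rng.integers(14, 40, n)
X = np.column_stack([age, priors, age_first])

# Synthetic outcome: 1 = reoffended within two years. Generated so that
# more priors and a younger first offense raise the odds (illustration only).
logits = 0.4 * priors - 0.05 * age - 0.03 * age_first + 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "risk score" is simply the model's predicted probability of reoffending.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Unlike a proprietary system, this toy model's weights are open to inspection.
print("feature weights (age, priors, age at first offense):", model.coef_[0])

# Score one hypothetical defendant: 25 years old, 3 priors, first offense at 17.
risk = model.predict_proba([[25, 3, 17]])[0, 1]
print(f"estimated risk of reoffending: {risk:.0%}")
```

Note the crucial difference: in this toy version the learned weights can be printed and examined, so anyone can see how each input moves the score. With a commercial system, neither defendants nor judges get that view.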
Firstly, if the explanation I’ve given sounds vague, that’s because nobody outside the companies that build these tools knows how they reach the conclusions they do. While that opacity might be acceptable in marketing, the same standard cannot apply when a human being’s freedom is on the line. In February 2013, Eric Loomis was arrested while driving a car that had been used in a shooting. He pleaded guilty to eluding an officer and no contest to operating a vehicle without its owner’s consent. The judge rejected a plea deal and sentenced Loomis to a harsher punishment, citing a data-driven risk assessment from Northpointe called COMPAS as part of the reason, telling him, ‘You’re identified, through the COMPAS assessment, as an individual who is a high risk to the community.’
Loomis appealed the sentence, arguing that neither he nor the judge could examine the formula behind the risk assessment because it was a trade secret. The state of Wisconsin countered that Northpointe required it to keep the algorithm confidential in order to protect the firm’s intellectual property. Wisconsin’s attorney general, Brad D. Schimel, even deployed Loomis’s own argument - that judges do not have access to the algorithm either - though somehow spun as a positive. This is a bit like saying a game of chess is fairer if neither player knows the rules. That’s true, in a way, but it’s unlikely to produce a game of chess - just two people throwing pieces around a board, with no winner in the traditional sense.