Saturday, August 6, 2016

High-Tech and the courtroom series: Part Three; Algorithms and sentencing; Making algorithms used in the sentencing process accountable: A ProPublica research study comes up with some disturbing results..."In 2014, Eric H. Holder Jr., then the attorney general, called for the United States Sentencing Commission to study whether risk assessments used in sentencing were reinforcing unjust disparities in the criminal justice system. No study was done. Even Wisconsin, which has been using risk assessment scores in sentencing for four years, has not independently tested whether it works or whether it is biased against certain groups. At ProPublica, we obtained more than 7,000 risk scores assigned by the company Northpointe, whose tool is used in Wisconsin, and compared predicted recidivism to actual recidivism. We found the scores were wrong 40 percent of the time and were biased against black defendants, who were falsely labeled future criminals at almost twice the rate of white defendants. (Northpointe disputed our analysis. Read our response.) There’s software used across the country to predict future criminals. And it’s biased against blacks." Thanks to The Marshall Project;


STORY: "Making Algorithms Accountable," by Julia Angwin, published by ProPublica on August 1, 2016. (Julia Angwin is a senior reporter at ProPublica. From 2000 to 2013, she was a reporter at The Wall Street Journal, where she led a privacy investigative team that was a finalist for a Pulitzer Prize in Explanatory Reporting in 2011 and won a Gerald Loeb Award in 2010.) Thanks to The Marshall Project for drawing this story to our attention. HL;

SUB-HEADING:  "As algorithms control more aspects of our lives, we need to be able to challenge them."

SUB-HEADING: "We’re investigating algorithmic injustice and the formulas that increasingly influence our lives."

SUB-HEADING: "Gregory Lugo crashed his Lincoln Navigator into a Toyota Camry while drunk. An algorithm rated him as a low risk of reoffending despite the fact that it was at least his fourth DUI." (DUI: driving under the influence);

GIST: "Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives. Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality.

Those computer-generated criminal “risk scores” were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing. The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy.

This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results. An algorithm is a procedure or set of instructions often used by a computer to solve a problem. Many algorithms are secret.

“We urgently need more due process with the algorithmic systems influencing our lives,” says Kate Crawford, a principal researcher at Microsoft Research who has called for big data due process requirements. “If you are given a score that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision.”

But algorithmic auditing is not yet common. In 2014, Eric H. Holder Jr., then the attorney general, called for the United States Sentencing Commission to study whether risk assessments used in sentencing were reinforcing unjust disparities in the criminal justice system. No study was done. Even Wisconsin, which has been using risk assessment scores in sentencing for four years, has not independently tested whether it works or whether it is biased against certain groups.

At ProPublica, we obtained more than 7,000 risk scores assigned by the company Northpointe, whose tool is used in Wisconsin, and compared predicted recidivism to actual recidivism. We found the scores were wrong 40 percent of the time and were biased against black defendants, who were falsely labeled future criminals at almost twice the rate of white defendants. (Northpointe disputed our analysis. Read our response.) There’s software used across the country to predict future criminals. And it’s biased against blacks.

Some have argued that these failure rates are still better than the human biases of individual judges, although there is no data on judges with which to compare. But even if that were the case, are we willing to accept an algorithm with such a high failure rate for black defendants? Warning labels are not a bad start toward answering that question. Judges may be cautious of risk scores that are accompanied by a statement that the score has been found to overpredict recidivism among black defendants.

Yet as we rapidly enter the era of automated decision making, we should demand more than warning labels. A better goal would be to try to at least meet, if not exceed, the accountability standard set by a president not otherwise known for his commitment to transparency, Richard Nixon: the right to examine and challenge the data used to make algorithmic decisions about us."
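
PUBLISHER'S ILLUSTRATION: To make the arithmetic behind a claim like "falsely labeled future criminals at almost twice the rate of white defendants" concrete, here is a minimal sketch in Python. It uses made-up records and hypothetical group labels; it is not ProPublica's data, code, or methodology, only an illustration of how a false positive rate (people labeled high risk who did not in fact reoffend) can be computed separately for each group and then compared.

# Illustrative sketch only: toy data, not ProPublica's dataset or methodology.
# Shows how predicted risk labels can be checked against actual outcomes,
# with a false positive rate computed separately for each (hypothetical) group.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- made-up values.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, False), ("B", False, False), ("B", False, True), ("B", True, True),
]

counts = defaultdict(lambda: {"false_pos": 0, "did_not_reoffend": 0})
for group, predicted_high, reoffended in records:
    if not reoffended:                      # only non-reoffenders can be false positives
        counts[group]["did_not_reoffend"] += 1
        if predicted_high:                  # labeled high risk, but did not reoffend
            counts[group]["false_pos"] += 1

for group, c in sorted(counts.items()):
    rate = c["false_pos"] / c["did_not_reoffend"]
    print(f"Group {group}: false positive rate = {rate:.0%}")

Run on these toy records, the sketch prints a higher false positive rate for one group than the other; a disparity of that kind, measured on real defendants, is what the ProPublica analysis describes.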





The entire story can be found at:


https://www.propublica.org/article/making-algorithms-accountable
 
PUBLISHER'S NOTE:

I have added a search box for content in this blog, which now encompasses several thousand posts. The search box is located near the bottom of the screen, just above the list of links. I am confident that this powerful search tool provided by "Blogger" will help our readers and me get more out of the site. 


The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at:

http://www.thestar.com/topic/charlessmith

Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html

Please send any comments or information on other cases and issues of interest to the readers of this blog to: 

hlevy15@gmail.com;

Harold Levy;

Publisher: The Charles Smith Blog;