8.05.2018 || RESEARCH AND SOURCE EVALUATION 10

Algorithmic decision making and the cost of fairness

https://arxiv.org/pdf/1701.08230.pdf

Accessed 19/06/2018

Algorithms are now increasingly being used for pretrial release decisions. One example is COMPAS, which gives defendants a score for how likely they are to commit a crime – the level of ‘risk’ they pose – based on over 100 factors. However, among those who did not go on to commit a crime, black defendants were twice as likely as white defendants to be given a high score and so labelled as risky. In order to correct for this stark racial disparity, the paper proposes a way of making these recidivism algorithms fairer.
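
To keep the mechanics clear in my own head, here is a minimal sketch of how a score-plus-threshold release decision works. The features, weights and cut-off are invented by me purely for illustration – the real COMPAS model is proprietary and uses over 100 inputs.

```python
# Illustrative score-plus-threshold pretrial decision.
# The features, weights and cut-off are invented; this is NOT the COMPAS model.

# Hypothetical weights that a model might learn from historical arrest data.
WEIGHTS = {
    "prior_arrests": 0.08,            # per prior arrest
    "age_under_25": 0.30,             # 1 if under 25, else 0
    "failed_to_appear_before": 0.25,  # 1 if previously failed to appear in court
}
HIGH_RISK_THRESHOLD = 0.5  # made-up cut-off separating 'low' from 'high' risk

def risk_score(defendant: dict) -> float:
    """Weighted sum of the defendant's features, capped at 1.0."""
    score = sum(WEIGHTS[f] * defendant.get(f, 0) for f in WEIGHTS)
    return min(score, 1.0)

def detain(defendant: dict) -> bool:
    """Detain pretrial if the score reaches the threshold."""
    return risk_score(defendant) >= HIGH_RISK_THRESHOLD

example = {"prior_arrests": 3, "age_under_25": 1, "failed_to_appear_before": 0}
print(risk_score(example), detain(example))  # 0.54 True with these invented numbers
```

As far as I could follow it, the maths in the paper is essentially about where such thresholds should be placed for different groups once fairness constraints are imposed.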

A large portion of this paper involves maths beyond the level I am capable of understanding, though I followed the general thrust of what the algorithm was being adjusted to do. I also found it largely irrelevant to my hypothesis: it establishes that algorithmic decision-making causes unfairness in the judicial system and then discusses how this can be solved via mathematical models. So although it is a good example of algorithms being used unfairly in the present day, it did not provide me with much new insight.

 

Machine Bias

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bias detectives: the researchers striving to make algorithms fair

https://www.nature.com/articles/d41586-018-05469-3

Following my reading of the source above, I thought it would be useful to find out whether the use of algorithms in the legal landscape was common. Sure enough, I found that ‘risk assessments’ are commonplace in the US, and in some states are even given to judges during sentencing. The algorithms were also found to be significantly unreliable – only 20% of those predicted to commit violent crimes went on to do so – and racial disparities were uncovered, with black defendants more likely to be wrongly flagged as high risk.

However, upon further research into the use of this software, I discovered that this type of disparity arises not because data about race is explicitly fed into the model, but because the algorithm associates other inputs (such as a defendant’s address) with the likelihood of committing a crime. This produces more false positives within a certain group without any bias being deliberately built into the algorithm. In fact, the article from Nature argues that the imbalance may stem from an imbalance in arrests in the first place, which would mean this ‘pre-existing societal bias’ is then reinforced.

However, when groups are arrested at different rates, it is impossible for a score to satisfy every definition of fairness at once – achieving perfect ‘predictive parity’ forces a trade-off elsewhere. According to research done by a couple of professors, risk scores can be ‘equally predictive or equally wrong’ for both groups, and the reason comes down to the difference in base rates – the different number of recorded crimes in each population – which points to an issue in the policing itself. This is the computer-science saying GIGO (garbage in, garbage out) in action: if you put inadequate data in, the results will be no better. Therefore, if an AI is fed data that contains human biases, its outputs will reflect those biases.
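
To convince myself of this trade-off, I worked through the standard identity that links a group’s base rate to its false-positive rate when a score is ‘equally predictive’ (same positive predictive value and hit rate) for both groups. The numbers below are hypothetical and chosen only to illustrate the effect – they are not the real COMPAS figures.

```python
# Why equal predictiveness + different base rates => unequal false positive rates.
# All numbers are hypothetical, not the real COMPAS statistics.
#
# Identity: FPR = base_rate/(1 - base_rate) * (1 - PPV)/PPV * TPR
# where PPV is the share of 'high risk' labels that are correct and
# TPR is the share of actual reoffenders who are labelled 'high risk'.

def false_positive_rate(base_rate: float, ppv: float, tpr: float) -> float:
    """False positive rate implied by a group's base rate and the score's PPV/TPR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

ppv, tpr = 0.6, 0.7  # hypothetical: the score is equally 'good' for both groups

for group, base_rate in [("A", 0.5), ("B", 0.3)]:  # hypothetical base rates
    print(f"Group {group}: FPR = {false_positive_rate(base_rate, ppv, tpr):.2f}")

# Group A: FPR = 0.47
# Group B: FPR = 0.20
```

Even though the score is equally ‘correct’ in both groups, the group with the higher recorded base rate ends up with more than double the false-positive rate – which is exactly the kind of disparity ProPublica reported.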

With regard to this particular issue, I think it would be sensible to conclude that the algorithm does not create inequality but helps to perpetuate existing inequality; the solution appears to lie in tackling the inequality in policing itself. Regardless, I think this research into the use of COMPAS in the US is an interesting case which shows that it is not the algorithms themselves that are at fault. Furthermore, being aware of the issue means steps can be taken to adjust for the bias, which is better than using the system without investigation. I think this will make an interesting example for discussion in my project, especially as the sources are fairly recent and I have read about COMPAS from multiple perspectives, giving me as broad an idea as possible of how it works.

 

‘Automating Inequality’: Algorithms In Public Services Often Fail The Most Vulnerable

https://www.npr.org/sections/alltechconsidered/2018/02/19/586387119/automating-inequality-algorithms-in-public-services-often-fail-the-most-vulnerab?t=1529693037035

This article begins with a powerful story about how Omega Young’s welfare payments were cut off by an automated system used to determine welfare eligibility. This meant she was unable to afford her medication or rent, and she died before her case was reviewed. Though the system is no longer used, this is yet another example of how dependence on automated systems can cost lives. According to the author of ‘Automating Inequality’, big data systems do not remove bias but ‘simply move it’. Though I am still keen to explore the impact of algorithmic decision-making within the UK, the three stories in this article about how algorithms in public services have damaged people’s lives make me want to explore these issues within the US further and to look more closely at Virginia Eubanks’ work, which directly links to my hypothesis.

 

Rise of the racist robots – how AI is learning all our worst impulses

https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

While searching for uses of big data within the UK, I came across this recently written article. It describes hopes that machine learning could be used to help with employment centres, pension funds and sorting through revenue and customs casework.

The article highlights how essential it is that these programs are free of human biases; otherwise all we are doing is automating our biases rather than eliminating them. Joanna Bryson sums this up perfectly: “People expected AI to be unbiased; that’s just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things”.
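
Bryson’s point can be shown with a toy example of my own (the data is entirely invented): a ‘model’ that simply learns approval rates from past human decisions will reproduce whatever bias those decisions contained.

```python
# Toy illustration of "automating our biases": a model trained only on past
# human decisions reproduces the pattern in those decisions. Data is invented.

from collections import defaultdict

# Hypothetical historical decisions: (postcode_area, decision).
# The humans making them were, hypothetically, biased against area "B".
history = ([("A", "approve")] * 80 + [("A", "reject")] * 20
           + [("B", "approve")] * 30 + [("B", "reject")] * 70)

# "Training": record the approval rate observed for each postcode area.
counts = defaultdict(lambda: [0, 0])  # area -> [approvals, total]
for area, decision in history:
    counts[area][1] += 1
    if decision == "approve":
        counts[area][0] += 1

def predict(area: str) -> str:
    """Approve if past decisions approved this area at least half the time."""
    approvals, total = counts[area]
    return "approve" if approvals / total >= 0.5 else "reject"

print(predict("A"))  # approve
print(predict("B"))  # reject – the historic bias is now an automated rule
```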

I found this piece to be largely a repeat of what I have already researched, as it refers to examples such as COMPAS that I have already looked at, so it was not especially useful.

 

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor

This book was published only a few months ago and links closely to my research. The information is up to date and well researched, and I found it incredibly insightful. The only limiting factor is that it details only the ways in which algorithmic decision-making is used within the US, but it is still a highly relevant read. Eubanks describes the high-tech systems used to manage poverty in the US as a ‘digital poorhouse’, arguing that they have been used in ways that worsen the divide. She provides examples of social service systems within the US (Indiana, Los Angeles and Allegheny County) to underline the failure of these computer systems, and I found these to be interesting case studies.

I think this provides some useful ground for exploration in my essay, bringing to light how prioritising efficiency and cutting costs can damage welfare and society, driving people further into poverty.
