12.04.2018 || Research and Source Evaluation 4

Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans

Mary Madden, Michele Gilman, Karen Levy, and Alice Marwick, Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans, 95 Wash. U. L. Rev. 53 (2017).

Available at: https://openscholarship.wustl.edu/law_lawreview/vol95/iss1/6

This is a law review article on big data and how it disadvantages the poor in America. I found it very insightful, and it is well cited throughout, with clearly referenced data. Having been published relatively recently, the information it provides is up to date and relevant, though some of the policies and data protection laws it mentions are specific to the US, so I will have to do further research to find out how the UK differs.

In this article, the authors discuss how poorer people may be disadvantaged as a result of big data. For example, employee tracking has become increasingly common in recent years, and this might discourage poor people with insufficient knowledge of privacy from accessing help if the need arises. According to the article, poor Americans are more susceptible to tracking because they are less informed about managing privacy settings. Statistically, identity theft therefore affects the poor more, and it can have damaging long-term consequences such as lowered credit scores and reputational harm.

Though big data undoubtedly provides wider insight and the ability to spot trends in complex problems, there is a concern that it can lead to people being mischaracterised, with the resulting profiles then used to market predatory services such as payday loans. The 'black box' nature of some of the algorithms used in big data means they often go unquestioned.

In the authors' survey, 52% of the poorest respondents expressed concern about not knowing what information is collected about them. This highlights that consumers often lack the knowledge to use online services in a way that keeps their personal data private, and this is especially common amongst the poor. In fact, the article notes that the legal system relies largely on people to 'police their own privacy', as there is limited regulation in place, largely because privacy is very difficult to regulate. Even if information is found to be incorrect, it is often difficult for individuals to get it corrected.

Another way in which big data analytics may disadvantage poorer people is through Applicant Tracking Systems, which are increasingly used to determine employability. These work by mining applicants' social media profiles, estimating their 'social capital' from their networks and connections, and generating an insight into their personality. According to the survey conducted by the authors, poorer people are less likely to use privacy settings and are less aware of how much of their data can be publicly accessed.

Fundamentally, this source highlights that it is not that the standards themselves are deliberately discriminatory, but that they can have a disparate impact. The article says 'big data discrimination is likely to be unintentional, as data mining involves statistical correlations that do not require conscious efforts to target specific groups'. The discrimination is more likely to arise from historic patterns of discrimination that originally created divides between rich and poor and between people of different backgrounds. For example, policing across the US is not consistent, and more arrests occur within minority communities, which means algorithms begin to associate race with crime unless this is monitored and corrected.
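To make this last point concrete for my project, I tried a tiny simulation of my own (the numbers and code are entirely my invention, not from the paper): two neighbourhoods with the same true offending rate, but one patrolled far more heavily, so its arrest records, and therefore the 'risk' a naive model learns from them, end up higher.

```python
# Toy simulation (my own hypothetical numbers, not from the paper): two areas
# with identical true offending rates, but area A is patrolled far more heavily,
# so its arrest records, and any "risk" learned from them, come out higher.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # people per neighbourhood
true_offending_rate = 0.05                   # identical in both areas
patrol_intensity = {"A": 0.9, "B": 0.3}      # chance an offence is observed

learned_risk = {}
for area, patrol in patrol_intensity.items():
    offended = rng.random(n) < true_offending_rate
    # an offence only becomes an arrest record if police happen to observe it
    arrested = offended & (rng.random(n) < patrol)
    # a model trained purely on arrest records treats this rate as the area's "risk"
    learned_risk[area] = round(arrested.mean(), 3)

print(learned_risk)   # roughly {'A': 0.045, 'B': 0.015} despite equal true offending
```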

Analytics tools also draw inferences from sources such as social media, which can reveal traits such as obesity or smoking. Combined with other indicators, this can create a vicious cycle, reducing social mobility and making it harder for low-income earners to progress. Although education can help an individual progress, similar systems are widely used in the US to help determine whether a person gets a place at university, quite apart from the financial burden of attending. This goes to show how data analytics systems, if unquestioned and unregulated, can severely limit social mobility and increase inequality between the richest and poorest in the population. Social media is increasingly being used as a factor in determining suitability for a place, and the concern is how this is judged (networks, connections, online footprint); the black box nature of the algorithms makes this difficult to identify. Again, this is likely to affect the poorer population negatively, especially given the lack of awareness that such algorithms exist, which can further entrench poverty and discourage individuals from progressing. Some analysis even includes who you network with, which is a questionable way to determine your 'value' to a university. There is much scepticism around this sort of 'data determinism'.

After reading this article, I feel that perhaps we put too much faith in computers to make decisions for us. Removing human beings entirely from certain processes could in fact lead to more bias, because of the way in which machine learning works. Big data certainly has many advantages too, which I will evaluate in more detail in my project, along with how they affect different socioeconomic groups. Whereas it is now harder for employers to discriminate based on a person's physical appearance, the algorithms that employers use now make those judgements instead, and they are used to make significant, life-changing decisions. The consequences are especially severe with systems used to determine the 'threat level' of an individual, that is, how likely they are to commit a crime, based on factors such as social media, locality and residence. These could limit an individual's prospects and ability to progress out of low income, which suggests that big data can increase inequality.

 

Data Brokers: A Call For Transparency and Accountability: A Report of the Federal Trade Commission (May 2014)

https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

This Federal Trade Commission report discusses how consumer data is often used without consumers' knowledge. Data brokers aggregate data from a number of different sources (including criminal records, stores and websites) and sell it on to clients. One of the biggest examples is Acxiom, which is believed to hold data on almost every member of the US population. The report describes how this data is often used to infer more about an individual's behaviour or character and to place them into categories (e.g. expectant parent, old-age pensioner), some of which could be deemed sensitive. One example the FTC gives is 'Urban Scramble', a segment made up largely of low-income ethnic minorities.
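As a rough illustration of the kind of rule-based segmentation the report describes, here is a small sketch I have written myself; the attributes, rules and thresholds are my own guesses rather than any broker's actual criteria.

```python
# Hypothetical sketch of rule-based consumer segmentation of the kind the FTC
# report describes; the attributes, rules and thresholds below are my own
# illustrative guesses, not Acxiom's or any real broker's criteria.
def assign_segment(record: dict) -> str:
    if record.get("expecting_child"):
        return "Expectant Parent"
    if record.get("age", 0) >= 65:
        return "Old-Age Pensioner"
    if record.get("income", 0) < 20_000 and record.get("urban"):
        return "Low-Income Urban"   # analogous to sensitive segments like 'Urban Scramble'
    return "General Consumer"

consumer = {"age": 34, "income": 18_500, "urban": True, "expecting_child": False}
print(assign_segment(consumer))     # -> "Low-Income Urban"
```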

The report describes how the collected data can be used both to benefit and to disadvantage the consumer. For example, a consumer may be shown adverts for sugar-free snacks based on the information that they are diabetic. However, the same insight could be used against them by insurance companies, to classify them as a higher-risk individual.

Another concern that the Federal Trade Commission raises in this report is the lack of transparency in offering consumers access to their data: not only do consumers not know what is being collected about them and potentially used against them, they also have little means of opting out of this data collection.

What is obvious, however, is that these insights can be very useful to firms in identifying and protecting themselves against fraud. Some data brokers offer a 'risk' rating based on criminal records, social security numbers and so on, which can be used to verify the legitimacy of a customer. A diagram in the report shows that risk mitigation was the second-highest source of revenue for data brokers in 2012. Though insightful, I will need to find a more up-to-date version of this figure to judge whether the cost of fraud to firms outweighs the cost of the loss of privacy to consumers.

The report mentions several different ways that consumer data is used in marketing, including:

  1. Direct marketing
    1. Data append
    2. Marketing lists
  2. Online marketing
    1. Registration targeting
    2. Collaborative targeting
    3. Onboarding

Again, this is a report from a US agency, so I would be interested to find out whether the same applies within the UK. The report is also a few years old, so I would like to find a more recent or updated version to see whether the scenarios it describes still occur.

 

How Marketers Use Big Data To Prey On The Poor

Taube, A. (2013). How Marketers Use Big Data To Prey On The Poor. [online] Business Insider. Available at: http://www.businessinsider.com/how-marketers-use-big-data-to-prey-on-the-poor-2013-12?IR=T [Accessed 12 Apr. 2018].

This article confirmed what I have read in other sources about lists that are put together to categorise consumers and offer them tailored ads. It points out that, despite the regulation in place to stop data being used to sell debt-related products, weak enforcement and limited industry oversight mean this is not always adhered to. According to the Pew Charitable Trusts, payday loans are prohibited in 14 states, but this does not stop the same individuals being targeted for other services. Data brokers such as Experian compile lists of customers who meet certain criteria, which is how consumers come to be categorised by financial vulnerability.

Though this is certainly an enlightening read, the source provides little evidence as to whether these lists are actually used to target individuals with exploitative advertisements. I would like to look further into whether, four years after this article was written, there is more regulation surrounding the compilation of such lists, and into the situation in the UK.

 

Big Data, Facebook and The Future of Bank Marketing                    

The Financial Brand. (2017). Big Data, Facebook and The Future of Bank Marketing. [online] Available at: https://thefinancialbrand.com/68385/big-data-marketing-advertising-artificial-intelligence/ [Accessed 10 Apr. 2018].

This article discusses how targeted advertisements are not necessarily bad. In fact, they can save people the hassle of consciously searching for things and show them what they might be interested in without any effort on their part. At the same time, this kind of psychographic profiling is also what supposedly enabled Donald Trump to win the US election.

During the run-up to the US election, the digital director of the Trump campaign spent the campaign's budget on Facebook. The reason is that Facebook offers advertising targeted at the things individual voters specifically care about (micro-targeting), rather than general TV ads that cannot be tailored.

This goes to show the huge amount of power that big data can confer, and I think it is a very interesting and relevant example of how targeted advertising is not always a good thing. Though it increases efficiency, it creates huge asymmetries and inequalities in information, and therefore has the capacity to manipulate people, individual by individual, into making decisions that they may not have made otherwise. This may be particularly significant amongst low-income groups because, according to previous sources I have read, they use social media more regularly, making it an ideal platform to reach them and slowly sway their votes. They are also less likely to have privacy settings correctly configured, and are on the whole less confident in doing so. This means more data is available to build a complete profile of them and pinpoint them based on their exact character; and if an individual is not fully educated about or aware of the candidate from other sources, which is more likely amongst low socioeconomic groups, the targeted ads are more likely to sway their vote. This shows how inequalities in information can have a huge impact.

I’d like to discuss the relative merits and drawbacks of targeted advertising in my project, and research further into the Trump campaign and the role of Cambridge Analytica and Facebook within it.

 

To work for society, data scientists need a hippocratic oath with teeth

Upchurch, T. (2018). To work for society, data scientists need a hippocratic oath with teeth. [online] Wired.co.uk. Available at: http://www.wired.co.uk/article/data-ai-ethics-hippocratic-oath-cathy-o-neil-weapons-of-math-destruction [Accessed 10 Apr. 2018].

Cathy O’Neil argues that accuracy and efficiency are not the most important factors in an algorithm; fairness is essential too. I agree that consideration has to be given to the privacy of data, and that there needs to be greater awareness of how our data is used. The public should have more control over their own data.

O’Neil also suggests that algorithms must be audited to prevent them from making discriminatory decisions, of which there is evidence in UK policing. She proposes the idea of a 'Hippocratic oath', a code of conduct that data scientists must adhere to, because she believes there is a wide gap between data scientists and the general population affected by their systems, meaning designers do not consider the wider impact of their algorithms on society. From her recent experience, the people who build such systems do not always consider themselves responsible for the consequences, and do not always question the fairness of the proxies used to make decisions (for example, locality can correlate strongly with race).

She outlines some features that are essential to an algorithm: it must be an improvement on the process it is intended to replace, which is surprisingly often not the case; it must work for more than a select group of people; and it must add to society rather than cause harm. The problem, however, is that companies seek to maximise profit, and this often clashes with ethical standards. In the US specifically, the party in power benefits from the way its policies are promoted across social media, meaning that an official code of conduct and regulation are unlikely in the coming years.

 

UK police are using AI to inform custodial decisions – but it could be discriminating against the poor

Burgess, M. (2018). UK police are using AI to inform custodial decisions – but it could be discriminating against the poor. [online] Wired.co.uk. Available at: http://www.wired.co.uk/article/police-ai-uk-durham-hart-checkpoint-algorithm-edit [Accessed 10 Apr. 2018].

Durham Constabulary has worked alongside computer science academics to produce an intervention tool, the Harm Assessment Risk Tool (HART), that predicts the risk of suspects committing repeat offences. It uses data such as age, gender, address and past offences.

However, the tool is being revised to remove the postcode data, because it reinforces 'existing biases' within the judicial system: future models might otherwise pinpoint those areas more specifically. This is a problem because the algorithm would then profile individuals from those particular areas as higher risk, even when that is not the case.

The algorithm used by Durham Constabulary is an example of a black box, as it makes use of over 4.2 million data points. There is wide scepticism about whether these sorts of 'opaque' algorithms are acceptable; the police force says it would not be in the public interest to reveal the model, though they are open to it being audited. There are still questions about whether algorithms should be used to make predictions and thereby inform decisions, especially within policing.
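Thinking about what such an audit might look like in practice, here is a minimal sketch I put together myself; the data and the stand-in scoring function are invented, since HART's internals are not public, but it shows how an opaque model can still be checked from the outside by comparing how often it labels people from different areas as high risk.

```python
# Sketch of an outside audit: the model stays a black box, but we can still
# compare how often it labels people from different postcode areas as high risk.
# The data and the stand-in scoring function are invented; the real model's
# inputs and weights are not public.
from collections import defaultdict
import random

random.seed(1)

def opaque_risk_model(person: dict) -> str:
    # stand-in for the real model: this one (unfairly) leans on postcode area
    score = person["prior_offences"] * 2 + (3 if person["area"] == "A" else 0)
    return "high" if score >= 4 else "moderate"

people = [{"area": random.choice("AB"), "prior_offences": random.randint(0, 3)}
          for _ in range(1000)]

flags_by_area = defaultdict(list)
for p in people:
    flags_by_area[p["area"]].append(opaque_risk_model(p) == "high")

for area, flags in sorted(flags_by_area.items()):
    print(area, round(sum(flags) / len(flags), 2))   # share labelled high risk per area
```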

This is the first recent case of algorithmic decision-making within the UK that I have read about. I think it is difficult to make predictions about a human being's future actions and behaviour, regardless of whether an algorithm is used. Algorithms cannot be relied on alone to make judgements; human assessment must be part of the process.

 

Can an algorithm hurt? Polish experiences with profiling of the unemployed

Niklas, J. (2017). Can an algorithm hurt? Polish experiences with profiling of the unemployed – Centre for Internet & Human Rights. [online] Cihr.eu. Available at: https://cihr.eu/can-an-algorithm-hurt/ [Accessed 11 Apr. 2018].

I found this a really interesting read, as most of my research so far has involved systems in the UK and the US. The article describes how algorithmic decision-making is used within Polish job centres to sort the unemployed, with the aim of increasing job centre efficiency and tailoring services to each individual.

Essentially, surveys and interviews are used to gather 24 data points about each individual, such as their age, any disability, and the foreign languages they speak. These are then used to determine the level of assistance a person can access.
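To picture how a points-based profiling system like this might work, here is a small hypothetical sketch of my own; the weights, cut-offs and category names are invented, which is fitting given that the article's whole complaint is that the real scoring rules are not transparent.

```python
# Hypothetical points-based profiling, loosely in the spirit of the Polish tool;
# the weights, cut-offs and category names are my own inventions, not the real
# (undisclosed) scoring rules.
def assistance_category(answers: dict) -> str:
    score = 0
    score += 2 if answers.get("age", 0) > 50 else 0
    score += 3 if answers.get("disability") else 0
    score -= 2 * len(answers.get("foreign_languages", []))
    score += 2 if answers.get("months_unemployed", 0) > 12 else 0
    if score <= 0:
        return "Category A: minimal assistance"
    if score <= 4:
        return "Category B: standard assistance"
    return "Category C: specialised assistance"

applicant = {"age": 55, "disability": True, "foreign_languages": [],
             "months_unemployed": 18}
print(assistance_category(applicant))   # -> "Category C: specialised assistance"
```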

The main issue with this profiling tool is that it plays a significant part in an individual's life, yet there is a lack of transparency about the algorithm, meaning that the people affected do not know which aspects of their life are limiting the support they get. This may stop certain groups from progressing and may leave them stuck in relative poverty. There may be assumptions embedded in the software, and the article mentions that it might affect people belonging to 'vulnerable groups' more, as the data points used include age, gender and disability, but there is limited awareness of how.

Regardless of how the information is used, I think removing human beings and depending entirely on computers can be problematic. Particularly in the case of a job centre, whose aim is to help the unemployed, this sort of algorithmic decision-making can prevent people from 'climbing the economic ladder', so to speak.

 

Google: Payday Loans Are Too Harmful to Advertise

https://www.theatlantic.com/business/archive/2016/05/google-payday-loan-ads/482340/

 

Following some research into the regulation of payday loans, I have discovered that Google has banned payday loan advertising, having deemed these loans to do more damage to the population than good. This indicates a level of corporate responsibility, which is essential in an era of digitalisation. It is significant especially for the poorer members of the population, who are more likely to look online for a quick fix to a financial problem, and for whom these sorts of loans can only worsen their situation and push them deeper into poverty. Measures like this can help fight inequality and bridge the gap between rich and poor. However, so far only Google has taken this step, so although it has a very large presence, there is still more to be done.
