15.04.2018 || RESEARCH AND SOURCE EVALUATION 6 (Facebook)

How Facebook ‘likes’ predict race, religion and sexual orientation

Tinker, B. (2018). What you ‘like’ on Facebook says a lot about you. [online] CNN. Available at: https://edition.cnn.com/2018/04/10/health/facebook-likes-psychographics/index.html [Accessed 15 Apr. 2018].

People are more than willing to provide their information to social media platforms without considering how it will be used. Perhaps they simply aren’t concerned, but some people believe that users have a right to know how their data will be used regardless. According to a study published in the Proceedings of the National Academy of Sciences (PNAS), a model built on big data can know you better than a friend does with 70 of your Facebook likes, and better than your spouse with just 300, and it can predict which political party you support with 85% accuracy. This is the strategy Cambridge Analytica used in Trump’s ‘whisper’ campaign, which micro-targeted individuals based on their psychographic profiles.
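To make the PNAS finding a little more concrete: prediction from likes can be framed as a simple logistic model over binary ‘liked this page or not’ features. The sketch below is purely illustrative; the page names and weights are invented stand-ins for coefficients a model would learn from many labelled profiles, and the actual study fitted its models to large volumes of real data rather than hand-picking values.

```python
import math

# Hypothetical pages and hand-picked weights, standing in for coefficients
# a model would learn from thousands of labelled profiles.
WEIGHTS = {
    "PageA": 1.2,   # liking this page pushes the predicted trait up
    "PageB": -0.8,  # liking this page pushes it down
    "PageC": 0.5,
}
BIAS = -0.3

def predict_trait_probability(liked_pages):
    """Score a user's likes with a logistic model: sigmoid(w . x + b)."""
    score = BIAS + sum(WEIGHTS.get(page, 0.0) for page in liked_pages)
    return 1.0 / (1.0 + math.exp(-score))

# A user whose likes carry positive weights scores well above 0.5,
# while negatively weighted likes pull the estimate down.
print(predict_trait_probability({"PageA", "PageC"}))
print(predict_trait_probability({"PageB"}))
```

Each extra like nudges the score, which is why accuracy in the study kept improving as the number of observed likes grew from 70 towards 300.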

Summers compares sharing your data online to ‘leaving digital breadcrumbs’: data points that feed into companies’ targeting strategies. The PNAS research really opened my eyes to the power of big data and how aggregating the right data points can generate so much insight into an individual. At the same time, however, I think this simply represents a shift in the dynamics of marketing. In the past, adverts were broad and tailored to a large, general demographic, as they were broadcast on TV and displayed on billboards. The Internet has opened up a whole new avenue for marketers to tailor their adverts while minimising costs, by micro-targeting individuals who are, say, on the verge of buying a product. I don’t think this is necessarily a bad thing, as it saves consumers a lot of fuss when they are shown things they would actually be interested in.

At the same time, there is a danger of people only being shown what they are interested in, without being aware of everything they are not shown, which many people consider problematic when it comes to news. A book I read pointed out that, as citizens, we should be aware of what goes on around the world, and to an extent have a ‘civilian duty’ to be informed; this shouldn’t depend on an algorithm. There are also problems in the way this data can create an imbalance of power, as seen in the lead-up to the election. This may be considered manipulative and unethical, particularly because users were not aware that the advertising was tailored towards them and had no option of opting out.


The shady data-gathering tactics used by Cambridge Analytica were an open secret to online marketers. I know, because I was one.

Samuel, A. (2018). The shady data-gathering tactics used by Cambridge Analytica were an open secret to online marketers. I know, because I was one. [online] The Verge. Available at: https://www.theverge.com/2018/3/25/17161726/facebook-cambridge-analytica-data-online-marketers [Accessed 13 Apr. 2018].

In 2014, Cambridge Analytica hired a researcher to gather information on Facebook users. This researcher, Aleksandr Kogan, developed an app named ‘This Is Your Digital Life’, which 300,000 Facebook users downloaded, most of whom were paid. This gave Kogan access to data not only about those users but also about their Facebook friends, since Facebook allowed apps offered on its platform to access information about users’ friends, depending on those friends’ privacy settings.

Kogan claimed to be gathering the data for ‘research’ but in fact sold it to Cambridge Analytica, a company that claims to ‘use data to change audience behaviour’ both politically and commercially. Facebook discovered this in 2015, deleted the app, and told Kogan and all parties to whom the data had been sold to delete it, which Cambridge Analytica did not do. Instead, the data was used to generate psychographic profiles to micro-target voters on Facebook and other online services. Cambridge Analytica claims to have between four and five thousand data points on every individual in the US.

However, in this article, Alexandra Samuel claims to be unsurprised by these revelations, noting that this is not uncommon in marketing. She describes it as a ‘well-known way of turning a handful of app users into a goldmine’, suggesting that it is common practice. In fact, marketing is all about creating psychological profiles to alter behaviour. The difference in this case is that the data collection was disguised as a survey or quiz, without its true purpose being revealed.

What is clear is that, with the right data points and psychographic profiling, there is potential to achieve unimaginable things. In some cases this might be efficient and a great tool for marketing, maximising turnover and reducing costs by eliminating the need for other forms of communication and advertising. However, I do think there are instances in which it is not acceptable, especially in this case, where people who never downloaded Kogan’s app were affected simply because they were friends with someone who did. At the same time, this could be seen as a problem with Facebook itself and the level of access it granted to developers. Regardless, I believe this emphasises why corporations, even larger ones like Facebook, ought to be regularly audited, with more stringent regulation in place. The writer points out, however, that companies have ‘little incentive’ to be transparent, as this is how they generate profit; since this is a question of ethics, government intervention appears to be the only solution.

I have found this to be an interesting case, and it adds a different perspective on inequality to my project. As a result of this mass data collection, individuals were targeted with ads based on their psychographic profiles, which would have created information asymmetry and influenced decisions they may not otherwise have made. This is probably best described as ‘information inequality’, which was explored in another source I read recently. I would like to explore further in my essay how algorithmic profiling like this can lead to inequalities in information.
