The readings for this class add another layer to the conversation we’ve been having about social media, cell phones, and Google: the code that underlies their operation. This code – as we will talk about with the term algorithm – is written by people, with certain purposes in mind. An algorithm is just a recipe that tells a computer of some kind what to do when different scenarios arise, weighing many different attributes of the scenario along the way. Google’s search algorithm takes over 100 different factors into account when deciding which links to list in your search results, and even more if you are signed into Google, so it can tailor results to you based on your search history. If you search on one of Google’s Android devices (as almost 55% of users do) or through Google’s Chrome browser (used by 63% of internet users worldwide), then an algorithm is processing still more factors before it decides what results to show you – and especially, which advertisements you might want to see. Whatever else Google does with its proprietary algorithms, it prioritizes using them to auction and sell advertisements: last year, online advertising made up almost 87% of Google’s roughly $111 billion in revenue. The company may claim its algorithms simply help organize the world’s information*, but it should follow that with the disclaimer “*For Advertisers.” Google, Facebook, and countless other digital companies have proven you can do amazing things with algorithms – much like Domingos imagines in our reading for this week.
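To make that “recipe” idea concrete, here is a deliberately toy sketch in Python. It is not Google’s actual algorithm (whose factors and weights are proprietary); the factor names, weights, and example pages are all invented for illustration. It simply shows how a ranking algorithm might combine several attributes of each result into one score and sort by it.

```python
# A simplified, hypothetical ranking "recipe."
# The factor names and weights below are invented for illustration;
# Google's real algorithm and its 100+ factors are proprietary.

def score(page, weights):
    """Combine a page's factors into a single number, as the weights dictate."""
    return sum(weights[factor] * value for factor, value in page["factors"].items())

def rank(pages, weights):
    """Order search results by score, highest first."""
    return sorted(pages, key=lambda p: score(p, weights), reverse=True)

pages = [
    {"url": "a.example", "factors": {"relevance": 0.9, "freshness": 0.2, "ad_value": 0.1}},
    {"url": "b.example", "factors": {"relevance": 0.6, "freshness": 0.8, "ad_value": 0.9}},
]

# Whoever sets the weights decides what "best" means:
weights = {"relevance": 1.0, "freshness": 0.5, "ad_value": 2.0}
for page in rank(pages, weights):
    print(page["url"], round(score(page, weights), 2))
```

Notice that nothing in the sorting code itself favors advertisers; that priority lives entirely in the weights someone chose. This is the point we will return to below: different people, making different choices, could code something different.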
But, as Virginia Eubanks, Zeynep Tufekci, and Safiya Noble discuss, algorithms are not merely automated computer programs: they are code written by humans for certain purposes. What effect does the prioritization of commercial interests and ideologies have on the algorithms used by Google or Facebook? How are our preexisting and widespread biases around race, gender, or class amplified by the algorithms they use to sort the information we see? And if artificial intelligence is mostly learning how to think by watching us, what does it mean that users so frequently end up training their robots or chatbots to be racist?
I know that we have had a bit of whiplash moving from last class to this one – from the moral panic over cell phone use to the more sociological and critical cultural explanations of networks, virtual communities, and emergent online teen life. We will have a similar dynamic in this week’s readings. Make sure to look at the Domingos reading, because he reminds us of the ideal of what could be done if algorithms were used only for social good. Then again, Virginia Eubanks would argue that this depends on what you mean by “social good”: she explores the use of algorithms in managing welfare benefits, housing for the homeless, and child welfare agencies, finding that many of the traits their software flags as suspect are basically the traits of people in need, criminalized by a culture that deeply despises the poor. Tufekci and Noble help illustrate how these human-crafted algorithms contribute to problems in our Google searches and Facebook feeds – problems, as we’ll explore next week, that led to the explosion of fake news and dark ads in the run-up to the 2016 election.
The point here is not to say everything about these platforms sucks, but that the ways that these platforms suck are not inherent to the algorithms themselves: the suckiness is something people chose to create; different people, making different demands, could code something different.
Beyond your overall response to the readings and videos: how do you understand algorithms after reading this? Do you recognize these and other places where they may affect what you see, what you know, and even what you do? Do you think it is possible to have algorithms produce culturally and socially beneficial information without reproducing already existing hierarchies and prejudices? Would that be desirable?
READINGS:
Pedro Domingos, The Master Algorithm, ch. 1
Virginia Eubanks, Automating Inequality, Introduction and ch. 5
Zeynep Tufekci, “The Real Bias Built In At Facebook” and “What Happens to #Ferguson Affects Ferguson” (https://medium.com/message/ferguson-is-also-a-net-neutrality-issue-6d2f3db51eb0)
WATCH: