
The Age of Algorithm

Curator Caterina Riva interviews UC Berkeley scholars Massimo Mazzotti and Shreeharsh Kelkar on algorithmic systems, more relevant than ever after the rise of COVID-19.


Elisa Giardina Papa, Technologies of Care, 2016. Installation view, detail, XVI Quadriennale d’Arte, Palazzo delle Esposizioni, Rome, 2017. Courtesy the artist.

Editor’s Note:


It’s been almost a year and a half since our inaugural Issue 1: Smart Cities closed with a postscript written by privacy activist Malavika Jayaram titled “Privacy Is Not an Elitist Concern”. Since then, the world has turned upside down, and more than ever, data privacy has inevitably (and perhaps even justifiably) been sacrificed in the name of survival and health. No one, it seems, can now mount a cogent argument against contact tracing and total digital surveillance in the face of a deadly disease. Since the global situation has so dramatically created an overwhelming urgency around these issues, the SO-FAR editorial team felt it was best to reprise Issue 1 with a second postscript on creating fairer algorithmic systems, this time presented by curator Caterina Riva.


A variety of meanings have been attributed to the ubiquitous word “algorithm”[01] as it gradually moves away from its original definition of “a procedure for solving a mathematical problem”. In order to better understand its transformations and uses, I spoke to the person who has referred to our epoch as “the age of the algorithm”[02]. Massimo Mazzotti is a historian and sociologist of science at the University of California, Berkeley, where he is also the Director of the Center for Science, Technology, Medicine & Society (CSTMS)[03]. Mazzotti connected me with Shreeharsh Kelkar, who is a lecturer in the Interdisciplinary Studies program and affiliated with the Algorithmic Fairness & Opacity Working Group (AFOG), whose mission is to develop “policy recommendations around issues of fairness, transparency, interpretability, and accountability in algorithms and algorithm-based systems”[04]. In this interview, we identify algorithms as tools and look at some of their implications, including profiling and record-keeping.



Elisa Giardina Papa, Cleaning Emotional Data, 2020. Installation view, Algotaylorism, Kunsthalle Mulhouse, France, 2020. Courtesy the artist.


CR:   You have used the phrase “the age of the algorithm” to describe the times we are living in. Can you guide me through the different meanings the word algorithm has taken?


MM:   To say that we now live in “the age of the algorithm” means to recognise that algorithms are reshaping the social reality we live in, but it also means to acknowledge the role of the figure of the algorithm in our language and the way it has captured our imagination. Nowadays, when we think about the future, we think about algorithms. This is not something entirely new in the history of technology; other artefacts have played a similar role in the past. Think about the early modern precision clock, for example. In Newton’s time, it was understood as a small-scale model for the entire universe and, at the same time, it transformed the world by making time scientifically measurable (and priceable) to an unprecedented extent. Like these precision clocks, algorithms today are both tools for thinking about the world and tools for transforming it.


Algorithms now act. They do things. They have broader effects on other systems. We can now say that “algorithms drive cars”, and thus imagine algorithms as agents.


The word “algorithm” has a fascinating semantic trajectory. For a long time it was a fairly unexciting, technical term that referred to a set of instructions, a standard procedure for the solution of a particular kind of problem — a recipe, if you like. In this sense, there were algorithms for finding the positions of celestial bodies at a given time in a given place. The concept of the algorithm thus existed long before the term itself, which originated in the practice of medieval Islamic algebra, and it was by no means limited to the sphere of mathematics. By the early modern period, an algorithm was associated with formal calculus and mechanical procedures — something that could be implemented in an artificial device or followed by a human who behaved in a machine-like fashion. The digital algorithm is thus the most recent outcome of a longue durée[05] historical process. This is an algorithm that can be translated into strings of code that, in turn, can run on an electronic computer. But what I find most fascinating is that we don’t use the term in this restricted sense anymore. Algorithms now act. They do things. They have broader effects on other systems. We can now say that “algorithms drive cars”, and thus imagine algorithms as agents.


CR:   What kind of work do you do at CSTMS?


MM:   At CSTMS we have been studying the social dimension of algorithms since 2016, with a long-term project that has so far produced two major conferences and a number of publications. Our interest is twofold: we explore the ecology of algorithms — the network of socio-technical relations within which algorithms exist and function — and their genealogy — the long-term historical processes that have produced algorithms as artefacts and ways of thinking. We are also very interested in how scholars from the humanities, and artists who use a range of media, are exploring algorithms and algorithmic procedures in their work.


There seems to be an inevitable trade-off here: power versus understanding.


CR:   Massimo, in your essay “Algorithmic Life”, which appeared in the Los Angeles Review of Books in 2017[06], you talk about the “black box” nature of technology, where the dataset is hidden, inaccessible and unknowable to the user. What is the link between the abstraction of a data-led approach and its real effects as experienced in the world?


MM: This is one of the most intriguing points about digital algorithms. Those who work on their social significance often remark that they are very difficult objects to study, and not only because of proprietary issues. Some say that they are paradoxical and maybe, ultimately, unknowable. Unlike a mechanical clock, an algorithm is an object whose materiality is liminal, difficult to capture, and whose boundaries, as we have seen, are often blurred. Algorithms are embedded in our social world in a different way from, say, clocks.


Also, algorithms are easily “black-boxed”[07], meaning that we tend to bracket their inner workings and take them for granted, as if they were obviously reliable and transparent technologies. But they are not, for a number of reasons. You mentioned datasets as a source of opacity. In fact, the structure of the dataset mined by an algorithm can be so opaque that simply reading the code of the algorithm does not allow us to understand it. Another source of opacity is the way an algorithm can be modified through machine learning. This might result in a more accurate algorithm, but one whose architecture and logic are incomprehensible to a human being. There seems to be an inevitable trade-off here: power versus understanding.


All algorithmic systems draw on contingent, flexible, just-in-time human labour, which is either high-paid or low-paid, outsourced or local.


SK: Paul Dourish writes about the various meanings of the algorithm[08]: for engineers and computer scientists like him, an algorithm is the opposite of code — if code is the lines of the programme they write, the algorithm is the underlying logic of that programme. “Algorithm” is also used as the opposite of human judgment, in the sense of automation, and the adjective “algorithmic” often means the opposite of transparent or explainable.


A better term for “algorithm” might be “algorithmic systems”, which are made of algorithms, people (workers and users), rules and norms. As more and more institutions restructure themselves around algorithmic systems, they are reshaping the lives of the people who — in various capacities — interact with these systems.


My colleague Jenna Burrell has written an important paper detailing the ways in which algorithmic systems can be opaque[09]. In a paper I recently published with another colleague, Benjamin Shestakofsky[10], we describe how platform companies use algorithmic systems to manage their users, employing both humans and machines to do so. All algorithmic systems draw on contingent, flexible, just-in-time human labour, which is either high-paid or low-paid, outsourced or local. It is best to see algorithmic systems, or artificial intelligence (AI) broadly, as an assemblage — a reconfigured version of human-machine relations in which humans are constructed, through digital interfaces, as flexible inputs and/or supervisors of software programmes that in turn perform a wide variety of small-bore, high-intensity computational tasks involving the processing of large amounts of data and the computing of statistical similarities. It is this reconfigured assemblage that promises to change our workplaces and our lives, rather than any specific technological advance.



Elisa Giardina Papa, Cleaning Emotional Data, 2020. Still frame from video. Courtesy the artist.


CR: You are — at least geographically — very close to Silicon Valley, and we have seen how big tech corporations have expanded their reach and profits even further during the COVID-19 pandemic. Besides privacy, do you imagine the next battlefield of digital control will centre around health?


MM: As I see it, the pandemic has not brought about anything particularly new in this respect, but it has definitely accelerated certain ongoing processes. The leading tech corporations have long been providing us with tools for physical distancing, and now the pandemic makes many of them suddenly necessary.


This makes me think of another turning point in the recent history of the relation between technology and society: 9/11. That crisis made it possible to turn massive data harvesting into a necessity. More importantly, it validated a certain logic of data-harvesting at a time when most data could not even be usefully analysed. That was a state of exception that transformed our notion of privacy. Today we see the intersection of this trajectory of surveillance technologies with new bio-political practices. One problem I see is that we tend to see this as a series of “crises” (“privacy crisis”, “health crisis”), rather than recognise and try to address their long-term, structural causes.


It’s remarkable how quickly we can get used to the idea of a “permanent record”, and accept it as something inevitable. If there is one thing I have learned from the history of technology, however, it is that there are no inevitable, pre-determined trajectories.


CR:   In the 1995 film The Net, the character played by Sandra Bullock is a systems analyst whose social life consists mostly of interactions with her landline phone and bulky computer, until she is sent a floppy disk granting access to a piece of security software called Gatekeeper. From then on, not only is her life turned upside down, but all her records are deleted — she no longer exists virtually. In 2020, the problem is somewhat inverted: we have too many traces of ourselves scattered online, and we cannot erase the content we would like to bury. What are your thoughts on what Edward Snowden calls a “permanent record”[11]?


MM: It’s remarkable how quickly we can get used to the idea of a “permanent record”, and accept it as something inevitable. If there is one thing I have learned from the history of technology, however, it is that there are no inevitable, pre-determined trajectories. They look like that only retrospectively. At the height of the arms race, one of my teachers, the sociologist Donald MacKenzie, wrote an article on how to “uninvent” nuclear weapons[12].


While not devoid of irony, the article made a very serious point: the technological landscape in which we all live is the outcome of collective choices, not of necessity. This means that it can be reshaped based on new priorities and new ways of imagining our future. I’d like to connect with that expression of hope. Far from being naïve, it turned a sophisticated understanding of the social dynamics of technological change into a message that was empowering and open-ended.


SK: There has been a lot of scholarly discussion on how this permanent record becomes the basis of producing knowledge about users, which is then used to shape users’ behaviours to lead to more profits. In her book, Shoshana Zuboff calls this “surveillance capitalism” and details the ways in which predictive algorithms lead to the curtailment of human freedoms[13].


I want to emphasise a different concern, which gets underplayed when all the focus is on predictive algorithms. Google wants to index all the information in the world and Facebook wants to make all activities social. These aspirations seek to dismantle the boundaries between activities so that everything is mediated through those platforms: the distinctions between social/commercial, expert/non-expert, political/non-political, personal/public are all blurred. To twist a phrase coined by the legal theorist Helen Nissenbaum: the goal today (of platform companies like Google and Facebook, but also of law enforcement and other government institutions) is to blur, or even exterminate, the contextual integrity of particular activities[14]. To take your example, what kinds of records should be considered relevant to health? This should be a matter of public discussion rather than being left to particular actors like platform companies, insurance companies or hospitals.


They are intrinsically socially toxic, regardless of the intentions of their makers. They would need to be regulated so strictly that they should be banned for almost all practical purposes.


CR:   In terms of surveillance technology, governments around the world have sped up processes which allow for more invasive implementation. I was reading an article in The New York Times[15] about Clearview AI, an American technology company which provides a research tool with facial recognition capabilities for law enforcement agencies to identify perpetrators and victims of crimes[16]. Is there some literature or research you think is useful in navigating these issues?


MM: I’m very wary of facial recognition technologies. I believe that their potential benefits are vastly outweighed by their risks. There are two main orders of problems to consider. The first is technical, connected to the actual design of these technologies: there are major flaws in the process through which human faces are schematised and categorised. This problem was illustrated effectively in an impressive exhibition, Training Humans (2019), created by Trevor Paglen and Kate Crawford, which emphasised the gender and racial bias embedded in the standard datasets used for AI training[17]. The second, connected to the first, is the set of pervasive practices of mass profiling and surveillance enabled by facial recognition technologies. My colleague Luke Stark aptly analogises them to hazardous nuclear technologies[18]: they are intrinsically socially toxic, regardless of the intentions of their makers. They would need to be regulated so strictly that they should be banned for almost all practical purposes.



Elisa Giardina Papa, Cleaning Emotional Data, 2020. Installation view, Algotaylorism, Kunsthalle Mulhouse, France, 2020. Courtesy the artist.


SK:   The most useful research I’ve found on how law enforcement uses surveillance technologies (including but not limited to facial recognition) is Sarah Brayne’s paper “Big Data Surveillance: The Case of Policing”[19]. Brayne was embedded within the Los Angeles Police Department (LAPD) for more than a year and was able to observe closely how officers use different sorts of surveillance technologies. She found that officers are less likely to use technologies that infringe on their autonomy, so they tend to downplay algorithmic risk assessment scores (which indicate who is likely to be dangerous or an immediate threat) because these might conflict with their own intuitions. On the other hand, officers are more likely to use technologies that expand their professional reach, like Palantir[20], software which can link different databases to create a vast system of records. This software acts like a real-time surveillance system: alert-based tools notify officers when a new record for a person of interest is created, for example when that person gets pulled over. Previously, officers would have had to search through databases about a person of interest; now they can get real-time alerts about them.


High-status workers like law enforcement officers, judges, journalists and doctors can use these systems and still retain their autonomy while lower-status workers are often forced by these systems into more extractive, less autonomous, routines. This is the central problem in the use of algorithmic systems within the workplace: the curtailment of worker autonomy and its consequences.


Author’s Note:


After interviewing Mazzotti and Kelkar on several theoretical issues around algorithmic systems, it felt important to invite an artist to present their practice and research on similar topics. Elisa Giardina Papa[21] is an Italian artist whose work investigates gender and precarious labor within digital economies. She has been developing a trilogy of works — Technologies of Care (2016), Labor of Sleep (2017) and Cleaning Emotional Data (2020) — exploring the ways in which labor is reframed by digital economies and AI[22]. She kindly agreed to feature fragments of her artworks as part of this enquiry.


In Technologies of Care, she documents the ways in which service and affective labour are being outsourced to internet platforms and to freelance workers, female for the most part, working from the Global South for customers in the Global North. Following on the heels of Technologies of Care, Giardina Papa describes the work Cleaning Emotional Data: “In the winter of 2019, while living in Palermo and researching affective computing systems, I ended up working remotely for several North American ‘human-in-the-loop’ companies. Among the tasks I performed were the taxonomisation of human emotions, the annotation of facial expressions and the recording of my own image to create datasets for AI systems that supposedly interpret and simulate human affects. While doing this work, some of the videos in which I recorded my emotional expressions were rejected. It seems that my facial expressions did not fully match the ‘standardised’ affective categories provided to me. I was never able to learn whether this rejection originated from an algorithm or, for example, from another gig worker who, perhaps due to cultural differences, interpreted my facial expressions in a different way.”[23]


  • 01.

    "Algorithm," Merriam-Webster. https://www.merriam-webster.com/dictionary/algorithm

  • 02.

    Massimo Mazzotti, “Algorithmic Life,” Los Angeles Review of Books, January 22, 2017, https://lareviewofbooks.org/article/algorithmic-life/

  • 03.

    “Mission,” Center for Science, Technology, Medicine & Society, http://cstms.berkeley.edu

  • 04.

    Algorithmic Fairness and Opacity Working Group, https://sites.ischool.berkeley.edu/afog/. See also the updated site: http://afog.berkeley.edu

  • 05.

    A term that literally means "long duration" introduced by the French historian Fernand Braudel. It is a standard term of reference in the work of the Annales School, which Braudel helped to establish. It is used to indicate a perspective on history that extends further into the past than both human memory and the archaeological record so as to incorporate climatology, demography, geology, and oceanology, and chart the effects of events that occur so slowly as to be imperceptible to those who experience them, such as the changing nature of the planet or the steady increase in population in a particular area. "Longue durée," Oxford Reference. https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100114325

  • 06.

    Massimo Mazzotti, “Algorithmic Life,” Los Angeles Review of Books, January 22, 2017. https://lareviewofbooks.org/article/algorithmic-life/

  • 07.

    Massimo Mazzotti, “Algorithmic Life.”

  • 08.

    Paul Dourish, “Algorithms and their others: Algorithmic culture in context,” Big Data & Society, (December 2016). https://journals.sagepub.com/doi/full/10.1177/2053951716665128.

  • 09.

    Jenna Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data & Society, Vol. 3, No. 1, June 2016. https://journals.sagepub.com/doi/10.1177/2053951715622512

  • 10.

    Benjamin Shestakofsky and Shreeharsh Kelkar, “Making platforms work: relationship labor and the management of publics,” Theory and Society, Vol. 49, August 2020, 863-96. https://link.springer.com/article/10.1007/s11186-020-09407-z

  • 11.

    Permanent Record is the title of Snowden’s autobiography. See Edward Snowden, Permanent Record (2019). https://us.macmillan.com/books/9781250237231

  • 12.

    Donald MacKenzie and Graham Spinardi, “Tacit Knowledge, Weapons Design, and the Uninvention of Nuclear Weapons,” American Journal of Sociology, Vol. 101, No. 1 (July 1995), 44-99.

  • 13.

    Shoshana Zuboff, The Age of Surveillance Capitalism (London: Profile Books, 2019). https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/

  • 14.

    Helen Nissenbaum, Privacy In Context: Technology, Policy, and the Integrity of Social Life (California: Stanford University Press, 2009). https://www.sup.org/books/title/?id=8862

  • 15.

    Kashmir Hill, "The Secretive Company That Might End Privacy as We Know It," The New York Times, January 18, 2020. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

  • 16.

    See Clearview AI, https://clearview.ai/

  • 17.

    “‘Training Humans’, conceived by Kate Crawford, AI researcher and professor, and Trevor Paglen, artist and researcher, is the first major photography exhibition devoted to training images: the collections of photos used by scientists to train artificial intelligence (AI) systems in how to ‘see’ and categorize the world.” See “KATE CRAWFORD | TREVOR PAGLEN: TRAINING HUMANS,” Fondazione Prada, http://www.fondazioneprada.org/project/training-humans/?lang=en

  • 18.

    Luke Stark, “Facial recognition is the plutonium of AI,” XRDS: Crossroads, The ACM Magazine for Students, Vol. 25, No. 3 (April 2019), 50–55. https://dl.acm.org/doi/10.1145/3313129

  • 19.

    Sarah Brayne, “Big Data Surveillance: The Case of Policing,” American Sociological Review, Vol. 82, No. 5 (2017), 977-1008. https://journals.sagepub.com/doi/10.1177/0003122417725865.

  • 20.

    See Peter Waldman, Lizette Chapman and Jordan Robertson, “Palantir Knows Everything About You,” Bloomberg, April 19, 2018, https://www.bloomberg.com/features/2018-palantir-peter-thiel/

  • 21.

    Elisa Giardina Papa, http://www.elisagiardinapapa.org/

  • 22.

    For further reading on the artist’s practice, see Tung-Hui Hu and Elisa Giardina Papa, “How AI Manufactures a Smile: Tung-Hui Hu Interviews Artist Elisa Giardina Papa on Digital Labor,” Media-N, Vol. 16, No. 1 (2020), 141–50. https://iopn.library.illinois.edu/journals/median/article/view/360

  • 23.

    Lucia Longhi, "Technologies of Care: An Interview with Elisa Giardina Papa," Berlin Art Link, February 11, 2020. https://www.berlinartlink.com/2020/02/11/technologies-of-care-an-interview-with-elisa-giardina-papa/

Artists and Contributors


Caterina Riva

Caterina Riva is an Italian curator and currently the Director of MACTE – Museo di Arte Contemporanea di Termoli, Italy. Riva holds an MA from Accademia di Brera in Milan and an MFA in Curating from Goldsmiths College, London, and was Curator at LASALLE's ICA Singapore between 2017 and 2019. She has lived and worked in London, Auckland (New Zealand) and Singapore, and is now back in Italy.


Massimo Mazzotti

Massimo Mazzotti is a professor at the University of California, Berkeley, where he teaches history of science and science studies. He is the director and co-founder of the Center for Science, Technology, Medicine, and Society. His research explores the genealogies of techno-mathematical modernity and their political meaning.


Shreeharsh Kelkar

Shreeharsh Kelkar is an interpretive social scientist and a Lecturer in Interdisciplinary Studies at UC Berkeley. His research focuses on how our new computing infrastructures of humans, algorithms, software, and data (or “AI”) are changing labor, work practices, and expertise. His work has been published in journals including New Media and Society and Theory and Society, and has been mentioned in the Financial Times and Harvard Business Review. He is currently working on a manuscript tentatively titled “Reinventing Expertise: Technology Reformers and the Platformization of Higher Education.” The book describes how reformers are transforming institutions through the new practices of “data science” and “design thinking”, and argues that these practices reconfigure our notions of what an expert does. To read more, see http://shreeharshkelkar.net.