Researching Alice, Bob and Eve, Part 2

Artist Teow Yue Han shares notes with BasisAI co-founder Liu Feng-Yuan on developing trust, understanding and empathy for AI systems.


Alice, Bob & Eve (2019) conceptualised by Teow Yue Han in collaboration with RAW Moves dancers. Photo: Kaier Tan.

Around 1978, Professors Ron Rivest, Adi Shamir and Len Adleman came up with the code names “Alice” and “Bob” to help ordinary people understand the abstract ways in which public key encryption worked. Fast-forward nearly a decade, and a third persona, “Eve”, was introduced to articulate what happens when information exchange fails, such as a data leak, where an eavesdropping third party hijacks and accesses the transmitted information. Eve became central to the Alice and Bob story, her place “cemented with her inclusion in the dramatis personae of cryptography published in Bruce Schneier’s Applied Cryptography” [1].


As artist-researchers of the Movement Research Living Laboratory, we were not looking to present a literal interpretation of the original Alice, Bob and Eve cryptography story. Clearly, we did not have real humanoid prototypes being tested and developed. Neither did we have three specific characters of Alice, Bob and Eve trying to send and crack secret codes with one another. Rather, we extracted particular lines of logic from this concept: firstly, the personification of machinic processes to make them more relatable and understandable to humans; and secondly, the use of data, feedback and internal competition to refine self-training systems. Thus, we worked with three performers from RAW Moves’ dance company to embody, personify and physicalise aspects of machine learning logic and its processes. Like the anthropomorphic names that the data scientists gave to their codes, we wanted the dancers to aid our understanding of these abstract, technical processes happening behind our screens and networks of data. This is where dance, movement and the machine intersected, blurring the lines between the performer and the programmed, the human and the humanoid.


We crafted three unique “research events”, which took the form of performances by these three dancers, to examine our personal and interpersonal relationships with movement and new technologies with the audience’s participation and input. They were:


  • Calibrating Human-Machine Encounters

  • Scheduled Machine Learning

  • Controlling the Pace of the Smart Nation

A few months after the performance, SO-FAR introduced me to Liu Feng-Yuan, who co-founded a start-up called BasisAI that elucidates and audits machine learning processes. As I recounted the layers of Alice, Bob and Eve, we found a mutual understanding of the complexities of interpreting and explaining AI. He was doing it through code and with real use cases; I, through choreography and speculative performance.


Teow Yue Han: The first research event, called “Calibrating Human-Machine Encounters”, tested whether the dancer-humanoids could perform simple gestures successfully. Each of them had a prepared gesture, but it lacked a certain humanness. For example, if they were to hug someone, the hug would stop abruptly — they would not pull the person in. This session was to demonstrate to each of the dancer-machines what the hug is supposed to look like for humans, and from there, the machines would adopt one addition to their movement to “improve”. The dancers performed in a more observational mode, using their bodies to enact the movements.


After each 30-minute segment of data-gathering and interaction, there would be a chat with the audience, and a dashboard of the session’s data would be presented. I would point out how to analyse the data and get around some of the issues we might have encountered in the session. The strange thing, of course, was that the dancers were humans as well, so they were able to make their own intuitive decisions based on their prior kinaesthetic, embodied knowledge.



Alice, Bob & Eve (2019) conceptualised by Teow Yue Han in collaboration with RAW Moves dancers. Photo: Kaier Tan.


Liu Feng-Yuan: And you were kind of playing with this “uncanny valley” [2] too, right? As AI is getting more realistic, there’s this weird space where people don’t really know how to interact with the machine, because it is like a human, but not quite yet.


TYH: There was a machine-like selection process where the dancer had to decide whom to engage with and how. We devised a rule: they would turn their head towards the first person they locked eyes with, then move towards him or her. During the sharing session, the dancer would speak about who they were enacting, and we could see a changeover where they were just themselves again, preparing for the next event.


This is how we embodied the processes of machine learning. You’re working to explain AI to lay people as well. Can you give me a bit of background on this?


FYL: A lot of the common concerns with AI have to do with robotics, where machines interact with the real physical world without human beings. But a lot of what we do at my company BasisAI is very software-based. From my point of view, a machine-learning algorithm that does predictions or recommendations is pretty harmless — it’s just math, computation and statistics, essentially a very sophisticated calculator. However, a lot of interesting moral, ethical and agency issues emerge when you take these algorithms, take humans out of the loop, and link them up to the real world, where they can cause actual damage or harm.


Yet I also try to look back and ask, “Are these issues new?” Until Boeing recently had problems with its automated flight systems [3], everyone was happy to put passengers’ lives in the hands of autopilot and auto-landing. When something goes wrong, we do legislate for it — we blame Boeing. I’ll give you another example that has less to do with AI. When eBay was starting to list products, I met a lawyer who worked for the company. He said there was a huge debate on whether eBay was morally responsible for selling fake products, but eBay would just claim, “Oh, we’re just a marketplace.” Later, Uber would say the same thing: “We’re not an employer of drivers, we’re just a marketplace.” [4] I think this all evolves, so I’m not very worried about that part of the problem, because I feel like it will go away.


What is more challenging is this: because these algorithms are getting more complex, how do we get people to trust them? To some extent, we also don’t trust human beings. Why should we trust a surgeon? Even if he failed, could he really give you a good explanation why? Not really. So do we hold AI to a higher standard than human beings? Yes, probably — but also because we don’t trust it, whereas a surgeon has his reputation, his accolades, his education, and so on. We are quite gullible as human beings when it comes to what and whom we trust.


So do we hold AI to a higher standard than human beings? Yes, probably — but also because we don’t trust it.

Christina J. Chua: And trust requires explanations and understanding.


FYL: Yes. Let me show you a thing called TensorFlow Playground. It’s a visualisation tool that allows you to look at what goes on behind a neural network. I can add these nodes, redesign it, and feed in a particular image. Now each node makes a guess as to how it should respond and predict. Then you can see the neural network trying to sort dots into the right regions, according to an orange or blue background. With each layer of the neural network, the nodes get better as they iterate. They’re never quite 100% — it’s all quite probabilistic. You can think of each node of the neural network as a mini brain, and the network collects “votes” from these nodes. They’re constantly adjusting, deciding upon a weightage, as the network gradually becomes cleverer. What I like is that TensorFlow Playground exposes and uncovers the process through which this “brain” sort of “votes”, and comes to a decision.


In a proper neural network, you may have millions of these mini brains. So how do we even summarise this into something ordinary humans can understand, as the problems become more multi-dimensional and we have up to a million data points? Anyway, this is one illustrative way of exposing the process that could help us get a better handle on it.
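To make the “voting” intuition concrete, here is a minimal NumPy sketch of a tiny two-layer network of the kind TensorFlow Playground visualises, sorting 2D points into “orange” and “blue” regions. The toy dataset, layer sizes and learning rate are our own invented illustration, not the Playground’s code or anything from BasisAI.

```python
# A toy two-layer neural network in the spirit of TensorFlow Playground:
# every hidden node casts a weighted "vote", and training adjusts the
# weights until the dots are sorted into the right regions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points inside a circle are "blue" (1), outside are "orange" (0).
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)  # 8 hidden "mini brains"
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)  # output node tallies votes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(3000):              # iterate: the network "gets cleverer"
    h = np.tanh(X @ W1 + b1)          # each hidden node casts its vote
    p = sigmoid(h @ W2 + b2)          # tally the votes into a probability
    grad_out = (p - y) / len(X)       # error signal at the output
    grad_hid = (grad_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid; b1 -= lr * grad_hid.sum(axis=0)

# Never quite 100%: the predictions stay probabilistic.
print("training accuracy:", ((p > 0.5) == y.astype(bool)).mean())
```

Each pass nudges every weight against its share of the error, which is the constant adjusting and iterating Feng-Yuan describes.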



Alice, Bob & Eve (2019) conceptualised by Teow Yue Han in collaboration with RAW Moves dancers. Photo: Kaier Tan.


TYH: But is it even possible to approach or expose this process if the technology is proprietary?


FYL: Good question. Some of these companies use this excuse to explain away their liability. They say, “It’s proprietary tech, so I can’t let a third party audit it.” They argue that if they reveal the inner workings of their algorithms, anyone would be able to spoof them; you could apply noise and try to fool the model. So that commercial reason is a genuine concern. Can you develop a technology to audit the algorithm, where I don’t open up the black box [5], but I still give you some understanding and explanation of it?


There is this idea of inspecting the neural network from the outside, perturbing or troubling it, which could present another solution. In this case, I don’t open the black box up, but I can poke it, it responds in a slightly different direction, and I learn about it experientially. That’s how babies learn: they test their limits. Eventually I am able to feel all the parts of the elephant, and through that process, I can also gain some trust towards it.
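As a rough sketch of this poke-and-observe idea (in the spirit of perturbation-based methods such as LIME, though Feng-Yuan names no specific technique), the snippet below nudges one input feature at a time and watches how an opaque scoring function responds. The black_box function is a stand-in we invented; a real audit would only have call access to a proprietary model.

```python
# Probing a black box from the outside: perturb each input slightly and
# measure how the output moves, without ever opening the model up.
import numpy as np

def black_box(x):
    # Pretend we cannot see inside this; we can only call it.
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 0.5 * x[1] + 0.1 * x[2])))

def poke(model, x, eps=1e-3):
    """Estimate each feature's local influence by finite differences."""
    base = model(x)
    influence = np.zeros(len(x))
    for i in range(len(x)):
        x_poked = x.copy()
        x_poked[i] += eps                 # poke one feature at a time
        influence[i] = (model(x_poked) - base) / eps
    return influence

x = np.array([0.3, 1.2, -0.7])
print(poke(black_box, x))  # the largest entries mark the features the model leans on
```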


Yet any explanation is, by definition, simplified. You know what they say: “All models are wrong, but some are useful.” The question is how to abstract and reduce the complexity and multi-dimensionality in some way that still conveys the explanation that you’d want. But that’s a very context-specific demand: a technical audience, a regulator and a customer would each require different explanations. You first need to simplify, then you need to contextualise the explanation.


CJC: So developing these explanations for artificial intelligence is what you’ve been working on with BasisAI? 


A lot of people are worried about AI because there’s no agency, and they feel helpless.

FYL: Yes. One of the things we are trying to do is provide a view of what an AI audit would look like. Say the bank denies you a loan, and you ask why; they may say, “It’s because of the algorithm.” Well, that’s not sufficient. We could try to show the layperson that technical, TensorFlow-style process, but they probably wouldn’t understand. So what’s a further explanation? It could be because you have 10 credit cards, while most people have an average of 3 to 5. Or maybe it’s because you live somewhere rare and far away, like Pulau Ubin, giving you an unusual postal code. Which of these is the best explanation? Even if it is statistically less important, you could shift the postal code reason up as more key to the customer, because it resonates with them more emotionally or psychologically.
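One way to picture this re-ranking, sketched below with numbers we made up for illustration (nothing here is BasisAI’s product): blend each reason’s statistical weight with a score for how much it resonates with the customer before choosing what to show first.

```python
# Re-ranking loan-denial reasons: statistical importance alone need not
# decide the order; resonance with the customer counts too.
# All features, weights and resonance scores are invented examples.
attribution = {                       # contribution to the denial decision
    "ten_credit_cards": 0.42,         # most applicants hold only 3 to 5
    "debt_to_income_ratio": 0.31,
    "unusual_postal_code": 0.18,      # e.g. a remote address like Pulau Ubin
}
resonance = {                         # how strongly each reason lands
    "ten_credit_cards": 0.5,
    "debt_to_income_ratio": 0.3,
    "unusual_postal_code": 0.9,       # concrete and personal
}

def presentation_order(alpha=0.5):
    """Blend statistical weight with customer resonance (alpha sets the mix)."""
    score = lambda f: alpha * attribution[f] + (1 - alpha) * resonance[f]
    return sorted(attribution, key=score, reverse=True)

print(presentation_order())
# ['unusual_postal_code', 'ten_credit_cards', 'debt_to_income_ratio']
```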


You see, if I give you an explanation that you don’t understand, is it because my explanation was bad, or because you were dumb (of course, I’m exaggerating)? Or maybe it’s because you were impatient. Each of these is context-specific. There is a lot of research in the field of AI explainability into what makes a good explanation, and yet there is so much more to do, especially in user experience (UX). I think this whole space is not very clearly understood.


CJC: It’s this soft customer support that is often lacking in the chatbots we interact with. 


FYL: At the same time, you also have to do more than explain, because telling the customer why they got rejected is acceptable, but it’s not helpful. We also want to give people recourse, or the ability to know what their options are. We say to the customer, “Ok, if your debt were reduced to $1,000 and you cancelled 3 of your credit cards, then you would have a better chance of being approved for a loan.” And that returns the agency to the human being. A lot of people are worried about AI because there’s no agency, and they feel helpless.
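In code, recourse can be as simple as searching for the actionable changes that would flip the decision. The toy rule and thresholds below are ours, invented to mirror Feng-Yuan’s example; real systems treat this as a constrained optimisation over the model.

```python
# Recourse as a search for actionable changes that flip a (toy) decision.
# The approval rule and thresholds are invented for illustration.
MAX_DEBT = 1000
MAX_CARDS = 7

def approve(debt: int, credit_cards: int) -> bool:
    return debt <= MAX_DEBT and credit_cards <= MAX_CARDS

def recourse(debt: int, credit_cards: int) -> list[str]:
    """List the changes that would move this applicant to approval."""
    options = []
    if debt > MAX_DEBT:
        options.append(f"reduce your debt to ${MAX_DEBT}")
    if credit_cards > MAX_CARDS:
        options.append(f"cancel {credit_cards - MAX_CARDS} of your credit cards")
    return options

print(approve(debt=4000, credit_cards=10))    # False: loan denied
print(recourse(debt=4000, credit_cards=10))
# ['reduce your debt to $1000', 'cancel 3 of your credit cards']
```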



Alice, Bob & Eve (2019) conceptualised by Teow Yue Han in collaboration with RAW Moves dancers. Photo: Kaier Tan.


TYH: Speaking about agency, and what human beings might “prefer” from AI, our second research event, called “Scheduled Machine Learning”, could be relevant. I was inspired by TikTok, where everyone dances the same dance, so there’s a huge dataset. Using all these videos, you could theoretically create the “ultimate dance machine” that can optimally entertain you. Of course, with the Internet, the purity of dance forms has been completely diluted. We wanted to see which dance variation was more favourable to the audience, so we devised this challenge.


The dancers had to perform three different genres: contemporary dance, viral dances and ballet, while the audience learned from them. Then the audience performed back to the machine-dancer on a mat that could record their input. The machine looked at the humans, observing what they did with their head, hands, and so on. This process was repeated with 8 to 10 audience members, before the humanoid-dancer created a “learned” dance, one that was so-called “perfect”, or preferred according to the audience’s input. Of course, it was all very speculative, but it yielded interesting results.


For me, it’s not so much about how one day we will have future AI machines that look so much like humans. It’s more about how we can develop empathy to understand the machines, even if the tech is separated from our physical or material reality. That’s where the embodiment helps. 


FYL: Absolutely. The first part is explainability and recourse, and the second is fairness and algorithmic bias — and they’re two different things. How do I make these facets a lot more transparent, and provide people with the ability to make decisions about the AI? You could almost argue that this is relevant to any form of automated decision-making, not just AI. 


There was a case in the States where the research group ProPublica was looking at criminal risk scores that predicted how likely prisoners were to reoffend [6]. Justice officials were using these scores to decide whether or not a person should be granted parole or early release. It turned out that the rate at which the algorithm incorrectly labelled black defendants high-risk was twice the rate for whites. These individuals actually hadn’t reoffended, but the algorithm made mistakes. It was true the other way around too — white criminals who did reoffend were given more leniency, labelled lower-risk, and then given parole. This audit was not that hard to do, but it should be done more systematically. In our platform, we report the statistical performance of the algorithm, by indicating the errors…


CJC: These are the marks of bias? 


FYL: No, not really. These are just the errors. Bias happens when the numbers look different for blacks and whites, when there are big inconsistencies. If there is a large enough difference, it means that the algorithm is biased. In our perfect world, every algorithm that you run should come with alerts flagging whether it is high-risk or low-risk, so that the user can block a risky algorithm from being deployed.
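A minimal sketch of such an audit, with toy numbers we invented (ProPublica’s actual COMPAS analysis is linked in the notes): compute the same error rate for each group and compare.

```python
# A toy group-fairness audit: compare false positive rates across groups.
# The arrays are invented; a real audit would use actual predictions and
# outcomes, as ProPublica did with the COMPAS risk scores.
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    """Among people who did NOT reoffend, how many were flagged high-risk?"""
    did_not_reoffend = ~reoffended
    return (flagged_high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

# One entry per person: was each flagged high-risk, and did each reoffend?
flagged_a = np.array([1, 1, 0, 1, 0, 1], dtype=bool)   # group A
actual_a  = np.array([0, 1, 0, 0, 0, 1], dtype=bool)
flagged_b = np.array([0, 1, 0, 0, 0, 1], dtype=bool)   # group B
actual_b  = np.array([0, 1, 0, 0, 0, 1], dtype=bool)

fpr_a = false_positive_rate(flagged_a, actual_a)   # 0.50
fpr_b = false_positive_rate(flagged_b, actual_b)   # 0.00
print(f"false positive rate, group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
# A large gap between the two rates is the inconsistency that flags bias.
```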


There are also techniques to pre-process or “de-bias” the data. I can present you with two fair alternatives where the risk of bias is less, and then I allow you to pick one of the two. For example, should you make $70,000 less profit to make the AI more fair? Of course, the AI itself can’t make this ethical judgment for you. But our platform can expose these trade-offs and allow human beings to make the decisions. 
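The trade-off Feng-Yuan describes might be surfaced as plainly as this (all figures invented for illustration): two candidate models, one more profitable, one fairer, with the choice left to a person.

```python
# Exposing a fairness/profit trade-off between two candidate models so a
# human, not the algorithm, makes the ethical call. Figures are invented.
candidates = [
    {"name": "model_a", "profit": 500_000, "error_rate_gap": 0.25},
    {"name": "model_b", "profit": 430_000, "error_rate_gap": 0.04},  # $70,000 less, far fairer
]

for model in candidates:
    print(f"{model['name']}: profit ${model['profit']:,}, "
          f"gap between group error rates {model['error_rate_gap']:.2f}")
# The platform surfaces both options; the human picks one of the two.
```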


We can try a thought experiment and compare a very complicated AI to a human being. At least with the AI, we can trace how it came to a decision. You and I have biases accumulated over 20, 30 years of experience. Can we audit that? All our subconscious biases can never fully be explained. So, even though AI seems untrustworthy or unknown right now, I would argue that as it becomes more developed, it will inherently become more fair and explainable than a human being.


It’s not so much about how one day we will have future AI machines that look so much like humans. It’s more about how we can develop empathy to understand the machines, even if the tech is separated from our physical or material reality. That’s where the embodiment helps. 

CJC: That’s a very interesting observation. I suppose you were well acquainted with forecasting, especially in your last role in the public sector, as Director of the Data Science and AI Division of GovTech. Yue Han, I understand that this entire Movement Laboratory was also set against the backdrop of Singapore’s Smart Nation initiatives. Would you like to bring us into the last segment of Alice, Bob and Eve?


TYH: Yes. We called the third research event “Controlling the Pace of the Smart Nation”. Each of the dancers was given a Fitbit, and they performed a series of handphone gestures over the course of a 10-minute lap. The beat of the metronome sounded like a drill, and it would go from 50 to 100. At the end of the lap, they were completely exhausted, and a Siri-like narrator announced, “You’ve burned so many calories.” Our experiment allowed the audience to control the pace of the metronome, so we added a sense of empathy into the mix.



Alice, Bob & Eve (2019) conceptualised by Teow Yue Han in collaboration with RAW Moves dancers. Photo: Kaier Tan.


FYL: What do you mean by empathy, in this case?


CJC: Through their control of the metronome on an iPad, the participants exercised empathy to keep the dancers from exhausting themselves.


TYH: They had an option to flip it all the way up to 200 — which is humanly impossible to keep up with — but we found that the audience were quite kind and maintained a slow speed. We were interested in exploring what happens when you distribute control to individuals in society, to develop a certain kinaesthetic for the community. 


. . .


To conclude Part 1 and Part 2 of this brief series, we would like to ask a few related questions that were suggested within the Research Guide:


  • How can we calibrate human-machine encounters by isolating, deconstructing and then refining gestures and interactions through a machinic learning process, in service of establishing clearer communication between the two subjects?

  • How can I ever know what the machine system has chosen to select from my data and input, inside the black box?

  • What could be the cultural biases within the human programming inputs and data in the technologies we use everyday? How do we account for these?

  • What could be behind this over-reliance on, and faith in, data to determine aspects of our personal and social lives? Are we able to step back and critically reflect on why we do what we do with all our daily, repetitive uses of new technologies?

The introduction to this dialogue was adapted from the “Research Guide for Laboratory Participants” that accompanied RAW Moves and Teow Yue Han’s performance Alice, Bob & Eve. The research guide was written and edited by Nah Dominic, with key concepts from Teow Yue Han. The final work was presented on 22-23 February 2019 at the School of the Arts (SOTA) Gallery. 


  • 1.

    Quinn DuPont and Alana Cattapan, “Alice & Bob: A History of The World’s Most Famous Cryptographic Couple”, Cryptocouple, accessed 11 January 2020, http://cryptocouple.com.

  • 2.

    "The Uncanny Valley" was an essay by pioneer roboticist Masahiro Mori published in 1970 on how he anticipated humans to react and relate to humanoid robots. He traced a path from empathy and affinity to dis-ease, and eventually revulsion. He called this latter region of negative feeling, the “uncanny valley”.

  • 3.

    Read more about issues with Boeing's autopilot and its Maneuvering Characteristics Augmentation System (MCAS), following two deadly crashes: https://www.theverge.com/2019/3/13/18263751/boeing-737-max-8-pilot-complaint-autopilot-mcas

  • 4.

    Feng-Yuan is paraphrasing Tony West, Uber's Chief Legal Officer, who claimed, "Drivers’ work is outside the usual course of Uber’s business, which is serving as a technology platform for several different types of digital marketplaces." Read more: https://time.com/5675637/uber-business-future/

  • 5.

    The "black box" in computing and engineering is a term that refers to how the internal workings of a system are opaque and not visible, even though you can see what the inputs and outputs are.

  • 6.

    Read more about ProPublica's study of bias in criminal risk scores here: https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say

Artists and Contributors

Christina J. Chua portrait picture

Christina J. Chua

Christina J. Chua is Co-Founder and Chief Editor of SO-FAR, a hybrid publication, gallery and artist incubator. Prior to founding SO-FAR, she worked at galleries and art fairs throughout Asia representing and exhibiting a spectrum of emerging to blue-chip contemporary artists from around the world. As a writer, she contributed to various international and Singapore art publications. Today, Christina is committed to bridge-building in the Singapore art scene, while developing a new generation of art patrons through her fine art consultancy and education group, Metis Art. With her interests lying at the interstices of business, technology and contemporary art, Christina is also Strategic Advisor of innovation consultancy ArtBizTech.

Teow Yue Han portrait picture

Teow Yue Han

Born in 1987, Singapore, Teow Yue Han received a BFA in Digital Filmmaking at the School of Art, Design and Media, Nanyang Technological University (2012). He later pursued an MA in Fine Art Media at Slade School of Fine Art, University College London (2016), where he was a recipient of the 2016 Julian Sullivan Award. Teow Yue Han’s works explore the interface between video, performance art and technology. He is interested in the way new technologies such as smart cities or artificial intelligence are shaping society, culture and the urban landscape. He creates situations where gestures and social interactions that are informed by these technologies can be interrogated, rehearsed and renewed. Teow is a core member of INTER—MISSION, an art collective focusing on art and technology. He lives and works in Singapore.

Liu Feng-Yuan

Liu Feng-Yuan

Liu Feng-Yuan is CEO and co-founder of BasisAI, an early-stage start-up enabling digital enterprises to build scalable and accountable AI. Prior to this he built a start-up within GovTech to harness data and AI for the public good, as part of Singapore’s Smart Nation initiative.

Nah Dominic

Nah Dominic

Nah Dominic is a Singaporean educator, researcher, dramaturg and writer with an interest in literature education at both tertiary and secondary levels. He is currently Associate Faculty at Singapore University of Social Sciences teaching undergraduate courses in English Literature and Research Assistant at National Institute of Education on a project concerning cosmopolitan approaches to literature education in secondary schools. As a freelance dramaturg, he also works on both theatre and dance projects.