
The Dogma in the Code: A Conversation with Libby Heaney and Gavin Starks

Artist and quantum physicist Libby Heaney discusses the political implications of AI’s acquisition of language with Gavin Starks, a leading entrepreneur in the domain of open data.

The classic text of Lady Chatterley's Lover by D.H. Lawrence next to its AI adaptation, Lady Chatterley's Tinderbot, composed by Libby Heaney.


I was admittedly late to the party, and only recently watched the Hollywood sensation that so popularised Singapore, Crazy Rich Asians. In case you haven’t seen it, I’ll briefly summarise the flick (without spoiling it): a well-educated, American-born Chinese girl with a working-class single mother falls in love with a Singaporean-born Chinese scion from a mega-wealthy family, the Youngs, helmed by a formidable matriarch who safeguards her generational heritage. Filial expectations, power and money pitted against integrity, simplicity and true love made for an altogether compelling romantic comedy that resonated with the Asian American and global Asian communities.


But what was most pertinent to me about the film besides its overt, glamorous depiction of my nation-state was its much simpler opening scene set in London, 1995. The family arrive rather unceremoniously in a stately hotel lobby, drenched from a rainstorm. The concierge refuses to put them up, recalling the bigotry of a bygone century in his condescension, “May I suggest exploring Chinatown?” After Mrs. Young makes a furious phone call to her husband, the owner of the hotel rushes downstairs and, to the mortification of the concierge, announces with joyous welcome that the Youngs have purchased the hotel. Nothing but pleased as she is escorted to the best suite, the English-speaking Chinese woman elegantly asserts herself in a post-colonial era. 


Over the past five ascendant decades of Singapore’s independence, the nation’s political and economic power has been concentrated in the hands of the Straits Chinese [1], whom the film depicts. Many of them are fluent in English, send their children to study at Oxbridge, and even inherit the faith of the Methodist and Anglican missionaries who founded still extant schools and parishes in the former entrepôt. Such is the paradigm that the Youngs uphold — one begotten not only of its mercantile roots in South China, but also of a networked lineage traced to another, more intangible, but nonetheless powerful mother — Mother England. This paradigm is best articulated in the peculiar lilt of the Youngs’ speech: their accent is almost, but not quite, British. It is a Britishness inflected and borne through the Straits of Malacca.


* * *


Libby Heaney is a British artist and quantum physicist working with moving image, performance and interactive installation. Heaney lifts material and data from pop culture, politics and literature, and through a mischievous engineering of artificial intelligence (AI) bots, quantum computing and virtual reality, she unsettles conventions and questions orthodoxies. The same year Crazy Rich Asians was released, and as Brexit was well underway, she fashioned a chatbot according to the canonical version of Britishness prescribed by the United Kingdom citizenship test. It was wry, and prescient, and not only for Brits. Titled Britbot, the artwork resonated with suspicions (or to some, hopes) that the UK would soon look to the edges of the former British empire — Singapore — for rebirthed networks and models [2].


I had a conversation with the artist and Gavin Starks, a founder and adviser to over a dozen organisations in data infrastructure, policy, science and communications. As our dialogue swerved from another forbidden love story, Lady Chatterley’s Lover, to the relevance of the Commonwealth to a post-Brexit Britain, what emerged was a more nuanced understanding of how AI is serving dogma, politics and prevailing societal paradigms.


Christina J. Chua: SO-FAR is an interdisciplinary platform, and I’m really pleased to bring you both together. We host dialogues between contemporary artists and individuals working in technology and other domains. There are several reasons for this: a personal frustration with the opacity of “artspeak”; a desire to enlarge the artists’ audience; the conviction that expanding the discourse is healthy for their practice itself; and finally, the potential of their research to lend meaningful difference and divergence to otherwise ordinary modes of thinking in mainstream technology.


Gavin Starks: I think we’ve siloed our thinking heavily over the last few hundred years since the Industrial Revolution. Before that, physics was natural philosophy; astronomy had its origins in astrology. It feels like over the last few decades we have re-engaged in truly multi-disciplinary work, and I wonder why we siloed so much in the first place.


I studied astrophysics, then did a Master’s in Electronic Music, and I am constantly drawing from both disciplines, pulling across both language and ideas.


At the Open Data Institute (which was partly backed by the UK government), we launched a Data as Culture programme. When we announced it, we had quite a difficult call with someone at 10 Downing Street — they couldn’t see why something with the word data in it also had something to do with art. It took a while for them to understand that this is part of our culture now: we’re not sitting as individual “human units” with machines sitting parallel to us. We’re surrounded by data — we all have our own personal data shadows. We need to understand how to relate to this. And that’s before we get too far into implementations, such as the use of AI. How does all this data make us feel? How does it change how we act? 


I think we are still at the beginning, trying to form some kind of baseline language. We have done a lot of work to understand which words to use, but it takes years to unpack them and put them back together. We can draw parallels with the invention of other things. For example, if you go back to some of the early textbooks on electricity, you’d see a picture of Michael Faraday and, beneath it, the title “electrician”. Well, yes — he was an electrician, but not in the contemporary sense — he wasn’t going to wire up your house! It was quite a different concept. Or with Tim Berners-Lee, who was interviewed on TV: the caption they put under him was “web developer” [3]. [Laughs]


We are still at the beginning, trying to form some kind of baseline language.

So, one thing that’s very inspiring about Libby’s work is the continuous cross- and multi-disciplinary background that she’s bringing into the discourse.


CJC: Exactly. We’re talking about language today and how Libby has dealt with that in her practice. 


GS: Now Libby, what led you to work with chatbots?


Libby Heaney: Let me tell you about a slightly older work first, which will help to contextualise Britbot. It’s called Lady Chatterley’s Tinderbot, and it used non-human actors and machines to question romantic codes on Tinder. I wanted to compare dating apps with an older technology — a literary novel — which you don’t really think of as a technology anymore. I created three bots based on characters from D.H. Lawrence’s Lady Chatterley’s Lover: Oliver Mellors, the gamekeeper with whom Lady Chatterley later has an affair; her husband Clifford, whom she cheats on; and Lady Chatterley, the main protagonist of the novel. These bots spoke to people on Tinder for about a year using only dialogue from the novel. With this work, I’d already been exploring how text can help us spot the categories that are created through technology, how we become aware of them, how we can change, open or deconstruct them, and how the work could jolt us out of naturalising these categories.
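The mechanics of such a bot can be imagined as a simple retrieval loop: every reply is a line lifted verbatim from the novel, chosen for its resemblance to the incoming message. The sketch below is a speculative reconstruction in Python; the word-overlap heuristic and the placeholder corpus lines are assumptions for illustration, not Heaney’s actual implementation.

```python
# A speculative sketch of a Tinderbot-style character bot: every reply is a
# line taken verbatim from the novel, chosen by crude word overlap with the
# incoming message. The heuristic and corpus here are illustrative only.
import re

CORPUS = [
    # The first line is the novel's famous opening; the rest are invented
    # placeholders standing in for the full text.
    "Ours is essentially a tragic age, so we refuse to take it tragically.",
    "One must speak of love as one speaks of the weather.",
    "The house felt emptier than the park in winter.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set, for a simple overlap score."""
    return set(re.findall(r"\w+", text.lower()))

def reply(message: str, corpus: list[str]) -> str:
    """Answer with the corpus line sharing the most words with the message."""
    words = tokens(message)
    return max(corpus, key=lambda line: len(words & tokens(line)))

print(reply("Do you take love tragically?", CORPUS))
```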


CJC: Why did you choose that particular text? I actually introduced Issue 2: Artificial Intelligence with Mary Shelley’s Frankenstein, another English novel.


LH: I knew I wanted to use a book, because I was using language. Lady Chatterley’s Lover contains themes of adultery, passion and other romantic codes that Lawrence tried to disrupt, even if he still played into them in a different way. Tinder is doing something similar, so it felt really natural to pair them together. Lawrence wrote the novel in 1928, and it was quite infamous because the British government censored it for so-called “obscene passages” until 1960 [4]. It was written against the backdrop of World War I, so there was also this sense of rising nationalism. I felt that the underlying themes of class, gender and nationalism were still important today, even though it was written 90 years ago. History repeats itself even though the modes of communication change over time.



An audience member reads the chat messages of Lady Chatterley's Tinderbot.


GS: I think it’s a really fascinating project. Social media reveals elements of human nature which are not very pleasant. In structured societies, particularly in religious or state structures, we have tried to codify these behaviours out of the system to push against the darker sides of our human nature. 


Online, we’ve ignored these things at our peril. We have not invested as much in developing — or translating — our social norms, and I think we now see them reflected back in quite a brutal way… and most recently, politically. The underbelly of our negative sentiments is not going away with the so-called maturity of modern society or democracy. And these points of challenge, intersection, and inflection — of which Lady Chatterley’s Lover is one — provoke us to begin mashing these things together. At the same time, we are always trying to outsource our morality to some other kind of code, whether that’s a religious, political, or social code, or now literally in code.


So, your artwork Britbot emerged out of this work? 


LH: Yes, there was an open call by Sky Arts, the television channel. I proposed a work using a natural language processing system which would sit online with a biased version of Britishness related to the UK Government’s British citizenship test [5]. The test is a largely white, male, privileged version of British history and culture. There’s hardly any black history in it, no history of the working classes, no mention at all of any LGBTQ issues. Only 16% of the people mentioned in the study book are women — almost 200 men against around 37 women, six of whom are Henry VIII’s wives. It’s a very canonical version of British history and culture, even though it shouldn’t be.


We are always trying to outsource our morality to some other kind of code, whether that’s a religious, political, or social code, or now literally in code.

CJC: Is the test still in use today?


LH: There’s a different version each year. But it changes most of all when there’s a new government. The Labour Party introduced it in 2003, not so much as a test of British history and culture, but more of a “How To” practical guide: how to pay your taxes, how to send letters, how to go to the hospital… And then when the Coalition government was elected in 2010, David Cameron changed it to a test of British history and culture. But then he went on David Letterman’s talk show in the US and failed questions from the test. There’s a very nice video of him online which I like to show in my talks.


I trained Britbot’s AI model to be biased, and then it learned as people spoke to it online. If a machine learning model is trained solely on these standard versions of history and culture — like the citizenship test — then it will simply replicate these societal biases. However, Britbot’s data set, and its consequent discussions with people, changed based on what people were saying to it. The normative versions of British history and culture were deconstructed. Not just British people, but anyone from all over the world could speak to it, which was really important.
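The loop Heaney describes, a deliberately biased seed corpus that keeps absorbing visitors’ words, can be made concrete with a toy model. The real Britbot used a neural language model; the word-level Markov chain below is only a stand-in sketch of the mechanism, and the seed sentence is an invented example.

```python
# A toy illustration of the feedback loop described above: a bot seeded on a
# biased corpus that folds every visitor's message back into its training
# data, so its "canonical" voice drifts with its audience. The Markov chain
# is a stand-in assumption, not Britbot's actual neural model.
import random
from collections import defaultdict

class EvolvingBot:
    def __init__(self, seed_text: str):
        self.chain = defaultdict(list)   # word -> observed successor words
        self.learn(seed_text)            # deliberately biased starting point

    def learn(self, text: str) -> None:
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def respond(self, message: str, length: int = 12) -> str:
        self.learn(message)              # every exchange becomes training data
        word = random.choice(message.lower().split())
        out = [word]
        for _ in range(length):
            successors = self.chain.get(word)
            if not successors:
                break
            word = random.choice(successors)
            out.append(word)
        return " ".join(out)

bot = EvolvingBot("britain is a proud nation of kings and queens and empire")
print(bot.respond("what do you think of empire and its colonies"))
```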


CJC: Speaking of “around the world”, Singapore is in the Commonwealth and has this odd adoration for Britain. We’re celebrating our so-called Bicentennial year, 200 years since Sir Stamford Raffles “discovered” Singapore. It’s odd — I’ve lived in Singapore, Canada and Fiji at different times in my life, and I always witnessed this unusual fascination with the Crown.


LH: By the way, the developers who helped me make Britbot were from Canada, which is also in the Commonwealth. Our queen is still on their coins…


GS: To build on that, I was at the Commonwealth Heads of Government meeting in Malta in 2015, and I was quite shocked — albeit not surprised — by the lack of discussion on anything digital, web or tech. Fast-forward a couple of years, and they asked me to write a paper about what a digital strategy for the Commonwealth could look like. It’s still taking years to play out — there’s a different culture, or mode of thinking about collaboration at that level which is very “legacy”, and in many ways hasn’t evolved to be “fit” for our digitised society.


The Commonwealth is almost atrophying in its own way, and there’s nobody driving forward its modernisation as an institution. I’m not entirely sure, but there’s a lot of talk about how, if we’re getting out of one network in Europe, then maybe we should be rebooting the Commonwealth as the perfect alternative. But I feel that it is completely unequipped to engage the digital society in which we live today.


LH: Well, Commonwealth citizens who interacted with Britbot would often take the piss — they might say what they thought of Britain, which was quite negative. However, some people did want to emigrate here.


I did use “Britishness” loosely — a Britishness that isn’t just defined from within, but defined by relation and from without as well. The nation-state is kind of a fiction, isn’t it? And there are other ways we could configure the world that may be better now. I feel that other people should define Britishness as much as British people — and maybe even more so. When you’re inside something, you may not really have a clear view of it, as much as someone who’s further away from it.


Libby Heaney, Britbot, 2018, Installation view



As Britbot collected data from its conversations with people, I didn’t take anything out of it or censor it at all. When you speak to Britbot, one of the things it asks for initially is your ethnicity and gender, which are sensitive pieces of information. Even though it also says that it’s happy to speak to anyone, that’s to show how these facets are intricately linked with conceptions of national identity. People of all different types can get offended or have their views confirmed, and that’s dependent on how the chatbot has learned, and how the machine chooses to respond to an individual. 


CJC: Is that failure of AI-human communication one of the intended consequences of Britbot — these hiccups, breakdowns, and the humour or offence that could emerge out of it?


LH: That was certainly part of it. Humour features in my work a lot. It was really important that these interactions were in flux. Britbot functioned on different layers — for example, it would sometimes say, “The Queen’s a bitch”, which obviously someone who thought they were a bit of a comedian said to it at some point. But when it said that phrase, it would shock people. Sometimes people would get really confused with what Britbot was saying, and that encouraged it to get confused as well. Britbot’s position and language were always moving around, revealing how these data cultures or norms never really settle. These unexpected moments generated lots of discussion about the technology — whether AI works or not, and if it can make a mistake.


CJC: How did you programme it to fluctuate in this way? 


LH: I had some rule-based questions to keep the conversation on Britishness. The answers to those would seed a generative model to allow some improvisation, before it would go back to the rule-based questions [6]. So there was guidance and improvisation, with some randomness and uncertainty built into the system. It could have surprising outcomes, which was part of the point of it — to start disrupting people’s own thought patterns and norms around these discussions. AI is usually used by corporations to make accurate predictions, to categorise things in order to sell us stuff. When you move away from categories towards nonsense, it subverts the “proper” function of these systems, which I believe is a political act.
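A minimal sketch of that guidance-plus-improvisation structure might look like the following, assuming Python: scripted questions form the rule-based layer, the visitor’s answer seeds a generative step with a randomised temperature, and control then returns to the script. The `generate` function here is a crude placeholder, not the neural model Heaney actually used.

```python
# A minimal sketch of the guided-improvisation loop described above.
# Scripted questions are the rule-based layer; the visitor's answer seeds a
# generative step with a randomised temperature; control then returns to
# the script. `generate` is a placeholder assumption for the real model.
import random

SCRIPTED_QUESTIONS = [
    "What does Britishness mean to you?",
    "Who gets left out of official histories?",
    "Should anyone be able to define a nation?",
]

def generate(seed_text: str, temperature: float) -> str:
    """Placeholder generative step: recombines the user's own words.
    Higher temperature keeps more (shuffled) material, i.e. more disorder."""
    words = seed_text.split()
    random.shuffle(words)
    keep = max(1, int(len(words) * min(temperature, 1.0)))
    return " ".join(words[:keep])

def converse() -> None:
    for question in SCRIPTED_QUESTIONS:            # rule-based layer
        answer = input(question + "\n> ")
        temperature = random.uniform(0.5, 1.5)     # built-in uncertainty
        print(generate(answer, temperature))       # improvised layer
        # ...and the loop falls back to the next scripted question.

if __name__ == "__main__":
    converse()
```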


GS: Talking about norms, we are now in this position that never existed before — a sort of bashing together of globalisation with nationalism, with people trying to protect norms while using globalised tools and networks to do so. It’s quite the paradox.


When you move away from categories towards nonsense, it subverts the “proper” function of these systems, which I believe is a political act.

LH: My artwork disrupts language, pulls it apart, by allowing multiple people — with both global and nationalist views — to talk about Britishness, or by having Lady Chatterley interject random words to Tinder users. It’s all about rupturing these categories, perhaps by using the power of the collective, to allow other meanings and potentialities to emerge instead. I’m constantly going for this weaving, blurring, mashing, while allowing other things to come out. It’s a bit like Dada poetry, but not quite as nihilistic.


CJC: Do you feel Dada or playfulness can be a mode in which we can find meaning, in our search for that “baseline language” that Gavin spoke about?


LH: Well, Dada did render itself obsolete in the end. The Dada poets Hugo Ball and Tristan Tzara [7], in the wake of rising nationalism and World War I, created sound poems and simultaneous poems which were nonsense. They performed them at raucous events at the Cabaret Voltaire in Zurich. They took these rational, political meanings that were very fixed and normative, and brought them back into this material flow so they became only sounds. That was quite a nihilistic act — it rendered meaning obsolete with nowhere else to go.


In my own art practice, I don’t want to totally undo meaning and then just leave it there. For another experiment, which resulted in my artwork Euro(re)vision, I began working with an English data set of political debates from the House of Commons, and a German data set from the German Bundestag. I created a hybrid data set of English and German, and expected the AI model to “naturally” start combining the two languages. That was my aim. But it turned out to be quite difficult to blend them together. These recurrent neural networks didn’t seem to want to combine the languages, which I found really fascinating. Then, as I was playing with the parameters of the neural networks, especially at the level of characters, I could have them build and construct words which didn’t exist before, combining both English and German. The default settings of these neural networks tend to represent the language as it is already constructed, reflecting reality back to us “as is”. I worked with different parameters, like the temperature, which tunes the statistical correlations between characters, so that the network would unlearn or deconstruct the current political language. Therefore, unlike Dada, I feel this work is not just nihilistic. It proposes something beyond national borders, a new language that transcends countries —


GS: A machine Esperanto? [8]


LH: Exactly. It proposes something else, whatever that may be.
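The “temperature” parameter Heaney mentions is, in standard character-level language models, a rescaling of the next-character probability distribution before sampling. The numerical sketch below assumes nothing about her specific network: near zero the model parrots its corpus “as is”, while higher values admit unlikely character sequences, the territory where hybrid English-German words can appear.

```python
# A numerical sketch of the "temperature" knob, assuming a standard
# character-level model (nothing specific to Heaney's network). Temperature
# rescales next-character scores before sampling: low values concentrate on
# the likeliest character, high values flatten the distribution.
import numpy as np

def sample_char(logits: np.ndarray, temperature: float) -> int:
    """Sample a character index from temperature-scaled logits."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.2, -1.0])    # toy scores for four characters
for t in (0.2, 1.0, 2.0):
    draws = [sample_char(logits, t) for _ in range(1000)]
    print(t, np.bincount(draws, minlength=4) / 1000)
```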


Libby Heaney, Euro(re)vision, 2019, Still



GS: Coming back to Britbot — from your experience, were people usually expecting to be interacting with a “human” AI rather than an “artificial” intelligence? 


LH: Yes, people did expect it to be human-like. Because of their interactions with Alexa, they imagined it was going to be quite like Alexa, with a fixed identity. At the Barbican, some men were really aggressive with me because I hadn’t designed Britbot with a personality. I was explaining to them that Britbot’s “personality” ebbs and flows, and that was the point. They were developers, and they were hung up on the fact that I hadn’t thought about “best practice”!


GS: Well, we know there’s a rulebook for being human-like and of course machines must be cast in our image! [Laughs]


I feel this work is not just nihilistic. It proposes something beyond national borders, a new language that transcends countries.

LH: In some ways, Britbot mirrors who it is talking to. If you had a boring conversation with Britbot, it would not say much back beyond copying you. Still, I inserted some randomness into it, so it could just go off on tangents. It would fragment, or split that mirror a little bit. It’s not too linear.


GS: The reaction of those developers is a great example of siloed thinking emerging extremely quickly. We’ve got expectations. Recent tech innovations like Alexa have set the norm, and then that norm is “protected” by individuals. But should our expectations about this new “intelligence” be human, living, or something that we haven’t yet encountered?


LH: Or should it be playful, like Britbot? Some people liked to play with it. Certain individuals have repeatedly gone back and had really interesting conversations with the system — provoking and testing it. Humans just want to feel in control. Like in Lady Chatterley’s Lover, allowing passion, the unexpected, or intuition to take over your senses — that is kind of scary for people.


GS: Have you ever done a psychological test on Britbot? I can just imagine a psychology panel asking, “Does it have a personality trait?”


LH: No I haven’t yet, but we could! It reminds me of Eliza, a chatbot from the 60s and one of the first rule-based ones. It pretended to be a therapist. A later chatbot called Parry, built by a different researcher, was designed to mimic paranoid schizophrenia and jumped around a lot. I feel like Britbot could be like that.
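Eliza’s rule-based approach is simple enough to sketch in a few lines: pattern-match the user’s message against hand-written rules and reflect fragments back. The rules below are illustrative inventions, not Joseph Weizenbaum’s originals.

```python
# Eliza-style rule-based matching in miniature: hand-written patterns with
# canned reflections, in the manner of Weizenbaum's 1966 program. These
# particular rules are invented for illustration.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def eliza(message: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, message.lower())
        if match:
            return template.format(*match.groups())
    return "Please, go on."   # default therapist's deflection

print(eliza("I feel lost in this conversation"))
```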


CJC: It’s almost refreshing to think of an AI that doesn’t need to be taken seriously or that’s not necessarily offering us solutions. 


LH: A lot of my work with AI is about moving away from using it “properly”, or from using it to make accurate predictions about the world. It’s also about how much chaotic randomness and deconstruction of meaning people can accept in their lives. This is something that we can only experiment with through art. At the same time, outside of art, humans always want to defer back to some rule-based system, because the world is quite unpredictable.


GS: We want to feel safe. An underlying issue is that when people are interacting with AI, it is “threatening”: it’s unknown, and therefore unsafe. Who knows what it might do next? Is it a human entity, a human-shaped intelligence? Is it going to be friendly or hostile? Is it going to be racist, aggressive, or misogynistic, or dull, or funny? Well, it might be none of those things — it might not be human at all — and so why do we have that expectation? Why do we keep referring to AI within this human context of intelligence, rather than a different category? 


LH: In some sense, we as humans build these systems, so there’s all of the decision-making that goes into creating the neural network, and the data sets which are based on our behaviours. 


GS: I feel it’s all about who is building these systems in the first place, and how they are training the machine.


LH: Yes, all the biases. 


GS: Not just biases... I’d go further and ask, “What is the dogma in the code? What is the dogma in the data?” These are cultural issues containing everything from “bias” to “censorship”. When someone is training a neural net on a particular category, are they consciously self-censoring; is the institution that is funding them helping to create a bias? I think the word “bias” is possibly too light.


CJC: Perhaps we should also discuss the consequences?


GS: A seemingly simple example: Facebook sometimes presents images of your memories from a couple of years ago on your timeline. There was a particular instance when a woman had lost her child, and the system brought back all these images from her past and published them all at once [9]. She wrote an entire post about it, and that eventually filtered back to the Facebook development team. They said, “We never thought about that use case.” But why not? For me, that would be a blindingly obvious category of use cases: “Bad things happen in people’s pasts that they might not want reminding of”. One hypothesis is that the developers were probably all young, white men sitting in San Francisco. They were likely not parents and would not have had visibility of that potential life event.


We’ve been thinking in this technology-centred mode: “We can build this, therefore we should.” This paradigm is very deeply baked into surveillance capitalism. We can apply these technologies to particular issues, but we must ask what the unintended consequences are. We see a proliferation of centres and institutions around data ethics and AI, and some very respected people speculating whether AI is the end or the beginning of things.



Underpinning all this is the question of whose cultural norms are being programmed into these systems. What is the culture of the people who are writing the algorithms or choosing the data? Because this is where it goes way beyond “data bias” — which could simply mean that we didn’t collect enough data. It goes into the subconscious of what people were thinking — their cognitive biases — what their social environment was, with whom they were communicating when they were building the systems in the first place, and by whose moral compass they were guided.



  • 1.

    For more on Chinese privilege in Singapore, read "If You Talk Like a Coloniser and Eat Like a Coloniser..." https://newnaratif.com/research/if-you-talk-like-a-coloniser-and-eat-like-a-coloniser/share/xuna/f7e7ab490f2e8981228ca95a758349ea/

  • 2.

There are several high-profile British businessmen who have looked to Singapore's independence as a model for Brexit: https://www.msn.com/en-sg/news/world/sir-martin-sorrell-uk-should-copy-singapore-after-brexit/ar-BBSBNEb. Brexiteer James Dyson shifted his company's headquarters to Singapore, although he claimed his decision was not linked to Brexit: https://www.scmp.com/week-asia/economics/article/2183450/real-reason-dysons-swapping-brexit-britain-singapore. This viewpoint is not without its critics: https://www.theguardian.com/politics/2019/jan/02/why-the-singapore-model-wont-work-for-the-uk-post-brexit.

  • 3.

This is documented under the Reddit thread "ProgrammerHumor": https://www.reddit.com/r/ProgrammerHumor/comments/28wqvi/tim_bernerslee_web_developer/. Of course, Sir Tim Berners-Lee is best known as the inventor of the World Wide Web. As the founding CEO of Berners-Lee's non-profit the Open Data Institute, Gavin Starks worked closely with him.

  • 4.

    The Obscene Publications Act of 1959 reformed UK law on publishing obscene materials. The following year, the first and most high-profile trial under this Act was that of Penguin Books for publishing “Lady Chatterley’s Lover”. Penguin leaned on Section 4 of the Act, citing “public good” in their defence, with prominent authors and literary critics testifying that the book had literary merit. The verdict was not guilty, and it was seen as ushering in a new era of sexual freedom.

  • 5.

    Have a go at a couple of sample questions from the citizenship test here: https://www.bbc.com/news/uk-49485612

  • 6.

Often used in natural language processing, a rule-based system comprises a human-constructed set of rules, an inference or reasoning engine that applies those rules according to logic, a working memory, and a user interface.

  • 7.

    Dada or Dadaism arose in Berlin, Zurich, Paris, New York and other cities in the West around 1915-20, as artists and writers responded to the meaningless violence of World War I, what they saw as the horrific culmination of modern capitalism. They turned instead to irrationality, randomness and the subconscious to stimulate avant-garde forms in the visual arts, poetry and music. The movement had a major influence on Surrealism, Fluxus, and many other anarchic or counter-culture movements.

  • 8.

    Esperanto, meaning “one who hopes”, is an international auxiliary language created by Polish ophthalmologist L.L. Zamenhof in the late 1800s, as an attempt to foster world peace amongst its speakers.

  • 9.

    Read more about Facebook's Year in Review blunders here: https://www.theguardian.com/technology/2014/dec/29/facebook-apologises-over-cruel-year-in-review-clips.

Artists and Contributors


Christina J. Chua

Christina J. Chua is Co-Founder and Chief Editor of SO-FAR, a hybrid publication, gallery and artist incubator. Prior to founding SO-FAR, she worked at galleries and art fairs throughout Asia representing and exhibiting a spectrum of emerging to blue-chip contemporary artists from around the world. As a writer, she contributed to various international and Singapore art publications. Today, Christina is committed to bridge-building in the Singapore art scene, while developing a new generation of art patrons through her fine art consultancy and education group, Metis Art. With her interests lying at the interstices of business, technology and contemporary art, Christina is also Strategic Advisor of innovation consultancy ArtBizTech.

Gavin Starks

On behalf of HM Treasury, Gavin co-chaired the development of the Open Banking Standard, leading banks, trade associations, startups, regulators and consumer rights organisations to lay the foundations for new regulation. He works internationally with public and private sector organisations, with Ministers, C-suite leaders and startup founders. He was founding CEO of the Open Data Institute, working with Sir Nigel Shadbolt and Sir Tim Berners-Lee, sat on the GLA Smart London and the Ministry of Justice Data Science and Evidence boards, and provided evidence to a Parliamentary Select Committee on ‘Big Data’. As a serial entrepreneur he has focussed on creating economic, environmental and social impact. His work has led to his recognition as one of the most influential people in data, awards for innovation and expertise, and frequent international presentations on innovation, the web of data and its impact on society.

Libby Heaney

Libby Heaney is an artist, researcher and quantum physicist working at the intersection of art, technology and science. She pioneers quantum computational and AI methods across moving image, performance and interactive experiences to playfully reexamine systems and question orthodoxies. In 2015, Heaney completed an MA in Art and Science at Central St. Martins, and she has been working with the Information Experience Design course at the Royal College of Art. She has exhibited her artwork widely in the UK and Europe, including a solo exhibition as part of the 2017 European Capital of Culture in Aarhus, and in group shows at Tate Modern, ICA London, the V&A, the Barbican and Somerset House, as well as at festivals such as Sonar+D and Ars Electronica, amongst many others. She lives and works in London.