
Towards a New Relational Consciousness

Media artist Brandon Tay, NTU CCA curator Magdalena Magiera and Google curator Kenric McDowell discuss destabilising the human-centred design paradigm.


Brandon Tay, Facade, 2018, Still from an interactive app.

I’ve known one of Singapore’s leading creatives, Brandon Tay, for many years. He’s always been intrigued by consciousness, its limits and bounds — its outer and inner reaches. In my early twenties, I remember watching Richard Linklater’s Waking Life in his living room, discussing psychedelics, liminality, and spirituality. As we grew older, I followed his multidisciplinary practice from a distance, through Instagram or Vimeo. Sometimes I’d visit the warehouse parties where he would supply other-worldly visuals; other times I would marvel at his wrapping of nearly every museum building in Singapore with hallucinogenic projections that assaulted their staid, colonial architecture with dissolving layers of neon hybridity. It was as if Brandon was working some synthetic magic inwards, from the city’s underground, and outwards, upon its urban skin…


After noticing his collaboration with artist-choreographer Choy Ka Fai, Unbearable Darkness, where Choy resurrected the spirit of Butoh master Tatsumi Hijikata through Brandon’s delirious, digital visualisations, I felt it was the right time for us to reconnect. I reached out to Brandon, pitching him my ideas about artificial intelligence (AI) and asking him if he’d like to co-host a technologist on the platform along with curator Magdalena Magiera, who at the time was formulating her ideas for an exhibition on post-humanism at the NTU Centre for Contemporary Art [1]. He immediately responded, showing me a Rune Soup podcast that featured a certain Kenric McDowell. “If there was anyone I’d like to speak to, it’d be Kenric,” he said with certainty.


This was the dialogue that ensued. The ideas that the three have contributed form the core of this Issue’s failed attempt at an anthropocentric understanding of AI. Yet this failure is also a beginning — a provocation to imagine a consciousness (either artificial or our own) beyond anything that we have previously known in our existing paradigm. I have also chosen to conclude the conversation with a spiralling quote from Waking Life by quantum chemist Eamonn Healy on technological evolution. Read on till the end to awaken another paradigm…


Christina J. Chua: Kenric, do you mind beginning by introducing your programme, Artists + Machine Intelligence (AMI) at Google? 


Kenric McDowell: We’re a research-based programme housed in Google AI, but we’re now officially collaborating with Google Arts & Culture to give grants and technical support to artists working with machine learning. This is a significant year for us, because we’re really going global in our approach and have just closed our first open call.


AMI also sits within Google AI’s Strategy group, within Research. Some of the questions we’re looking at are how artificial intelligence can have a non-anthropocentric definition, and how that could affect the design strategies we might employ.


We’re working to deconstruct some of the assumptions that have been snuck in under the human-centred design banner, through our engagements with philosophers and artists.

Magdalena Magiera: Can you expand on these design strategies?


KMD: One basic example is that we tend to think of designing any kind of software as designing for a single individual — for one person. But AI systems that learn about user behaviour will inevitably have to accommodate the relationships that humans have with non-humans. Say I have a service dog: AI software should understand that when I travel around with my dog, that might change the type of spaces I could go to. A map app that uses AI needs to know that about me in order to do a better job. In that context, what does that imply about the definition of myself, if I enter into partnerships with non-humans? Why are we defining all of our user behaviour as the behaviour of a single individual, when in fact I’m constantly in relationship with other people, with larger structures, or other species? We’re working to deconstruct some of the assumptions that have been snuck in under the human-centred design banner, through our engagements with philosophers and artists. This is an important theme that has come up over and over, and we’re looking at ways to start designing more holistically, not just in terms of redefining user experience but also in terms of actual applications.
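As a concrete illustration of what such a relational user model might look like in code — a minimal sketch only, with hypothetical class names and fields rather than any actual Google API — consider:

```python
# A hypothetical sketch of a relational user model: the "user" is not a
# lone individual but a node in a web of relationships (human and
# non-human) that changes what a system such as a map app should suggest.
from dataclasses import dataclass, field

@dataclass
class Companion:
    kind: str     # e.g. "service_dog", "child", "carpool_partner"
    species: str  # non-humans are first-class members of the model

@dataclass
class RelationalUser:
    name: str
    companions: list[Companion] = field(default_factory=list)

    def can_visit(self, venue_allows_animals: bool) -> bool:
        """A venue only works if it admits everyone travelling together."""
        with_animal = any(c.species != "human" for c in self.companions)
        return venue_allows_animals or not with_animal

# The app now reasons about the partnership, not the isolated individual:
user = RelationalUser("K", [Companion("service_dog", "dog")])
print(user.can_visit(venue_allows_animals=False))  # False — the dog matters
```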


Brandon Tay: So when you work with artists, are you looking for insights and new strategies? 


KMD: That’s a good question. Artists and programmers are separate communities that actually have a lot in common. The way I’m approaching it is to build off the discourse with artists and philosophers that has inspired new ways of thinking about design; those new ways of thinking then implicate different domains of expertise that need to be brought in as well. My next goal is to have these conversations between artists, philosophers and technologists, but also to involve botanists, anthropologists and others.


Up until now, although there’s been a lot of inspiring ideation, we haven’t been able to do real research using traditional methods to validate a lot of the conversations that we’ve had. Now we have the resources to do that, so we can fulfil our ideal of bridging research and art-making — using ideas that come from creative and philosophical conversations to inform how design is done, and introducing those new ideas into Google or the larger human-centred design community.



Brandon Tay, Facade, 2018, Still from an interactive app.


MM: In your field of research, do you feel that AI is a tool for artists — almost like a pencil — to create art, or is it more like a way of living, or an ideology?


KMD: It’s very easy to play the hard computer science card and say, “Oh, it’s just pattern-recognising algorithms,” to defuse some of the hyperbolic interpretations that tend to happen right now within the cultural sphere. We get a lot of narrativising or mythologising of AI, and it’s quite common for engineers or people who work closely with the material to want to push against that. But it’s neither one nor the other of those two definitions: it’s not strictly mere math problems, and it’s not strictly a cultural phenomenon.


What I find more interesting is how AI has become a way for us to imagine and model ourselves. More often than not, what surfaces in the definition of AI is a definition of humanity — of what we think superhuman or ‘beyond’ human intelligence would be. This is why mythology becomes really important, because we tend to revert to the definitions of superintelligent beings that we’ve been given through culture. In a way, AI as a psychosocial phenomenon becomes a reflection of our assumptions about ourselves, and of what we can project onto it. Having said that, there is a limit to what it can do, and a lot of the technical breakthroughs or ways of approaching computational problems within machine learning really do borrow from the structures of the human brain as a living organism. So AI is an alien form of intelligence, but at the same time it sticks to all of our other definitions of intelligence. That is the most interesting thing about it, because those assumptions and definitions get played out at scale when they’re not investigated closely by theorists or philosophers.


AI seems to present not just a mirror, but a window that seems to be a mirror.

BT: Do you think that in order to define artificial consciousness, there has to be a definition of consciousness itself first? It’s almost like the more you try to define another consciousness, the more you reflect your own. There is this existential tension where I ask myself, “What am I thinking, while I’m thinking, and what is thinking?” AI seems to present not just a mirror, but a window that seems to be a mirror.


KMD: The answer to your question is yes. I’ve also observed how when we try to make models of consciousness, or build tools that can model ourselves or other objects, in reality we can never fully extract consciousness or identity into one individual entity. It’s always relational, and any kind of system that recognises patterns and has a broad enough sensory input will start to understand those patterns of relation in sociality. Those relational patterns may indeed be a part of consciousness itself. Trying to define consciousness in an isolated system or for a machine always puts me back into relation to my own consciousness.


BT: So you’re saying that in the field of computer science, those definitions are being redefined to place AI in a relational paradigm, rather than an isolationist or even a Neoplatonic paradigm which says, “I am just one person unto myself…” Perhaps if Google is making that paradigm shift, then it’s only natural that it would trickle down to the way we all use our tools.


KMD: Yes, in some ways…


I’d also like to hear your perspectives — is this individualistic, alienated consciousness mostly a Western trope? Is that something that is predominant in our design theory because of the history of our philosophy?


BT: On one level, in Singapore we have a colonial heritage that seeps into our culture. On another, I’m aware that when I use technology, it’s like an ‘input-output’ — a very linear, singular path to creation or art-making. I guess that makes it difficult to switch to a more relational mindset.


CJC: So you’re saying that you can’t get away from the same human or user-centred design thinking, even in your art-making or process?


BT: Yes. I find myself creating something that’s almost like a product. What I find exciting about Kenric’s observations is the possibility that a work can go beyond a singular object, product or performance. It becomes something more networked. So it could be an art experience, art object or entity that exists in relation to other art entities. It’s a fundamental shift to think of technological art as different from a painting — the former is intrinsically networked, rather than a singular object.


MM: What I see in your work is this idea of merging Western and Eastern concepts, such as with your collaboration with Choy Ka Fai, Unbearable Darkness —


BT: In terms of an ‘East-meets-West’ in the work I do, I guess it’s about cohesion, rather than creating dialogues in this stereotypical sense. I feel myself lifting parts from here and there —


MM: Then in that sense, there is no dialogue. It’s layering. It’s creating the über-human —


BT: Yes. Like a Frankenstein. 


MM: Kind of. You’re taking parts from different systems, ideologies, ways of dancing, moving, and giving the person who’s using the machine who and what and how they would like to be. 


BT: But the process is still about the agency of a single author. It’s more interesting to think about how authorship with AI could be bi-directional rather than singular.



Brandon Tay, Facade, 2018, Still from an interactive app.


KMD: However, the results might be less interesting than we expect. One thing that comes to mind is how these systems where people interact with the AI end up giving you a kind of statistical flattening of a data set. When the neural net is trained on a type of data, watching it create output can be kind of disappointing, because you end up getting something rather average — whether it’s the average of literature or of painting, for example. One of the challenges I see for artists who hope to extract something interesting or useful from this tool is how they will fight against that at various steps of the process. So this idea of bi-directionality can be something that frees you, but it can also be something that constrains you, in that the system can be almost ‘too good’ at understanding the whole corpus, rather than having an idiosyncratic focus on one area or another.
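This “statistical flattening” has a simple mechanical analogue in how output is sampled from a trained model. A minimal sketch, with an invented toy vocabulary and distribution standing in for a real network: at low sampling temperature, nearly everything the model produces is its most probable — most average — token.

```python
# Toy illustration of statistical flattening: low-temperature sampling
# collapses a model's output toward the most probable ("average")
# continuation. The vocabulary and logits here are invented for the demo.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "a", "of", "moon", "axolotl"]
logits = np.array([3.0, 2.5, 2.2, 0.5, -1.0])  # common words dominate

def sample(temperature: float) -> str:
    """Softmax sampling; as temperature -> 0 this approaches pure argmax."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(tokens, p=probs)

for t in (0.1, 1.0, 2.0):
    print(f"temperature={t}:", [sample(t) for _ in range(10)])
# At 0.1 almost every draw is "the" — the corpus average; only at higher
# temperatures do rarer, more idiosyncratic tokens like "axolotl" appear.
```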


BT: Perhaps that’s true in most cases, but what about something like Deep Dream? This has produced really idiosyncratic or novel ways of seeing. I think there will definitely always be avenues like that at the edges of machine learning. I hope to see AI interacting with our senses, be it sight or sound, or smell… That will interface with our consciousness in a different way. 
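The mechanism behind the idiosyncratic seeing Brandon describes is surprisingly compact: gradient ascent on the input image, amplifying whatever an intermediate layer of a trained network already responds to. A minimal sketch, assuming a torchvision GoogLeNet and an arbitrary layer choice — the original Google implementation differed in many details (octaves, jitter, natural photographs as starting points):

```python
# A minimal sketch of the Deep Dream mechanism: gradient ascent on the
# input image to exaggerate whatever a chosen layer "sees". Layer choice,
# step count and the noise starting point are illustrative assumptions.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output

# Which intermediate layer you amplify shapes the "hallucination".
model.inception4c.register_forward_hook(hook)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(50):
    optimizer.zero_grad()
    model(image)                        # hook captures the layer output
    loss = -activations["feat"].norm()  # maximise activation = gradient ascent
    loss.backward()
    optimizer.step()

# `image` now exaggerates the patterns that layer responds to — the eyes
# and dog faces that gave Deep Dream its idiosyncratic look.
```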


KMD: Great that you mention Deep Dream, because it was something that had a very specific, idiosyncratic character which, at the time it emerged, was what made it interesting. People were looking at its technical as well as its aesthetic properties. Then, when everyone started to understand what it was, there was a period where it was less interesting. As time has gone by, there has been a different curve to the integration of the aesthetic properties of these tools, based on how they spread online and the ways the images are consumed. Now, Deep Dream is also starting to carry more meaning as a historical moment in the development of machine learning. It has a sort of historical embeddedness, which was missing from its very first conception [2].


CJC: Magda, in your work as a curator, how are you approaching technologies like Deep Dream or machine learning?


MM: I actually question myself quite a lot in relation to it. Perhaps I see myself with a more ‘old school’ approach — trying to humanise it, make it acceptable. But I would like to break outside of that. At the same time, I don’t see it only as a tool for generating content, archives or patterns. It really depends on which project I am working on, and these different facets then become more important or less relevant. On a deeper level, I know that I don’t know. And that makes it kind of scary, yet very fascinating and extremely exciting. There is an overwhelming fear when I consider what could go wrong with it. It is a reflection of who we are, where we are heading, and what we want.


AI becomes like an inflection point in an already existing matrix of living intelligences, the plants and animals. This de-centring is probably one of the most valuable things that can be done.

KMD: It’s a doomed cause to try and humanise AI. It’s not possible, because it’s not human. Even if it’s very simple right now, when it becomes more complex, that’s not necessarily going to mean that it becomes more human-like. Then there’s the question of how we relate or communicate our desires to these other types of intelligences that are ostensibly being produced in order to fulfil those desires — can that actually happen, or will they always misunderstand us and we will always misunderstand them? The most fascinating gestures that I see philosophically come through my work with Benjamin Bratton. He has thought about how we can start to see ourselves as uncanny when presented with these alien forms of intelligence. He calls it the “spider problem”: AI is like a spider, with a certain kind of intelligence, but an intelligence totally unrelated to our own.


I think this is a great opportunity to de-centre ourselves. We can experience that as horror, but we can also experience it as the potential of living in relationship to other species. In that sense, AI becomes like an inflection point in an already existing matrix of living intelligences, the plants and animals. This de-centring is probably one of the most valuable things that can be done. However, the way that it’s often done is through the accelerationist idea of de-centring, upholding this other non-human entity.


BT: Do you think looking at how archaic cultures approached the non-human would be a useful way to understand AI now?


KMD: Those are certainly the most refined models that we have — those long-existing, archaic models. From the point of view of someone from the Abrahamic, Judeo-Christian West, the idea of importing our model of the non-human into AI is really scary because the dominant paradigm of non-human super intelligence is a punishing God that surveils all. So this is a good moment to evaluate whether our models of non-human intelligence are serving us. If the one that we have isn’t the most useful, how do we have an authentic engagement with an alternative?


BT: What are the testbeds or barometers for this that you have seen? 


KMD: It’s easy to use Japan as an example. You see that a lot when people present how the Japanese have a seamless relationship with, and a seeming lack of anxiety about, interaction with robots — in elderly communities, for example [3]. But there are other ways of looking at it. I was working with a student whose project was situated through the lens of the African-American community and its historical relationship with automation. They had been the ones who worked the fields — they were the labour — and when they were replaced by machines, there was almost an identification with automation, versus a fear of automation as a threat. If you’re in a dialectical power relationship, in control of the people enslaved to work the fields, and then those people are replaced by automation, you might fear automation, thinking that it might take your position. The fear of a robot uprising could just be a sublimated fear of the oppressed in society overthrowing you. Whereas from the point of view of the oppressed, there could potentially be some identification or solidarity with the automating forces through a different historical, cultural lens.


Brandon Tay, Facade, 2018, Still from an interactive app (detail).


A part of the problem is simply that all of the production is happening in one place, and all of these discursive framings are coming from one cultural milieu, which is largely Silicon Valley, on the West Coast of the United States. But clearly there are other ways of thinking that have to do with other historical relationships to automation or to non-human intelligence.


CJC: Most of the trepidation around AI is really just job uncertainty, and it’s intriguing how your student was able to link this to a predominantly white paradigm.


KMD: There are a lot of people now who seem to be afraid of automation. Lawyers come up to me and say, “My job is really easy to automate!” Whereas healthcare, raising children, or anything that requires deep human interaction is actually much harder to automate. It does shift our perspective of value.


MM: But at least in Japan, like you said, that’s the very first thing to be replaced in these eldercare facilities. So does this imply that human touch is less important, and even that can be substituted? It’s interesting to see what is perceived as an ‘easy’ task: lawyers claim, “Oh, whatever I do is just admin,” whereas taking care of others seems to be much closer to service, like a waitress — which means, if I give you food, your pills…


KMD: And furthermore, the idea of care as transformed into service does seem to import a lot of the worldview that’s producing AI. Even if they don’t use any AI, Silicon Valley startups that are just building apps are constantly automating food, or cleaning, or whatever is the next thing we can automate for wealthy people to be slightly less inconvenienced. These assumptions about what care actually is, or what food actually is, have all been imported into the picture before AI even happens, largely because of where the work is being done geographically, culturally and historically.


The fear of a robot uprising could just be a sublimated fear of the oppressed in society overthrowing you. Whereas from the point of view of the oppressed, there could potentially be some identification or solidarity with the automating forces through a different historical, cultural lens. 

MM: It matters who feeds the machine, and how. 


KMD: And also, what the machine is able to measure. Those are the senses of the automated system. We could ask, “Well, did the person get food delivered to their door? Okay, then they’re good,” but we can’t yet have a measurement for whether they had a holistic experience in eating a meal with another person. The ability to measure or quantify those things doesn’t even exist, so how do we properly automate that type of care? The limit of what these tech platforms can offer is revealed here: anything that cannot be quantified, for them, doesn’t exist.


BT: I guess if you see AI as a tool, these limits will always exist. But you can also see how it’s so much more amorphous than that. I just read Ian Cheng’s book, Emissaries Guide to Worlding, and he talks about how we are a generation of infinite games. So in his dealing with AI, it’s not about pushing forward a discourse in a single direction, but about creating a game with no set, fixed conditions.


KMD: Which doesn’t really tend to be the case with a lot of the classic examples! There’s AlphaGo, where the goal is to literally win a game [4]. It’s great to think about how an artist sees it as an opportunity to create infinite games, whereas most engineers are trying to optimise these AI systems to win one very specific game.


BT: They create closed problems.


MM: Scientists seem more invested in trying to prove a certain point, whereas artists are more interested in, and rewarded by, an open-ended process. Art is already willing to fail; it has no problem with failure.


KMD: It’s almost as if, at the point an artist becomes so successful at solving one particular problem, it becomes boring and uninteresting. They’re almost compelled to continue creating problems, or to stay in this sort of infinite game, because that’s what makes their work continue… It’s totally the opposite of engineering, where you want to be able to produce reliable results. We wouldn’t want artists who fail frequently to design bridges! [Laughs]


CJC: So Kenric, you’ve been learning from artists. In the previous case of holistic care, or in this case of game-building and worlding, some key qualities that artists tend to be more predisposed towards are empathy and curiosity. I noticed that you frequently use the word ‘import’, which is very much Silicon Valley-speak. What will it take to import these qualities — whether it’s relational connections or open-ended approaches — into the discourse and design thinking of Silicon Valley?


The art world is a really great example of a place where — somehow, despite all of its faults — there is an economic incentive to be curious. How would technology companies get there?

KMD: To be a little cynical — and I’m going to use another Silicon Valley word, ‘incentivisation’ — we will only have those values in AI when there’s an incentive for companies to have them. In my experience working inside a large tech company, metrics — the measurement of impact, or of success at accomplishing a certain goal — are how things work. So what would it take to incentivise companies to build curiosity into their tool-making? There’s a larger economic context that really pushes against that. However, the art world is a really great example of a place where — somehow, despite all of its faults — there is an economic incentive to be curious. How would technology companies get there? How would social infrastructure built on technology get there? I’m not quite sure, but I know it won’t happen unless there’s an incentive.


. . . 


"For looking at the highlights of human development, you have to look at the evolution of the organism, and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life, proceed through the hominid, coming to the evolution of mankind — Neanderthal, Cro-Magnon man. Now interestingly, what you are looking at here are three strings: Biological, Anthropological — development of cities, cultures — and Cultural, which is human expression. Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals . And in addition, if you look at the time scales that's involved here, two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you begin to see the telescoping nature of the evolutionary paradigm. And then, when you get to agriculture, when you get to scientific revolution and Industrial Revolution, you’re looking at 10,000 years, 400 years, 150 years. You’ve seen a further telescoping of this evolutionary time. What that means is as we go through the new evolution, it’s going to telescope to the point we should be able to see it manifest itself within our lifetime, within a generation.


The new evolution stems from information, and it stems from two types of information: Digital and Analog. Digital is artificial intelligence. The Analog results from molecular biology, the cloning of the organism, and you knit the two together with neurobiology. Before, under the old evolutionary paradigm, one would die, and the other would grow and dominate; but under the new paradigm, they would exist as a mutually supportive, non-competitive grouping, independent from the external. And what is interesting here is that evolution now becomes an individually-centered process emanating from the needs and the desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality, and new consciousness.


But that’s only the beginning of the evolutionary cycle, because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes — until what? Until you reach a crescendo! In a way, it could be imagined as an almost instantaneous fulfilment of human — human and neo-human — potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences, parallel existences, now, with the individual no longer restricted by time and space. And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part! The old evolution is cold, it’s sterile, it’s efficient, and its manifestations are those of social adaptation. You’re talking about parasitism, dominance, morality, war, predation. These will be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm would give us the human traits of truth, of loyalty, of justice, of freedom. These would be the manifestations of the new evolution. And that is what we would hope to see from this. That’d be nice.”


— Eamonn Healy, Waking Life [5]


  • 1.

    The result of this gestation is “The Posthuman City: Climates. Habitats. Environments”, an exhibition on view at the NTU CCA until 15 March 2020. SO-FAR published a review of Masahiro Mori’s “Buddha in the Robot” with installation views from this show: https://www.so-far.online/weekly/i-robomancer-japan-buddhism-and-artificial-intelligence/ For more information: http://ntu.ccasingapore.org/exhibitions/climates-habitats-environment/

  • 2.

    In the first chapter of this Issue, artist-technologist Andreas Schlegel provided context for the historical significance of Deep Dream to the development of machine learning: https://www.so-far.online/explaining-deep-learning-to-my-dad/

  • 3.

    These exact examples have also been discussed on SO-FAR in a dialogue between new media collective INTER—MISSION and futurist Noah Raford: https://www.so-far.online/is-pokemon-go-the-future-of-governance/

  • 4.

    AlphaGo is a computer programme that plays the strategy board game Go, and has beaten human professional champions. The AI company that developed the programme, DeepMind, was acquired by Google in 2014.

  • 5.

    Waking Life, directed by Richard Linklater (Thousand Words, 2001), DVD.

Artists and Contributors

Brandon Tay

Brandon Tay is a media artist working with the moving image in the fields of projection mapping, audiovisual performance, and mediated sculpture. His work has been shown at the National Museum of Singapore, the Singapore Art Museum, National Gallery Singapore, and other museums and galleries in the region. He lives and works in Singapore.

Christina J. Chua

Christina J. Chua is Co-Founder and Chief Editor of SO-FAR, a hybrid publication, gallery and artist incubator. Prior to founding SO-FAR, she worked at galleries and art fairs throughout Asia representing and exhibiting a spectrum of emerging to blue-chip contemporary artists from around the world. As a writer, she contributed to various international and Singapore art publications. Today, Christina is committed to bridge-building in the Singapore art scene, while developing a new generation of art patrons through her fine art consultancy and education group, Metis Art. With her interests lying at the interstices of business, technology and contemporary art, Christina is also Strategic Advisor of innovation consultancy ArtBizTech.

K Allado-McDowell

K Allado-McDowell is a writer, speaker, and musician. They are the author, with GPT-3, of the book Pharmako-AI, and are co-editor, with Ben Vickers, of The Atlas of Anomalous AI. They record and release music under the name Qenric.

Allado-McDowell established the Artists + Machine Intelligence program at Google AI. They are a conference speaker, educator and consultant to think-tanks and institutions seeking to align their work with deeper traditions of human understanding.

Magdalena Magiera

Magdalena Magiera is Curator, Outreach & Education at NTU CCA Singapore. She has worked as an independent curator and as Managing Editor of frieze d/e, and is currently an editor of mono.kultur, a quarterly interview magazine. She co-curated Based in Berlin (2011) as well as exhibitions for The Building and SPLACE in Berlin. Magiera was Project Manager of The Maybe Education and Other Programs at dOCUMENTA (13), Kassel (2012) and UNITEDNATIONSPLAZA, Berlin (2006–08). Prior to joining NTU CCA Singapore, she worked for e-flux on exhibitions and public programmes in New York City.