
Researching Alice, Bob & Eve, Part 1

Artist Teow Yue Han questions roboticist Dr. Marcelo Ang on AI’s applications for emotional care, as research for a performance on movement and technology.

Mobile robot from the lab

When one thinks about human encounters with artificial intelligence (AI), the humanoid robot comes to mind, and with it surfaces a host of hopes and anxieties: can AI ever embody, or even override, the human mind and intuition? Can we teach AI to help us for good? Can AI machines establish real human connections? In August 2018, I collaborated with the contemporary dance company RAW Moves to launch a "Movement Research Living Laboratory" under the title Alice, Bob & Eve. Its aim was to investigate the relationship between movement and new technologies: how the digital world choreographs our movements in daily life, whether or not we are aware of it, and how we might reflect on these issues artistically and critically.


We went out in search of applications to contextualise our exploration in the real world. It was not difficult to find examples where personality-driven humanoids and AI have already been developed to cater to human needs and wants. In particular, we noticed that most of the examples that reached mainstream attention centred on the provision of emotional care and labour by an AI robot or humanoid. Was this a sign that our communities and societies have been so lacking in emotional care and support that we have become unavailable to each other, or have failed to communicate our emotional needs? Are we turning away from what seems to be the disappointing and inconsistent fellow human, and instead reaching for AI as a more consistent and reliable substitute?


Are we turning away from what seems to be the disappointing and inconsistent fellow human, and instead reaching for AI as a more consistent and reliable substitute?

One of the applications we were drawn to was Akihiko Kondo’s public marriage to a hologram of Hatsune Miku, which stirred international interest and debate [1]. Parallels were drawn to Spike Jonze’s 2013 film Her, in which the protagonist similarly falls in love with a personal operating system. We also discovered the Youper app, billed as a “first-responder for emotional health problems for people all over the world and to ensure that no one has to wait years to address their issues” [2]. Its intuitive, immediate series of responses was developed through a collaboration between a psychiatrist, a software engineer, and a designer. As our team explored its UX (user experience), some felt disengaged in spite of the “live chat” feel, finding that automated responses could not substitute for human connection and care, while others felt the methodical process was well informed and that this one technology could benefit many. In Singapore, we also learned of a Robocoach, developed by the Electrical Engineering department of Ngee Ann Polytechnic, designed to lead seniors in exercise classes and provide encouragement [3].


Dr. Marcelo Ang demonstrates the equipment in his laboratory.


In the months leading up to our final public presentation of the Movement Laboratory’s findings, SO-FAR supplemented our research with an opportunity to meet Dr. Marcelo Ang of the Advanced Robotics Centre at the National University of Singapore. I was joined by the artist and frequent collaborator Urich Lau to ask Dr. Ang about the development and aspirations of roboticists along this line of AI and emotional labour.


Teow Yue Han: I’m interested in understanding the movement of the robotic arms, and the process of calibrating them.


Marcelo Ang: This robotic arm is perhaps the most advanced in the world because there is a force sensor in each of its joints, allowing the robot to have a sense of feeling. Most robots are position-control devices and don’t have that. Imagine moving your arm without feeling your arm — this is how most robots are designed to move. But this robot can feel its own joints, and it is the first truly collaborative robot meant for human environments. Let’s say it’s moving and it hits something. It won’t hurt you, because it feels that it has touched you. You can programme it to be soft or stiff, or to behave like it’s underwater, like when you’re swimming underwater and cannot move very fast.
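
The programmable softness Dr. Ang describes is the idea behind impedance control: the commanded force behaves like a virtual spring and damper around a desired pose, so lowering the stiffness makes the arm feel compliant. A minimal sketch in Python, where the gains and target pose are illustrative assumptions, not the lab’s actual controller:

    import numpy as np

    def impedance_force(x, x_dot, x_des, x_des_dot, K, D):
        """Virtual spring-damper law: F = K(x_des - x) + D(x_des_dot - x_dot)."""
        return K @ (x_des - x) + D @ (x_des_dot - x_dot)

    # Illustrative gains: a large K feels rigid; a small K feels "soft",
    # as if the arm were moving underwater.
    K = np.diag([300.0, 300.0, 300.0])   # stiffness [N/m]
    D = np.diag([30.0, 30.0, 30.0])      # damping [N*s/m]
    f = impedance_force(np.zeros(3), np.zeros(3),
                        np.array([0.10, 0.0, 0.0]), np.zeros(3), K, D)
    print(f)   # about [30, 0, 0] N, pulling toward the desired pose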


This is very important in the field of robotics. Robots now work well in factory automation. Factories are what we call “structured environments”, where the workpieces are exactly where they should be. The robots are supposed to pick something up and put something down, and they know where it is, so they don’t need to look — they can just operate. That’s why we call it a “lights-out factory”. They can do things very fast because everything is precise. And the robot knows all this because somebody has taught it, moving it one joint at a time, and it remembers the positions accurately.
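
This teaching process is often called teach-and-replay: the operator moves the arm through its poses, the robot memorises them, and then repeats them exactly. A toy sketch, where the poses and the print statement are illustrative stand-ins for real motion commands:

    # Teach-and-replay sketch: record joint poses one at a time, then repeat them.
    waypoints = []

    def teach(joint_angles_deg):
        """Memorise a pose the operator has moved the arm into."""
        waypoints.append(list(joint_angles_deg))

    def replay():
        for pose in waypoints:
            print(f"moving to {pose}")   # stands in for a real motion command

    teach([0, 45, 90])    # taught pick pose
    teach([30, 10, 60])   # taught place pose
    replay()              # the robot repeats the exact taught motions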


TYH: Is that the most efficient way?


MA: Well, it works well so far. 


The bigger impact is taking this robot out of the factory and into everyday life. That means into my home, into the shopping mall, the airport, or outdoor parks — anywhere except the factory. The difference is that these environments are “unstructured”, which means that the positions of things are not known. So that is the challenge: how to make the robot work in unstructured environments that are designed for the movement of human beings. Who is the shopping mall or your home designed for? People. Factories are not designed for people — they are designed for machines, and people are kept separate. We want the robot to adapt to an environment designed for human beings, so that we won’t need to modify the environment for the machine to work.


A mobile robot being developed in the lab.


A good example is a mobile robot, like a vehicle that moves around. In the early days, for these robots to move, you needed to have tape on the floor to guide them. But today, you don’t need that. 


Another good example is a self-driving vehicle. As the car moves into active traffic, we want it to drive better. Most of the research is to make robots first sense and understand the environment, then know what to do from that understanding, plan and make decisions, then execute them, and finally learn and improve. Robots pick up experience much as humans learn from experience. But these four things — sense, plan, act, learn — how do we do them? That’s the challenge of artificial intelligence.
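
The sense-plan-act-learn cycle Dr. Ang names is commonly written as a control loop. A schematic Python sketch, where every component is an illustrative stub rather than code from the Advanced Robotics Centre:

    # Schematic sense-plan-act-learn loop; all components are illustrative stubs.
    class Robot:
        def __init__(self):
            self.experience = []                  # memory for the "learn" step

        def sense(self):
            return {"obstacle_ahead": False}      # stub perception

        def plan(self, observation):
            return "stop" if observation["obstacle_ahead"] else "forward"

        def act(self, action):
            print(f"executing: {action}")

        def learn(self, observation, action):
            self.experience.append((observation, action))

    robot = Robot()
    for _ in range(3):                            # three cycles of the loop
        obs = robot.sense()
        action = robot.plan(obs)
        robot.act(action)
        robot.learn(obs, action)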


What’s also very important to achieve is hardware that is meant to be soft, if you will, and human-friendly. A typical industrial robot looks intimidating, right? And it’s very unsafe, because it doesn’t care. It doesn’t have force sensors, and it doesn’t need to. So we need a new generation of robots called “soft robotics”, where the materials are sponge-like. Remember that children’s movie Big Hero 6? Baymax is a balloon creature that looks like a robot. Imagine this robot, with limbs that are all made of soft balloons. It can move and help you, and it can feel things. That’s the hardware — the body. Then you have the software and AI, which provide the control centre and communicative capabilities for the body. The body needs to be soft because you cannot sense, plan or execute with 100% accuracy, so you must be able to tolerate mistakes, and the softness should be enough that you don’t hurt people.


A typical industrial robot by German manufacturer KUKA is adapted to perform minute tasks by being fitted with a "macro-mini framework".


Going on to the process of developing the machines, we try different things, like this “macro-mini framework”. A big robot is very rough and cannot exert small forces, yet minute tasks require a lot of precision and gentleness, making them almost impossible for it alone. So what we do is buy a big robot, build a small, sensitive robotic hand, and combine the two.
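
One way to picture this division of labour is as a coarse positioning stage plus a fine corrective stage. A toy sketch, where the 5 mm macro resolution and the function name are illustrative assumptions, not the lab’s actual design:

    # Macro-mini sketch: the big arm gets close, the small hand corrects the rest.
    def macro_mini_split(target_mm, macro_resolution_mm=5.0):
        """Split a target position into a rough macro move and a fine mini move."""
        macro = round(target_mm / macro_resolution_mm) * macro_resolution_mm
        mini = target_mm - macro          # small, gentle correction
        return macro, mini

    macro, mini = macro_mini_split(123.4)
    print(macro, mini)   # 125.0 and about -1.6: one rough move plus a precise one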


I also know a German startup called Franka Emika that sells a robot called the Panda [4]. It’s very advanced and has force sensing.


TYH: Is this considered haptics? 


MA: Not exactly. Haptics is more about a sense of feeling. This is about force control, and feeling the pressure. 


But in terms of haptic devices, we created a robotic joystick that has force feedback. When you’re moving it, you can feel the cushion. If it touches a brick wall, it feels like a brick wall. And the feedback is programmable, so it’s very good for training. Let’s say you want to lift up a bottle. Instead of just looking at how it moves, you can feel how to do the task and connect this to the robot. When I lift it, I feel the weight of the bottle: 1 kilogram. Or you can scale it to make it feel like 100 grams, because it’s all computed. It’s almost the same concept as tele-robotics. There is a robot for surgery called da Vinci [5]. It’s minimally invasive: the surgeon operates two joysticks much like this one and just looks at the monitor.
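
The scaling he mentions is just a programmable gain applied to the sensed load before it is rendered at the joystick. A minimal sketch; the function name and the 0.1 factor mirror his 1 kg to 100 g example and are illustrative:

    # Programmable force feedback: scale the sensed load before the operator feels it.
    GRAVITY = 9.81  # m/s^2

    def rendered_force_newtons(measured_mass_kg, scale):
        return measured_mass_kg * GRAVITY * scale

    full = rendered_force_newtons(1.0, 1.0)    # 1 kg bottle at full weight: ~9.8 N
    light = rendered_force_newtons(1.0, 0.1)   # same bottle scaled to ~100 g: ~1 N
    print(f"{full:.2f} N vs {light:.2f} N")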


Urich Lau: I’ve read that a surgeon can even be overseas.


MA: They are testing this, because of the delay in response. Now, with 5G coming, that won’t be such a problem. But even if you lose communications for a split second, the robot is still stable, so it just waits. It doesn’t do anything wrong or go crazy. That’s why any time you have a human-robot system, the robot must have some intelligence, so that the human doesn’t always need to talk to the robot.
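
That “just waits” behaviour is typically implemented as a communications watchdog: if no operator command arrives within a timeout, the robot holds its last safe state. A minimal sketch; the 200 ms timeout and the class are illustrative assumptions:

    import time

    TIMEOUT_S = 0.2   # illustrative: hold position if the link is silent for 200 ms

    class Teleoperator:
        """Watchdog sketch: the robot waits safely when commands stop arriving."""
        def __init__(self):
            self.last_rx = time.monotonic()

        def step(self, command):
            now = time.monotonic()
            if command is not None:
                self.last_rx = now
                print(f"executing: {command}")
            elif now - self.last_rx > TIMEOUT_S:
                print("link silent: holding position")   # wait, don't go crazy

    op = Teleoperator()
    op.step("move left")   # command received over the link
    time.sleep(0.3)        # simulate a dropped connection
    op.step(None)          # no packet arrived: the robot just waits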


The da Vinci system is already quite old, and if I tell you the truth, you will be really worried about it. Imagine a doctor operating just by looking at the screen, cutting the heart or some vital organ, but feeling nothing. There is no force feedback — he can only see the screen. But to take robots from the factory to everyday life, you need force feeling. Humans are not 100% accurate, and yet we can do many things quite accurately because we have feeling. So they are working on developing that.


In modern robotics labs like this one, we are working on how to make the robot useful for everyday life, so that it can be your domestic worker, to cook and clean up for you. It can be the ultimate slave, for lack of a better term. It also becomes your social companion, or even like your wife if you were to live alone.


There are many applications in the field of healthcare. We’re also collaborating with a Japanese start-up called WHILL to make an intelligent wheelchair with a camera and range sensors in it [6]. Going back to the idea of the “ultimate slave”, we need mobility. You have to be able to tell it, “Go to the bathroom,” and it’s safe because it’s the computer driving — not a human. But I also wanted to design it in kind of a “cool” way, like a Personal Mobility Device (PMD), so that when people see an elder using it, they won’t pity them.


A prototype of a WHILL Intelligent Personal Electronic Vehicle.


UL: There are now robots meant for old folks’ homes for —


MA: — Companionship, yes. They’re not only there to help you physically, but also just to talk. We are already using the same technology with Siri or Alexa, asking them, “What’s showing on TV now?” But those devices are all just data. The ultimate achievement would be a robot that’s capable of moving, with an arm to do physical tasks, and with a level of intelligence to be your interactive companion. Do you remember the cartoon Teletubbies? Imagine if that were a model companion for your elderly mother or father. Instead of using a phone to call people, you use a Teletubby. You could tell it, “Hey, call my son Mark,” and Mark’s picture appears through a video call. And because it’s a robot with haptics, your son not only talks to you, he can gesture to you through the robot. You could even receive a remote kiss. And it’s not only for older people. Say you want a nanny for your kids? This robot can be your babysitter. It’s a big goal, but all our projects are leading to this ultimate thing.


We are working on how to make the robot useful for everyday life, so that it can be your domestic worker... or the ultimate slave, for lack of a better term.

UL: Like The Jetsons! Or like the Sony AIBO dog [7].


MA: There’s also a robot from Japan called PARO. It’s a robotic seal, like a cute stuffed toy for —


UL: For old folks’ homes. 


MA: Yes, and what I really like about it is not only its intelligence — it’s also cute! It reacts to how you pinch it and handle it, and its reactions can be random too. You charge it like a baby’s bottle. Takanori Shibata of the Japanese research agency AIST (National Institute of Advanced Industrial Science and Technology) built it [8]. What’s interesting is how he conducted studies in hospitals in Japan, amongst people in the ICU who are not allowed visitors and become lonely. In the study, one group had PARO, and a control group did not. After a few months, the recovery of those with PARO was significantly better, statistically. It makes sense, right? We underestimate the power of the mind to heal the body.


And of course, the highest aspiration is to reach cognitive capabilities. The problem is that the things humans do easily, robots are not yet ready to do. For example, picking up this object — you just do it, right? But if you want to teach a robot how to do that, how would you do it? You cannot just put it into steps. That’s my definition of intelligence: the ability to do something intuitively, without needing to break it down into step one, step two. Because the latter is no longer intelligence, it’s just following instructions.


My interest in AI is to teach robots to do things that humans do so easily, because, ironically, those things are very hard for robots. And since humans do it through learning, I ask: how can robots learn? I’m also very interested in building an artificial world by simulation. If I can create a simulated environment that is “the world”, then that world can be used to train the robot. How long does it take to learn how to drive a car? Maybe one or two years. To be a professional driver, maybe five to ten years. But we cannot wait that long, right? So if you have a simulated environment of the world, and I place the driverless car there, I can run the simulator fast-forwarded, ten times faster or more. So ten years becomes one year, or one year becomes one minute. The key is how to build a world that is as realistic as ours.
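
The arithmetic behind that fast-forwarding is simple: wall-clock training time is the required driving experience divided by the simulator’s real-time speed-up factor. A toy sketch using the factors he quotes, not measured numbers:

    # Simulated experience: wall-clock time = experience / real-time speed-up factor.
    MINUTES_PER_YEAR = 365 * 24 * 60

    def wall_clock_years(experience_years, speedup):
        return experience_years / speedup

    print(wall_clock_years(10, 10))                 # 10 years of driving -> 1 year
    print(wall_clock_years(1, MINUTES_PER_YEAR))    # 1 year -> about 1 minute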


That’s my definition of intelligence: the ability to do something intuitively, without needing to break it down into step one, step two. Because the latter is no longer intelligence, it’s just following instructions. 

The introduction to this dialogue was adapted from the “Research Guide for Laboratory Participants” that accompanied RAW Moves and Teow Yue Han’s performance Alice, Bob & Eve. The research guide was written and edited by Nah Dominic, with key concepts from Teow Yue Han. The final work was presented on 22-23 February 2019 at the School of the Arts (SOTA) Gallery. Photographs by Urich Lau.


  • 1.

    Read more about this curious phenomenon here: https://www.straitstimes.com/asia/east-asia/ive-been-thinking-about-her-every-day-japanese-man-marries-hologram

  • 2.

    The app can be accessed here: https://www.youper.ai/

  • 3.

Read more about this elderly exercise programme here: https://www.ihis.com.sg/Latest_News/News_Article/Pages/Singapore_turns_to_robots_to_get_seniors_moving.aspx

  • 4.

Franka Emika has developed a soft robot for complex automation: https://www.franka.de/technology

  • 5.

    The da Vinci Surgical System was developed by a company called Intuitive Surgical, and approved by the Food and Drug Administration in 2000. It has since completed hundreds of thousands of surgeries worldwide, especially for hysterectomies, prostate removals and cardiac valve repair.

  • 6.

Have a look at WHILL's Intelligent Personal Electronic Vehicles here: https://whill.us

  • 7.

Sony produced the AIBO series of robotic pets from 1999 to 2006, then discontinued them. A new generation of AIBO was reintroduced in 2018.

  • 8.

Designed by roboticist Dr. Takanori Shibata in 1993, PARO is a robotic baby harp seal meant to provide calming therapy for hospital patients, especially those with dementia and autism.

Artists and Contributors

Teow Yue Han

Born in 1987, Singapore, Teow Yue Han received a BFA in Digital Filmmaking at the School of Art, Design and Media, Nanyang Technological University (2012). He later pursued an MA in Fine Art Media at Slade School of Fine Art, University College London (2016), where he was a recipient of the 2016 Julian Sullivan Award. Teow Yue Han’s works explore the interface between video, performance art and technology. He is interested in the way new technologies such as smart cities or artificial intelligence are shaping society, culture and the urban landscape. He creates situations where gestures and social interactions that are informed by these technologies can be interrogated, rehearsed and renewed. Teow is a core member of INTER—MISSION, an art collective focusing on art and technology. He lives and works in Singapore.

Nah Dominic

Nah Dominic is a Singaporean educator, researcher, dramaturg and writer with an interest in literature education at both tertiary and secondary levels. He is currently Associate Faculty at the Singapore University of Social Sciences, teaching undergraduate courses in English Literature, and a Research Assistant at the National Institute of Education on a project concerning cosmopolitan approaches to literature education in secondary schools. As a freelance dramaturg, he also works on both theatre and dance projects.

Urich Lau

Born in 1975 in Malaysia, Urich Lau received a BFA from the University of Tasmania, Australia (2001) and an MFA from the Royal Melbourne Institute of Technology (2004). Urich Lau’s works engage with the history of art and culture, and the intents of contemporary art discourse, while speculating on future manifestations of technology. He works with video, photography, sound, performance, and mixed media installations, creating contextual irony out of his interventions and interruptions. Lau is a founding member of INTER—MISSION, an art collective focusing on art and technology, a member of The Artists Village, and Instinctive (INSTINC Art Space). He is also an art educator at LASALLE College of the Arts and an independent curator. He lives and works in Singapore.

Marcelo H. Ang, Jr.

Dr. Marcelo H. Ang, Jr. received B.Sc. degrees (cum laude) in Mechanical Engineering and Industrial Management Engineering from De La Salle University, Manila (1981); an M.Sc. degree in Mechanical Engineering from the University of Hawaii (1985); and M.Sc. and Ph.D. degrees in Electrical Engineering from the University of Rochester, New York (1986, 1988). His work experience includes heading the Technical Training Division at Intel Philippines; research positions at the East-West Center in Hawaii and at the Massachusetts Institute of Technology; and a faculty position in Electrical Engineering at the University of Rochester. In 1989, Dr. Ang joined the Department of Mechanical Engineering of the National University of Singapore. He is also the Acting Director of the Advanced Robotics Centre. His research interests span robotics, mechatronics, and applications of intelligent systems methodologies. He teaches robotics; creativity and innovation; applied electronics and instrumentation; advanced computing; and product design and realisation. In addition to academic activities, he is actively involved in the Singapore Robotic Games as its founding chairman and the World Robot Olympiad as a member of its Advisory Council.