
Envisioning Reality with MeshMinds: The SparkAR Tutorial

Artist Weixin Quek Chong learns how to create face filters with creative studio MeshMinds, and reflects on the imaginative potential for self-visioning in augmented reality.

MightYellow, Water Bodies, 2019

“Self is. Self is body and bodily perception. Self is thought, memory, belief. Self creates. Self destroys. Self learns, discovers, becomes. Self shapes. Self adapts. Self invents its own reasons for being. To shape God, shape Self.”

Octavia Butler, Parable of the Talents [01]

Earlier this year, SO-FAR interviewed Howie Kim, a young Singaporean artist who uses social media both as a means of distribution and as reflexive fodder for his own artworks. After the interview, as a natural progression from his already therianthropic, surreal digital self-portraiture, Howie learned the wizardry of face filters. His creations (follow @howie759 to gain access to his suite) are wholly in line with an emergent trend of glossy, cyborg-esque augmented selves populating the edges of Instagram [02].


Of course, augmented reality (AR) is built on the back of computer vision. Its specific domain is called image processing, and the machines (and the corporations that own them) are learning to analyse the landscapes of our faces at breakneck speeds [03] as we cave in to our selfie addiction. They slip AI black magic [04] into our phones, and we consent, we yield, so that our tools are perfected to further reconstitute and recreate our selves. Glossy, “the self adapts”, as Octavia Butler writes.


Can this ego-entertainment be turned on its head “for good”? Creative technology studio MeshMinds provokes just that question, directing impactful AR campaigns to promote the Sustainable Development Goals (SDGs) with their partner, the United Nations Environment Programme (UNEP). The studio works with visual artists through its eponymous not-for-profit foundation, teaching them the latest technology, such as AR and virtual reality (VR), to bring these campaigns to life. SO-FAR brought artist Weixin Quek Chong into the MeshMinds studio at PIXEL, where they are incubated by the Infocomm Media Development Authority (IMDA), for a simple tutorial on Facebook’s Spark AR platform with their Chief Creative Technologist, Haikel Yusuff. Weixin’s poignant reflections on the afternoon conclude the text.


Can this ego-entertainment be turned on its head “for good”?


HY:   MeshMinds is a creative technology studio that uses AR and VR in integrated campaigns to promote action towards the SDGs, in partnership with UNEP and with the support of some of the world’s most environmentally responsible companies. We focus on goals 11 to 15 because we feel they are the most relevant to us in Singapore: sustainable cities and communities, responsible consumption and climate action, things that we can see and address right now.


WQC:   Why AR and VR?


HY: Instead of shouting out “beat air pollution” on a pamphlet, we use face filters to educate. You could send the same message on a website or microsite, but it would be a linear path of information. We do a lot of reading and listening every day to learn about new things, and it can feel mundane, so we don’t always retain information delivered through those mediums. This medium, however, is a new, fun way to highlight the different problems. We try to make things immersive, so people get to explore on their own.


We recently did a project for the British Council to promote shoes made from recycled materials by a company that was a beneficiary of one of their creative hubs in the Philippines. Historically, the British Council printed pamphlets and played videos, but they had no way of tracking audience engagement. Together, we created an AR experience operated from an image marker that could be reproduced anywhere and at any scale, and would even be accessible through a screen. An interactive menu would pop up on your smartphone and you could cycle through five buttons, each one sharing a different part of the creative process of making the shoes. We were able to include words, images and even simple animations to bring the creative process to life.


An AR experience created by MeshMinds for a sustainable shoe brand.



People use Spark AR because there’s no additional app download if you already have Facebook installed. There are multiple features in Spark AR. You have the typical image-based target tracking, which can be as small as a business card or as big as a giant banner. The biggest AR experience MeshMinds has ever done was on a four-by-two-metre panel. But face-tracking is the one that you’re the most interested in, right?


WQC:   Yes, that’s the one that everyone uses.


HY: With face-tracking, instead of an image, it’s your face. Facebook’s engine for face-tracking is really smooth to use compared to other native apps, or even WebAR, because it tracks up to five faces. If I’m not wrong, these template faces are from Facebook’s Spark AR team. They probably had to volunteer to do this, which is quite funny.


WQC:   It’s to show a diverse range of face shapes, right? But it really makes sense — Facebook has been preparing this for years, collecting all our facial data.


HY:   That’s why I don’t use Facebook. I only use it for this — for development. 


WQC:   For me, it’s too late. [Laughs]


HY: Anyway, every person has very different facial features, and that diversity allows you to test out your layers so they will work on everyone’s face.
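
For readers curious to peek under the hood, the multi-face tracking Haikel describes is also exposed through Spark AR’s JavaScript scripting API. The snippet below is a minimal sketch, assuming the FaceTracking and Diagnostics modules as documented around the version of Spark AR Studio discussed here; it simply logs how many faces are currently being tracked, up to the five-face limit.

```javascript
// Minimal sketch: watch how many faces Spark AR is currently tracking.
// Assumes the FaceTracking and Diagnostics modules of Spark AR Studio's
// JavaScript scripting API (circa the v60s builds discussed in this tutorial).
const FaceTracking = require('FaceTracking');
const Diagnostics = require('Diagnostics');

// FaceTracking.count is a reactive signal; monitor() fires whenever it changes.
FaceTracking.count.monitor().subscribe(function (event) {
  // Spark AR tracks up to five faces at once.
  Diagnostics.log('Faces detected: ' + event.newValue);
});
```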


WQC:   For Spark AR, would you need a minimum OS version to run it?


HY:   I’ve seen someone use it with a MacBook Air. It’s not so intensive unless you get really complex with your filters. Then, you might need a stronger machine. It gets complex the more interactions you have. I started using this a year back and it was at version 33. Now it’s on version 67, and they’re always releasing new features. It’s actually officially still in beta!


The process can be broken down into three phases: develop, debug and deploy. For development, we’re using Spark AR Studio. Then, there’s an app called Spark AR Player that mimics how the AR effect will behave once it’s deployed on Facebook or Instagram. It’s basically a debugging tool to see how your effect works. Finally, Spark AR Hub is where you deploy your effects. It’s on Facebook after all, so you will get statistics for every effect: the number of people who access, capture and share it.


You can use the resources on their new website. If you go to the ‘Learning’ tab, they have everything broken down according to what you want to do. And one important Facebook group is called Spark AR Community. It’s a forum where people post and showcase their stuff, as well as ask about problems and bugs, or share free patches and tutorials. There’s fun stuff and all kinds of prototypes.


A face filter created by MeshMinds to spread awareness on forests and biodiversity.



WQC: Have you seen some highly technical stuff that others have done?


HY:   Yes, there’s this Japanese user who has contributed over five videos just for debugging purposes. He did volumetric scans of himself, then made them into 3D objects and animated them in AR, which is crazy because no one would have predicted the technology being used to that extent. It’s a goldmine!


WQC:   It looks like Facebook is investing so much in feeding, nurturing and developing this huge community. Do you think the appeal is largely entertainment, or do you think there are other draws?


HY:   It really depends. Facebook and Instagram are two different worlds. Instagram’s design thinking is more about sharing. Whoever gets access to the beta programme on Instagram would see that it’s about personal moments rather than campaigns. They have a lot more constraints in the way you develop your AR: you can’t have text or brands. If you come across Kylie Jenner’s makeup AR effect, you’d realise there are no logos, just makeup. The AR on Instagram is not ad-based, as compared to Facebook, where you can market your product.


It looks like Facebook is investing so much in feeding, nurturing and developing this huge community.


WQC:   What’s the process of getting the face filter onto Instagram?


HY:   On Facebook, it’s open to everybody. But to gain access to Instagram, you have to submit a direct request. If you are a creator or a brand, you’ll gain access. Facebook does this because they want to get as many bugs out of the way as possible before opening it up to everyone.


Okay, so let’s get right into it. This is essentially the development programme. If you’re familiar with Adobe Illustrator, it will be fairly easy to use. This video is mimicking your phone’s front and back camera views. Typically for 3D environments, you will need a light source, an environment and your assets, which would be an image. If you pan around, you can see how the 3D environment looks, but it’s mimicking what you see on your phone, which is technically just a flat surface. 


There are two ways to place assets on the face: either you map them onto the face itself, or you track them positionally according to the face. If you want to do a face mask, just right-click and insert it. But before you do anything, you need a face tracker.
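
The positional approach can also be scripted. A minimal sketch, assuming a face tracker in the scene and a plane named ‘plane0’ (a hypothetical object name), binds the plane’s rotation to the first tracked face using the synchronous Scene API of that era of Spark AR Studio.

```javascript
// Minimal sketch: track an asset positionally according to the face.
// Assumes a plane named 'plane0' exists in the scene (hypothetical name) and
// uses the synchronous Scene.root.find() API from this era of Spark AR Studio.
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');

const face = FaceTracking.face(0);         // the first detected face
const plane = Scene.root.find('plane0');   // the asset that should follow the face

// Bind the plane's rotation signals to the face's rotation in camera space,
// so the asset turns as the head turns.
plane.transform.rotationX = face.cameraTransform.rotationX;
plane.transform.rotationY = face.cameraTransform.rotationY;
plane.transform.rotationZ = face.cameraTransform.rotationZ;
```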


WQC:   You just lock down a few target points like eyes, ears, nose and mouth?


HY:   Yes. The face mesh that you saw before this is an OBJ, a 3D object. But Facebook makes it convenient enough to create the facial points for you too, if you have a flat image like a PNG or JPEG. They even show you where the points are according to the grids so you can map it on the face.


Then there’s also the manipulation of the face, where you can easily pull different facial feature points. You can even bring in your own 3D models to manipulate them — it’s quite fun. There is also plane tracking, so instead of an image, it detects flat surfaces and you can place objects within the field. 


WQC: What else can you add? 


HY:   There are various render options. A shader-type would be affected by light in the environment. You can tweak how metallic it is. Then you have your face paint, which is transparent. This retouching one is the skin-smoothening function, where you have a slight foundation for the face, like Meitu…


WQC:   — Meitu XiuXiu [05]! That’s quite freaky! It has this subtle, airbrushed effect.


HY:   It’s very easy to include interaction. You can open your mouth, raise your eyebrows or even smile, and your animations will get triggered. They are very intuitive commands. Every time you open a face filter, you would open your mouth or smile right away. But if you do complex interactions, like a wink, then you would need to add instructions, which sometimes isn’t that fun anymore. The open-mouth ones are usually very easy for anyone to enjoy.
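
Those same gestures are available to scripts through the FaceGestures module. As a hedged sketch of the idea, the snippet below logs a message whenever the first tracked face opens its mouth; in a real filter, the boolean signal would drive an animation rather than a log line.

```javascript
// Minimal sketch: react when the user opens their mouth.
// Assumes the FaceTracking, FaceGestures and Diagnostics modules of Spark AR's
// JavaScript API; a real effect would trigger an animation instead of logging.
const FaceTracking = require('FaceTracking');
const FaceGestures = require('FaceGestures');
const Diagnostics = require('Diagnostics');

const face = FaceTracking.face(0);

// hasMouthOpen() returns a reactive boolean signal; monitor() fires on changes.
FaceGestures.hasMouthOpen(face).monitor().subscribe(function (event) {
  if (event.newValue) {
    Diagnostics.log('Mouth opened: trigger the animation here');
  }
});
```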


One of Haikel Yusuff’s interactive creations.



A new feature is that you can track hands, at least technically. But the downside is that you can’t really detect hands in 3D.


WQC:   Ah, I see. So it still goes by the vague 2D outline of hand shapes.


HY:   I’ve seen someone creating musical instruments based on the position of your hands, playing different notes. 


You would think this is all coded, but it’s not. It’s a node-based system, where Facebook has already written small chunks of code for you in the face trackers, and you can just plug and play everything. That’s why even someone who doesn’t know how to code can get very complex interactions; the possibilities are endless. But if you do have a JavaScript or coding background, you can create games out of your face filters.
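
As a small illustration of the game-like logic Haikel mentions, the sketch below keeps a running score each time the mouth opens. It builds on the same assumed modules as the earlier snippets and only logs the score; an actual effect might display it on a text object in the scene or feed it back into the patch graph instead.

```javascript
// Minimal sketch of game logic in a face filter: count mouth-opens as a score.
// Builds on the same FaceTracking / FaceGestures / Diagnostics modules as above;
// a real effect would display the score in the scene rather than logging it.
const FaceTracking = require('FaceTracking');
const FaceGestures = require('FaceGestures');
const Diagnostics = require('Diagnostics');

const face = FaceTracking.face(0);
let score = 0;

FaceGestures.hasMouthOpen(face).monitor().subscribe(function (event) {
  if (event.newValue) {        // only count the transition to 'open'
    score += 1;
    Diagnostics.log('Score: ' + score);
  }
});
```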


WQC:   This is like a toolbox that is growing because people are just adjusting each other’s scripts and making them more complex over time.


Has anyone tried the face tracker on non-human facial structures?


HY:   One user actually managed to track his cat’s face with it. It’s still using the face filter, just applied to a cat!


WQC:   What are the current possibilities for expanding what the face tracker can identify? Could it just pinpoint eyes, breathing holes and a mouth, and adapt to other kinds of facial forms?


DPLMT, The Mount That Keeps on Growing, 2019



HY: I think it’s because the facial features of a cat are very close to a human’s, so it’s still detectable — but maybe not all cats. 


WQC: Maybe it’s just a matter of time.


HY:   There’s another thing that could work instead of the face tracker: the image tracker. If you have ten cats that look the same, you can tag an image of the cat and track your layers onto that image.


WQC:   Do you see AR as kind of a middle ground between VR and actual physical reality?


HY:   Not really. I would say VR is still just as immersive. VR never took off because the headset is bulky, so there is a physical restriction to the experience, even if it isn’t as expensive now. If you are a consumer and you have it at home, you’d need a powerful machine. If you are a company, you’d still need to bring it everywhere, which can be an inconvenience. The new Oculus headsets are light, but they’re still expensive. 


WQC:   Very soon, I’m sure these design issues will be refined.


HY:   In terms of hardware, AR is taking off very quickly because everyone already has a smartphone. There are several companies that have invented AR glasses too. Accessing all the AR components is just as easy as going onto Facebook, and it is the next frontier of computation. This is just another way to receive communication, information, whether in the form of selfies or immersive experiences.


WQC:   The next step in our cyborg evolution.


Weixin’s Comments: 


I really enjoyed Haikel Yusuff’s playful introduction to this new medium, AR, as well as the optimism with which he approached it as an instrument to capture the elusive attention of the contemporary user/viewer/consumer. It is a medium that has proliferated and quickly been integrated into contemporary daily life — one that I’ve observed with fascination as something which reveals the universality of the impulse to look at oneself, as if repeatedly mirror-testing one’s human capacity for self-awareness. Even for that segment of society who stringently avoid this self-imaging, their aversion defines this commonplace desire through its opposites. 


Riding on the mass integration of platforms like Instagram and other online means of self-publishing one’s likeness and individually-created photo and video content to the world, AR filters have become another tool in manipulating the way we see and portray ourselves in captured versions of reality. On one hand, this follows in the historical legacy of self-portraiture as a genre: a tool of fantasy, disguise, transcendence, satire, reduction… expressions like any other more physical “forerunner” technologies such as makeup, painting, drawing, sculpting… On the other hand, this adjusted perception — or imposition on our perception — also becomes somewhat of a moulding force, shaping the way we imagine and see ourselves and others, while sharing and evolving new visual identities and ideals. The rise of “Snapchat dysmorphia” accusations in the media [06] links the use of filters to an increase in plastic surgery requests that reference “filtered” versions of the self, yet these accusations also open another arena of enquiry into what the fundamental influences on changing ideals of beauty are in the first place.


This adjusted perception — or imposition on our perception — also becomes somewhat of a moulding force, shaping the way we imagine and see ourselves and others, while sharing and evolving new visual identities and ideals.


Humans have always built upon — and referenced — themselves, creating a never-ending mythology about what they are or want to be… The bare reality that one is a consciousness inhabiting a particular body, with its being and visuality, can precipitate a jarring shock. At its very heart, realising this limitation can be both frightening and frustrating without myth and narrative; indeed, narratives wrap themselves over the bare bones of one looking into an existential horror vacui.


We often crave a means of escape while living under seemingly immovable societal structures that pre-frame individual existences. Perhaps the ability to create these augmented, alter-reality images of ourselves and others in real-time is not only entertainment, but relief. They offer an escape route for the imagination (even if it is via others, i.e. the filter creators whose visions we enact on our own faces) to be spliced into one’s minute moments. Staring at images of ourselves can be something so very close to puncturing the water-surface-skin of our own awareness, and yet, as a form of entertainment and self(ves)-creation so many of us now casually tread that fine line for hours, regularly creating small stories of Selves within the larger structured stories of these platforms’ corporate and commercial interests. 


That said, I admire the creativity and energy with which some engage with this medium that is so easily dismissed as frivolous. While AR adds to the arsenal of mundane entertainment and commercial presences that dissipate and distract our precious attention spans, so vehemently battled over in the daily onslaught of our lives as inhabitants of both urban and online worlds, it can also be a reminder of that spark of imagination and self-visioning which exists in every individual — an existential need beyond pragmatism, the force of which carries the potential to sway us, hijack or lockdown our motivations and ambitions. 


As the tools of changing ourselves — physically as well as virtually — evolve, one can only wonder how far the contemporary body already is from what is “natural”. As these unfixed realities continue to layer over each other, coming from and giving rise to facets of our individual consciousness, I find myself looking to the science fiction author and brilliant mind Octavia Butler, who envisioned extreme scenarios of survival, adaptation and blending together of perceptions, abilities and forms.


AR can also be a reminder of that spark of imagination and self-visioning which exists in every individual — an existential need beyond pragmatism, the force of which carries the potential to sway us, hijack or lockdown our motivations and ambitions. 


Where are we going with all this? In the meantime, may we continue to envision ourselves, perhaps with the virtual world boosting — not detracting — our imagination in the physical.


The MeshMinds Collection is a series of limited edition artworks created through the artist incubator, exclusively distributed in partnership with SO-FAR. 40% of the proceeds go back to The MeshMinds Foundation to further its activities.


  • 01.

    Butler, Octavia E. “Parable of the Talents” (London, UK: Headline Publishing Group, 2014).

  • 02.

    Follow @johwska, a trend-setter in this arena, according to The Verge: https://www.theverge.com/2019/2/27/18212783/instagram-creator-effect-filters-beta-program

  • 03.

    Get into the nitty-gritty techniques of image processing here: https://medium.com/cracking-the-data-science-interview/snapchats-filters-how-computer-vision-recognizes-your-face-9907d6904b91

  • 04.

    In case you missed it, FaceApp got a media bashing earlier this year for its murky privacy policy, but Snapchat, Flickr and Facebook are just as culpable: https://qz.com/1005879/snapchat-quietly-revealed-how-it-can-put-ai-on-your-phone/, https://futurism.com/face-images-train-ai-consent.

  • 05.

    Meitu is a Chinese technology company founded in 2008 that is known for developing beauty apps. Its suite of apps has been activated on over 1.1 billion unique devices worldwide.

  • 06.

    In response to teenagers seeking plastic surgery to match the airbrushed aesthetic of their face filters (read https://www.theguardian.com/lifeandstyle/2019/jan/23/faking-it-how-selfie-dysmorphia-is-driving-people-to-seek-surgery), Spark AR recently announced an update to its well-being policy, removing all filters associated with plastic surgery from Instagram. https://www.facebook.com/groups/SparkARcommunity/permalink/733132997098745/

Artists and Contributors


Christina J. Chua

Christina J. Chua is Co-Founder and Chief Editor of SO-FAR, a hybrid publication, gallery and artist incubator. Prior to founding SO-FAR, she worked at galleries and art fairs throughout Asia representing and exhibiting a spectrum of emerging to blue-chip contemporary artists from around the world. As a writer, she contributed to various international and Singapore art publications. Today, Christina is committed to bridge-building in the Singapore art scene, while developing a new generation of art patrons through her fine art consultancy and education group, Metis Art. With her interests lying at the interstices of business, technology and contemporary art, Christina is also Strategic Advisor of innovation consultancy ArtBizTech.


Haikel Yusuff

A graduate of LASALLE College of the Arts with a BA (Hons) in Design Communication, Haikel oversees the design and strategy of all projects at MeshMinds. With an interest in both art and technology, he assists with the conceptualisation and development of diverse projects ranging from augmented reality to projection mapping.


Weixin Quek Chong

Weixin Quek Chong is a visual artist whose work explores materialities and relationships between the digital, organic and aesthetic. Her recent projects are inspired by processes of transformation in the biodiverse world and cyborg adaptation, incorporating images, sculptural objects, installations and sensorial explorations that frequently involve tactility, sound and immersive environments. Weixin graduated with an MA from the Royal College of Art (London) and a BA (Hons) from LASALLE College of the Arts (Singapore), and was a recipient of the NAC Overseas Arts Scholarship (Postgraduate) (2012) and the Tan Ean Kiam Postgraduate Scholarship. In 2018 she received the President's Young Talents Grand Prize conferred by the Singapore Art Museum, and in 2019 the National Arts Council of Singapore's Young Artist Award. Working between Madrid, Singapore and London, she has presented projects in Singapore, London, Seoul, Santiago, Yogyakarta, Taipei and Brussels, among other cities. She is an alumna of artist residencies at the Museum of Modern & Contemporary Art (Korea), the NTU Centre for Contemporary Art (Singapore) and Museo de Arte Contemporáneo-Molten Capital in Santiago, Chile, as well as co-founder of the collaborative project soft/WALL/studs.


DPLMT

DPLMT is a multidisciplinary art collective formed in 2017 by Raihana (Araikreva), Iqbal (EBAO), Shakir (The Terror Troopers), Kath (Katkucing) and Shamyl (Myl). What started as an avenue for the team to work on mural projects together has since grown, and DPLMT now works with various companies and brands to create experimental visual solutions. Their philosophy is set on infusing their cultural heritage and experiences into their art while exploring new means of expressing their stories, such as immersive experiences and augmented reality.


MightYellow

Adeline Tan aka MightYellow is an illustrator and visual artist based in Singapore. Taking inspiration from personal experiences, the environment and popular culture, Adeline imagines alternate realities to current situations. She pursued a career in advertising, branding and technology before becoming a full-time illustrator, and has built up a clientele that includes Facebook, Sonos and Uniqlo.