Christie Lau gamifies the everyday by making clothes for the Metaverse
This designer’s graduate collection uses over 10 different software programs to warp familiar garments and environments and place them in a VR/AR setting.
Three giant white boxes plastered with QR codes stole the show as they shuffled down the runway at the recent Central Saint Martins BA Fashion graduate showcase. The boxes were a relatively last-minute solution: Christie Lau wasn’t entirely sure how to present a digital fashion collection in a conventional runway setting. The collection consists of a virtual reality experience with three looks in corresponding environments, and not everyone in the room could exactly be wearing a VR headset. What everyone does have, though, is a phone, so augmented reality Instagram filters let the audience view the looks.
The fashion print student’s collection sits in the uncanny valley. The phrase technically describes the discomfort of looking at robots or simulations that appear almost human, but not quite. Yet it fits Lau’s collection, whose digital garments and settings are inspired by “everyday environments, everyday characters” manipulated to seem “slightly off”. Reinterpreting real-life mundanity for the Metaverse, these normal scenarios are “reimagined in an absurd way”, giving the feeling of something being “not really quite right”.
Take the environment that merges the London Underground with a supermarket. Stocked shelves exist on a train beside seats printed with Sainsbury’s “Great Prices” logo. Or the all-too-familiar Windows XP background rendered as a 3D environment as if you’ve been trapped in your screen. (“For me, that’s an everyday environment because I’m always on the computer,” said Lau.) The clothes, too, behave in ways they couldn’t in real life, simulated with the gravity setting at zero in 3D fashion design software CLO3D, so that clothes stretch upwards, float and billow out.
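A toy sketch can show what switching off gravity in a cloth simulator does. This is not CLO3D’s actual solver (which is proprietary); it is a minimal, hypothetical one-axis integrator assuming a single garment vertex, a constant gravity term and a small wind force:

```python
# Toy illustration (not CLO3D itself): a cloth solver advances each garment
# vertex under forces every frame. Setting gravity to zero, as Lau does in
# CLO3D, removes the downward pull, so any other force makes fabric drift up.

def step(position, velocity, gravity, wind=0.0, dt=1 / 30):
    """Advance one vertex of a toy cloth sim by a single frame (y-axis only)."""
    velocity += (gravity + wind) * dt
    position += velocity * dt
    return position, velocity

# Normal gravity: a loose drawstring falls over 30 frames (one second).
y, v = 1.0, 0.0
for _ in range(30):
    y, v = step(y, v, gravity=-9.81)

# Gravity at zero: the same drawstring floats upward on a slight breeze.
y0, v0 = 1.0, 0.0
for _ in range(30):
    y0, v0 = step(y0, v0, gravity=0.0, wind=0.5)
```

With gravity on, the vertex ends below its start height; with gravity zeroed, even a tiny upward force sends it floating, which is why the hoods and drawstrings rise instead of hanging.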
The environments were created by virtual production artist Richard Taylor. Digital fashion encourages collaboration, according to Lau, as executing everything from start to finish would be a mammoth task. For the subway environment, Lau tapped their friend, graphic designer Jessica Sanders, to create comical tube posters. One declaring “Level 3 Hooligan Warning” is a nod to those who can’t stand the obnoxious hollering of drunk football fans on the Tube.
Lau wanted to subvert expectations of what virtual spaces can be: “You have the expectation of it being something fantastical, not this reality.” Instead, their idea was to mix contexts of the physical and the digital, “as we progress towards a more mixed immersive reality.” The idea behind mixing contexts partly “came from the idea of teleporting in VR. One minute you’re in this environment and the next you can teleport somewhere else.” So, what if you were stuck between two places?
The designer also has a fascination with computer glitches and things simulated wrong. “Everyone has this idea of digital simulations being so perfect. It’s almost too perfect. And these glitches give that sort of organic aspect to it. That’s what makes digital relatable for me.” Lau’s collection explores the physical world “through errors in machines trying to understand and differentiate between everyday objects.” For an example of the limitations of AI, Lau cited the time in 2017 when people tested whether an AI could tell the difference between a blueberry muffin and a Chihuahua, with mixed results.
Lau used people-watching on their commutes as research, documenting figures such as a 9-5 working man in a suit, a “dude in streetwear”, a grandma in the grocery aisle or an office woman with a “nice handbag”. As Lau gamifies these mundane spaces, the real people in them become “recognisable archetypes”, like non-player characters in a video game. (Going further down the rabbit hole of the gamification of real life, there’s the trend of NPC videos, where clips of absurd real-life scenarios are overlaid with video game music: an altercation with an angry stranger becomes a side character impeding your mission.) The clothes themselves are “recognisable, everyday garments”, but Lau wanted to simulate them in ways only possible in digital space.
“Off to the shops!” is a look consisting of a hoodie, puffer jacket and jeans. The model rides a floating skateboard that takes the form of a “Great Prices” sign at one moment and an Oyster card the next. Lau’s work has an irreverence and a sense of humour that channels the absurd irony of the meme generation. The hood and drawstrings float upwards, while the jeans billow out as if filled with air. The puffer jacket carries an oscillating version of the print you’d find on your Tube seat, complete with a glowing red London Eye. In Substance Painter, the TfL puffer’s base image is animated to scroll across the surface, giving the effect of a moving print.
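The scrolling effect can be sketched without Substance Painter (whose internals are not shown in the article): the idea is simply that the texture’s UV offset advances every frame and wraps around, so the seat print slides across the jacket in an endless loop. A minimal, assumed version:

```python
# Sketch of a scrolling texture: each frame, the UV offset advances and
# wraps modulo 1, so the print slides across the surface seamlessly.
# (Illustrative only; not Substance Painter's actual mechanism.)

def scroll_offset(frame, speed=0.02):
    """UV offset in [0, 1) at a given frame; wrapping makes the loop seamless."""
    return (frame * speed) % 1.0

# At the hypothetical speed of 0.02 UV units per frame, the print has
# travelled exactly one full wrap after 50 frames and starts over.
offsets = [scroll_offset(f) for f in (0, 10, 50)]
```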
Inspired by the persistent moth problem in Lau’s kitchen, the suit look features the houndstooth pattern flying off the jacket, swirling around the model. Aptly titled, “A Moth Problem,” the idea came from a YouTube tutorial of how to animate a moth particle system in 3D computer graphics software Blender. (Like many, much of Lau’s knowledge comes from hours spent watching tutorials and processes of trial and error.) They created a moving houndstooth sequence to use as a video print and then applied individual animation frames to create the texture using Substance Painter.
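Applying individual animation frames as a texture works like flipbook playback: at any moment, one frame of the rendered houndstooth sequence is shown, and the index cycles so the print loops. A small sketch, assuming a hypothetical 48-frame sequence at 24 fps (the article does not give Lau’s actual frame count or rate):

```python
# A video print swaps the texture every frame. Given a rendered houndstooth
# sequence (hypothetical length and frame rate), the frame shown at time t
# simply cycles through the sequence.

def frame_at(t_seconds, fps=24, num_frames=48):
    """Index of the sequence frame displayed at a given time; loops seamlessly."""
    return int(t_seconds * fps) % num_frames
```

Two seconds into playback, a 48-frame loop at 24 fps is back at its first frame, which is what makes the swirling moths repeat without a visible seam.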
The “Twitchy Trench Coat” is animated using shape key animation in Adobe Mixamo to pulsate and twitch on the body: “I imagined it sort of like an ill fit. As if the trench coat is wearing you.” And the huge AirPods atop the hat and matching case bag? “That was a 2am idea.” The shifting print on the trench was made with StyleGAN2, a Generative Adversarial Network that learns patterns from inputted data and then outputs its own. Wanting to explore prints as living organisms, Lau trained a generative morphing print on a dataset of timeless patterns like tartans and florals.
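The morphing quality of a GAN print can be sketched without the network itself: StyleGAN2 maps a latent vector to an image, and walking smoothly between two latent vectors yields frames that morph from one pattern toward another. A toy version with hypothetical three-dimensional latents standing in for the model’s real, much larger ones:

```python
# Toy latent-space interpolation (the real StyleGAN2 generator is a deep
# network with 512-dimensional latents; these 3-number vectors are stand-ins).

def lerp(z_a, z_b, t):
    """Linearly interpolate between two latent vectors at t in [0, 1]."""
    return [a + (b - a) * t for a, b in zip(z_a, z_b)]

tartan_latent = [0.0, 1.0, -0.5]   # hypothetical code for a tartan-like print
floral_latent = [1.0, 0.0, 0.5]    # hypothetical code for a floral-like print

# Five evenly spaced latents: feeding each to the generator would render
# five frames morphing from tartan toward floral.
frames = [lerp(tartan_latent, floral_latent, t / 4) for t in range(5)]
```

Rendering such in-between latents one after another is a common way to get the “living organism” drift between learned patterns.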
Another example of machine learning applied to fashion came in 2018, when Robbie Barrat trained an AI on existing Balenciaga looks to generate a new collection. The growing popularity of websites that use AI to create art (like Artbreeder and DALL·E 2) has generated discourse about the obsolescence of image-makers. “It kind of puts into question the validity of the artists. Are you making art? Is the AI making art?” says Lau. On a lighter note, DALL·E Mini has been blowing up recently as a meme engine. Able to render any scenario you type in, it has had internet users getting Gandalf to do vape tricks.
Tongue-in-cheek as Lau’s collection may be, it takes meticulous work. First, the avatars are created in MetaHuman Creator, and the head and body mesh are combined in Blender. Once the clothes are simulated in CLO3D, Autodesk Maya is used for retopology: simplifying the geometry of the garments so they run more smoothly in real time. Next, details, folds and drapes that would otherwise be lost are sculpted into the model “like clay” using the digital sculpting tool ZBrush. Material textures are added in Substance Painter. The rigging of the models (essentially creating a skeleton so they can be animated) is done in Adobe Mixamo. Finally, the characters are placed into the environments in Unreal Engine for the VR experience; the AR Instagram and Snapchat try-on filters were developed in Spark AR and Lens Studio.
Despite creating clothes digitally, Lau initially makes a physical toile from paper, as “there’s something quite intuitive about working with your hands… I’m so hands-on as far as pattern cutting goes since that’s how I know to create shape.” Moments like these are when you remember that Lau comes from a background of creating physical fashion. They only started creating digitally during their industry placement last year, working at start-up Digital Fashion Framework.
This collection taps into the zeitgeist of the recognisable turned uncanny, like social media’s obsession with liminal spaces. These are transitional spaces like hallways and elevators, or places usually full of life fallen deathly silent: empty train stations, malls, a café at 4am. Similarly, AI-generated images cause discomfort by being at once familiar and unnatural. In 2019, a tweet went viral challenging users to “name one thing in this photo”. At first glance it seems to show recognisable objects, but the longer you look, the less sense it makes. Suspected to have been generated by a neural network trained on real-life objects, the image is detached from reality yet eerily familiar, like something you can’t quite put your finger on, or the moment you realise you’re in a dream.
For Lau, pursuing digital fashion is a bit of a “strategic choice”. “Am I going to be able to do something creative, but also be able to support myself?” they mused. With digital fashion, “there’s opportunity there.” Sitting at the intersection between art and technology, “it’s creative, but also part programming.” The digital space offers you complete creative control. In real life you choose your fabrics, but digitally, you simulate everything, from their surface patterns to the way they move and drape. “It’s not just about the garment itself, it’s this idea of world building. Almost like you’re a God,” Lau laughs.