Fake It ’Til You Make It

Flat World Partners
Dec 21, 2018 · 4 min read

Let’s Talk Artificial Intelligence

Nvidia, a California-based technology company that has been speeding ahead in the AI development race, recently launched a new neural network capable of creating synthetic images of human faces that are indistinguishable from the real thing. The technology is based on Generative Adversarial Networks (GANs), which pit two neural networks against each other: a generator that fabricates images and a discriminator that tries to tell them apart from real photographs, each improving through the contest. Given a sufficient number of photographs to train on, GANs can fabricate faces, or even common objects like cars. GANs were first introduced by Ian Goodfellow and his team of researchers a mere four years ago, and since then the field has seen an explosion of interest and performance. The first GANs produced images of simple objects, but they contained artifacts that made them easy to distinguish from real images. The human brain, however, is exceptional in its ability to read and process human faces, which underscores the achievement of the Nvidia researchers.

Using a technique known as “style transfer” (think Snapchat filters, on steroids), the model (powered by eight Tesla GPUs) took under a week to be trained to produce these seamless face variations, of sufficient quality to pass as real faces. Whilst these images are currently static “photographs”, video is not far behind, with hyper-realistic full-body simulations on the horizon. “Deepfakes” refers to this whole category of fake media produced with deep learning; it includes these fake images as well as fake voices such as those produced by Lyrebird.
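The adversarial setup described above can be sketched in a few lines of code. The toy example below is purely illustrative (it is not Nvidia's model, and all parameter names and hyperparameters are our own assumptions): a tiny linear “generator” learns to mimic one-dimensional data drawn from a Gaussian, while a logistic “discriminator” tries to tell real samples from fakes, each nudging the other forward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GAN: generator G(z) = a*z + b tries to mimic real data ~ N(4, 1.5);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64     # illustrative hyperparameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    real = rng.normal(4.0, 1.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: non-saturating loss, gradient ascent on log D(fake)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g = (1 - d_fake) * w           # d(log D(fake)) / d(fake)
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

zs = rng.normal(0.0, 1.0, 1000)
print(f"mean of generated samples: {np.mean(a * zs + b):.2f} (real mean is 4.0)")
```

Note that this one-dimensional discriminator can only push the generator's output toward the right average; real GANs use deep networks on both sides, which is what lets them capture the full texture of a face rather than a single statistic.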

Beyond using these photorealistic human renderings in gaming and movies (no more cliff-climbing required, Tom Cruise), fake news might be about to get a lot more fake. A picture may be worth a thousand words, but there are fears that this technology will erode the trust and conviction currently provided by pictorial evidence, posing a threat to communications and media, enabling human rights abuses, and fueling political rumor-mongering.

Whilst everyone from governments to human rights investigators to social media companies attempts to develop methods for assessing the truth of digital evidence, it has become something of a methodological arms race. If the historical emergence of cybersecurity is any indication, identifying fakes cannot be left up to the end user (thanks, social media), nor achieved solely through technical prevention by online platforms. Protecting against fraudulent use of this technology will require both user education and technical countermeasures, as well as national-level strategies to counter fake news.

Hayley Mole, Associate & Casey Conger, Data Scientist

Although it’s been confirmed that altered videos did not play a part in this year’s U.S. midterm elections, the same cannot be said for the imminent future. Misinformation can play a vital part in voters’ decision-making, and researchers are betting on a true deepfake political scandal any day now.

We’ve got artificial faces down, but what about artificial voices? Lyrebird lets you create a vocal avatar that reproduces your own voice (aimed at helping those who have lost their voice to disease or injury), but it will also let you imitate the voice of someone else.

The Flat World Partners team undertook a book swap of epic proportions at our holiday party this week, inspired by The Office’s version of Secret Santa. Among the most coveted titles: Becoming (Michelle Obama), Skin in the Game (Nassim Nicholas Taleb), The Giving Tree (Shel Silverstein) and Surely You’re Joking, Mr. Feynman (Richard P. Feynman).

This newsletter is intended solely for informational purposes and should not be construed as investment or trading advice; it is not a solicitation or recommendation to buy, sell, or hold any securities mentioned. Any reproduction or distribution of this document, in whole or in part, or disclosure of its contents without the prior written consent of Flat World Partners is prohibited.
