Deepfakes Getting Easier to Make

A new research paper shows how an algorithm that creates fake video clips of real people can be trained on a single photo

To make a convincing deepfake — an AI-generated fake of a video or audio clip — you usually need a neural model trained on a lot of reference material. Generally, the larger your dataset of photos, video, or sound, the more eerily accurate the result will be. But now, researchers at Samsung’s AI Center have devised a method to train a model to animate a face from an extremely limited dataset: just a single photo. The results are surprisingly good.

The researchers achieve this effect (as spotted by Motherboard) by training their algorithm on “landmark” facial features (the general shape of the face, the eyes, the shape of the mouth, and more) scraped from a public repository of 7,000 images of celebrities gathered from YouTube.

Read More at The Verge
