Because this type of AI gets better at its task the more data it sees, it is especially likely to produce realistic videos of politicians, who are constantly on camera. These systems typically work, as researchers have demonstrated, by taking an original video of someone speaking and morphing that person's face into the politician's.

Since the AI has seen a politician’s face in so many combinations of expressions and orientations, it can predict what that face would look like making the same expression as the person in the original video. The system makes thousands of these predictions and stitches them together, generating a new video wearing the politician's face. Companies like Lyrebird are also working to clone a person’s voice, and the audio needed to train that algorithm can likewise be pulled from videos of politicians.
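The frame-by-frame process described above can be sketched in a few lines of Python. This is a minimal illustration, not a real deepfake system: the feature extractor and the "model" below are hypothetical stand-ins for the trained networks such tools actually use.

```python
import numpy as np

def extract_expression(frame):
    # Hypothetical stand-in: summarize the source actor's facial
    # expression in a frame as a small feature vector.
    return frame.mean(axis=(0, 1))

def generate_fake_video(source_frames, model):
    """Predict the target's face for each source expression,
    then stitch the predictions into a new sequence of frames."""
    fake_frames = []
    for frame in source_frames:
        expression = extract_expression(frame)
        # The trained model maps an expression to an image of the
        # target's face making that same expression.
        fake_frames.append(model(expression))
    return np.stack(fake_frames)

# Toy usage: a random "model" and ten 64x64 source frames.
rng = np.random.default_rng(0)
toy_model = lambda expr: np.clip(
    expr[None, None, :] + rng.normal(size=(64, 64, 3)), 0, 255)
source = rng.integers(0, 256, size=(10, 64, 64, 3)).astype(float)
fake = generate_fake_video(source, toy_model)  # one fake frame per source frame
```

The key point the sketch captures is that each output frame is predicted independently from the source actor's expression, which is why artifacts often show up as flicker between frames.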

When it comes to policing this technology, Facebook and Twitter seem unprepared. Dorsey had nothing to say on the subject, and Sandberg said only that Facebook would explore the technology.

“Deepfakes is a new area, and we know people are going to continue to find new [areas of deceptive technology],” Sandberg said. “It’s a combination of investing in technology and investing in people.”

Given Facebook’s struggle to even educate its own moderators to police content on the site, expecting people with little expertise in image authenticity analysis to catch fake video might be a stretch.
