Three Reasons Why Canadians Should Know About Deepfakes Now

Post by Yuan Stevens
Tuesday, September 17, 2019
Posted in Latest News

On September 5, 2019, Facebook’s Chief Technology Officer, Mike Schroepfer, announced that the company would help launch a challenge to detect deepfakes.

What are “deepfakes”? You may have heard about them in the last two years because Nicolas Cage’s face has begun appearing in clips of movies and television shows he’s never actually starred in, such as Raiders of the Lost Ark or the 1990s hit sitcom Friends. Or perhaps you’ve seen the video by director Jordan Peele in which Barack Obama makes an uncannily realistic appearance delivering a PSA about fake news. 

“We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time,” says the ventriloquized Obama in the video, “even if they would never say those things.” 

As you may have deduced, deepfakes are videos that depict people doing or saying fictional things. Using AI techniques involving machine learning and generative adversarial networks (GANs), the creators of deepfake videos superimpose one person’s face onto another, or make it look like a person did something they never actually did.
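For readers curious about the underlying technique, the sketch below is a minimal, illustrative GAN training loop written in PyTorch: a generator learns to produce images that a discriminator can no longer tell apart from real ones. The network sizes, toy data, and training settings here are placeholders for illustration only, not a real face-swapping pipeline.

```python
# Minimal GAN training-loop sketch (PyTorch). Illustrative only: toy data
# and tiny networks stand in for the much larger models used for deepfakes.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical sizes for illustration

generator = nn.Sequential(            # maps random noise -> fake "image"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(        # maps an "image" -> probability it is real
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):                           # toy training loop
    real = torch.rand(32, image_dim) * 2 - 1      # stand-in for real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to label real images 1 and fakes 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator (fakes labelled 1).
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The adversarial back-and-forth in this loop is what makes the technique powerful: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones.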

Deepfakes are important to know about for at least three reasons. 

The first is that deepfakes have already had a tremendous impact on the people, and particularly the women, whose faces have been superimposed onto clips of pornography. Celebrity actresses were particularly targeted in 2018, when the likes of Emma Watson and Natalie Portman appeared, without their consent, in doctored sexually explicit videos shared en masse online. It is worth remembering that the first examples of deepfakes involved manipulated depictions of women’s bodies, again without their consent, and the results can be devastating for those involved.

The second reason concerns political interference. Deepfakes also pose significant challenges to keeping media ecosystems free of erroneous information. A quick online search for “politics” and “deepfakes” in 2019 turns up news articles galore on the intersection of deepfakes, elections, and the possibility of misinformation. This is because such manipulated (yet often realistic) video material clearly has the power to sway voter perceptions of a given politician’s views and influence how a person votes at election time.

Here in Canada, the CBC reported in June 2019 that deepfakes of Canadian politicians had already emerged online. A person using the moniker “FancyScientician” told the CBC that their videos of Conservative party leader Andrew Scheer and Ontario premier Doug Ford were only meant to prompt “learning and laughter.” These videos nonetheless demonstrate the ease with which such image-based material online could mislead voters in the 2019 federal election.

Lastly, it is unclear whether Canada’s current laws adequately combat the harms engendered by deepfakes. Suzie Dunn, an expert on law, gender, and equality and a speaker on the 2019 CIAJ Annual Conference student panel, has highlighted that Canadian laws on defamation, criminal harassment, and identity theft may allow for individual recourse in response to deepfakes, but only in certain contexts.

A research paper published by Canada’s Library of Parliament shows increasing awareness of the impact of deepfakes on the Canadian political climate. The paper suggests that some Canadian election laws might capture this issue: for example, deepfakes may constitute the publication of “false statements” to affect election results (s. 91(1)); the distribution, transmission, or publication of misleading material (s. 481(1)); or the impersonation of certain election officials (s. 480.1(1)). It nonetheless remains unclear whether these laws will be sufficient to prevent malicious actors from using deepfakes to manipulate election results, including in our upcoming federal election.

Can we rely on tech industry initiatives (such as deepfake detection challenges) or carefully circumscribed election provisions to respond adequately to falsified videos before they wreak significant havoc on individuals or our democratic institutions? I am also honoured to be speaking on the CIAJ Annual Conference student panel this year, and I look forward to exploring these pressing and thorny issues in person in October with all who are interested.


Yuan Stevens is one of the speakers who will participate in CIAJ’s 44th Annual Conference, “The Impact of Artificial Intelligence and Social Media on Legal Institutions,” which will take place in Quebec City from October 16-18, 2019.

About the author

Yuan Stevens

Yuan (you-anne) Stevens is a research consultant specializing in public interest law, emerging technology, and computer security. She currently works as a Research Officer at the Cyberjustice Laboratory, housed at the Faculty of Law of the Université de Montréal, where she examines the impact of artificial intelligence on access to justice for vulnerable populations. She received her B.C.L./LL.B. from McGill University in 2017. One of her current research projects is an ethnographic study of the labour experiences of hackers who participate in crowd-sourced vulnerability disclosure. She serves on the boards of the Open Privacy Research Institute and Head & Hands in Montreal, and she previously worked at the Berkman Klein Center for Internet & Society at Harvard University.