Will 2020 Be the Year for “Deepfakes”?

January 06, 2020


What exactly is a “Deepfake”?

Deepfakes are videos created using artificial intelligence (AI). As their name suggests, these videos depict people saying or doing things they never actually said or did. Deepfakes first appeared in pornography; today they are used for entertainment and, more dangerously, as propaganda and political weapons.

One example of how manipulative these videos can be is the Deepfake of former United States President Barack Obama, created by director and comedian Jordan Peele in collaboration with BuzzFeed. The video was made specifically to show how dangerous a Deepfake could be.

How are they made?

There are free-to-download apps, such as FakeApp and DeepFaceLab, designed specifically for creating Deepfakes. However, producing a convincing one takes considerable time and effort, even for tech experts.
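For readers curious about what happens under the hood, the sketch below illustrates the core idea behind face-swap tools of this kind: a shared encoder paired with one decoder per person, so a face encoded from person A can be decoded as person B. This is a minimal, illustrative sketch in PyTorch; the layer sizes, image resolution, placeholder data, and training details are assumptions, not the actual FakeApp or DeepFaceLab code.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind classic
# face-swap ("Deepfake") tools. Sizes and data here are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),  nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                           # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

# Training sketch: each decoder learns to reconstruct its own person's face
# crops through the *shared* encoder (random tensors stand in for real data).
faces_a, faces_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
opt.zero_grad(); loss.backward(); opt.step()

# The "swap": encode person A's face, then decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

In real tools this loop runs over thousands of aligned face crops for many hours, which is why convincing results still demand so much effort.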

Why are they made?

The main reason for creating such videos is to spread false information and mislead people; their creators usually have an agenda to promote. As Deepfakes spread across social media and entertainment platforms and slowly become more common, they are also becoming harder to spot, which creates real problems for governments and the tech industry. For these reasons, Deepfakes have caught the attention of American politicians. In July 2019, Adam Schiff, Chairman of the U.S. House Intelligence Committee, wrote letters to the CEOs of Google, Facebook, and Twitter, asking about their formal policies on Deepfakes and the technologies they are developing to detect them.

However, some people believe that introducing Deepfake-style features into social media for entertainment will raise awareness of the technology and, in turn, make users less likely to instantly believe everything they see online.

How can they be spotted?

Many tech companies are developing sophisticated algorithms to spot Deepfakes, since it can be hard for humans to do so. One example is Adobe, the maker of Photoshop, which partnered with researchers at the University of California, Berkeley to train AI to recognise Deepfakes.
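As a rough illustration of how such detection systems are commonly framed (this is a generic sketch, not Adobe’s or Berkeley’s actual method), the snippet below trains a binary classifier to label face crops as real or manipulated. The model choice, image size, and placeholder data are assumptions.

```python
# Generic sketch of Deepfake detection as a real-vs-fake image classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)           # a pretrained backbone is typical in practice
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real (0) vs. fake (1)

# Random tensors stand in for batches of labelled face crops.
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = nn.functional.cross_entropy(model(frames), labels)
opt.zero_grad(); loss.backward(); opt.step()

# At inference time, per-frame fake probabilities are averaged over a clip.
with torch.no_grad():
    fake_prob = model(frames).softmax(dim=1)[:, 1].mean()
```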

Social media platforms working on new ‘Deepfake’-style features

At the very beginning of 2020, Snapchat bought AI Factory, a computer vision start-up, for $166 million. This is the same company Snapchat recently worked with to launch its Cameo feature, which lets users overlay their face onto a selection of pre-made scenes. The acquisition suggests Snapchat is looking to push further into ‘Deepfake’-style features.

TikTok, meanwhile, is reportedly working on a Deepfake-style feature that is more direct. After asking users to take a multi-angle, biometric scan of their face, the feature will let them insert their image into a selection of pre-recorded videos. TikTok’s new tool appears similar to ZAO, a Chinese video-editing app that lets users place their pictures into a series of movie scenes. ZAO went viral last September despite the security and privacy concerns raised around it; notably, it had previously been blocked by China’s WeChat for presenting “security risks.”

Considering all the possible threats Deepfake videos could pose, it may seem a little odd that Snapchat and TikTok are each building their own variation of the feature, especially when Google, Twitter, and Facebook are all independently conducting research on ways to detect Deepfakes.

Snapchat’s Cameo feature is more cartoonish and leans on animation rather than realism, so it is hard to raise serious concerns about it. TikTok’s variation of the feature appears more worrying. Concerns centre on the biometric data uploaded to TikTok, and particularly to Douyin, the Chinese version of the app, where such data could be accessed by the Chinese government for identification and tracking purposes.

These concerns follow criticism by human rights groups of China’s advanced surveillance measures, with Chinese authorities reportedly using digital face scans to track and control the activities of Uighur Muslims in the country. The country’s surveillance network is said to comprise over 170 million CCTV cameras equipped with advanced facial recognition capabilities, the equivalent of one camera for every 12 people in China.

However, ByteDance, TikTok’s parent company, denied any intention of introducing a Deepfake tool to the app, telling TechCrunch: “This is definitely not a function in TikTok, nor do we have any intention of introducing it. I think what you may be looking at is something slated for Douyin – your email includes screenshots that would be from Douyin and a privacy policy that mentions Douyin.”

Stay tuned for the latest social media updates and check out #SocialSpeak every week!
