Deepfake

From Hidden Wiki

Deepfake (a portmanteau of "deep learning" and "fake"[1]) is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique called a "generative adversarial network" (GAN).[2] Combining the existing and source videos yields a video that can depict a person or persons saying things or performing actions that never occurred in reality. Such fake videos can be created to, for example, show a person performing sexual acts they never took part in, or to alter a politician's words or gestures so that the person appears to have said something they never did.

Because of these capabilities, deepfakes have been used to create fake celebrity pornographic videos or revenge porn.[3] Deepfakes can also be used to create fake news and malicious hoaxes.[4][5]

Pornography

Deepfake pornography surfaced on the Internet in 2017, particularly on Reddit,[6] and has been banned by sites including Reddit, Twitter, and Pornhub.[7][8][9] In autumn 2017, an anonymous Reddit user under the pseudonym "deepfakes" posted several porn videos on the Internet. The first one that captured attention was the Daisy Ridley deepfake. It was also one of the better-known deepfake videos and featured prominently in several articles. Another was a deepfake simulation of Wonder Woman actress Gal Gadot having sex with her step-brother, while others depicted celebrities such as Emma Watson, Katy Perry, Taylor Swift, and Scarlett Johansson. The scenes were not real, having been created with artificial intelligence, and were debunked a short time later.

As time went on, the Reddit community ironed out many flaws in the faked videos, making it increasingly difficult to distinguish fake from genuine content. Non-pornographic photographs and videos of celebrities, which are readily available online, were used as training data for the software. The deepfake phenomenon was first reported in December 2017 in the technical and scientific section of the magazine Vice, leading to widespread coverage in other media.[10][11]

Scarlett Johansson, a frequent subject of deepfake porn, spoke publicly about the subject to The Washington Post in December 2018. In a prepared statement, she expressed concern about the phenomenon, describing the internet as a "vast wormhole of darkness that eats itself." However, she also stated that she would not attempt to remove any of her deepfakes, believing that they do not affect her public image and that differing laws across countries and the nature of internet culture make any attempt to remove them "a lost cause". She believes that while celebrities like herself are protected by their fame, deepfakes pose a grave threat to women of lesser prominence, whose reputations could be damaged by depiction in involuntary deepfake pornography or revenge porn.[12]

In the United Kingdom, producers of deepfake material can be prosecuted for harassment, but there are calls to make deepfake a specific crime;[13] in the United States, where charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, the notion of a more comprehensive statute has also been discussed.[14]

Politics

Deepfakes have been used to misrepresent well-known politicians on video portals or chatrooms. For example, the face of the Argentine President Mauricio Macri was replaced by the face of Adolf Hitler, and Angela Merkel's face was replaced with Donald Trump's.[15][16] In April 2018, Jordan Peele and Jonah Peretti created a deepfake using Barack Obama as a public service announcement about the danger of deepfakes.[17] In January 2019, Fox television affiliate KCPQ aired a deepfake of Trump during his Oval Office address, mocking his appearance and skin color.[18]

Deepfake software

In January 2018, a desktop application called FakeApp was launched. The app allows users to easily create and share videos with faces swapped. It uses an artificial neural network, the power of the graphics processor, and three to four gigabytes of storage space to generate the fake video. To produce detailed results, the program needs a large amount of visual material of the person to be inserted, from which the deep learning algorithm learns which image features have to be exchanged, based on the supplied video sequences and images.
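Tools of this kind are commonly reported to use a shared-encoder autoencoder design: one encoder serves both identities, each person gets their own decoder trained only on that person's images, and the swap is performed by decoding one person's encoding with the other person's decoder. The sketch below illustrates that structure in a deliberately minimal form, using linear maps on toy vectors; all names, dimensions, and data here are invented for illustration and do not reflect FakeApp's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, latent, lr = 16, 4, 0.01

E = rng.normal(0, 0.1, (latent, dim))    # shared encoder (both identities)
D_a = rng.normal(0, 0.1, (dim, latent))  # decoder for person A
D_b = rng.normal(0, 0.1, (dim, latent))  # decoder for person B

# Toy stand-ins for face images: a fixed per-person identity vector
# plus per-image variation (pose, lighting, expression, ...)
id_a = rng.normal(0, 1, dim)
id_b = rng.normal(0, 1, dim)

def recon_loss(x, D):
    """Squared reconstruction error of x through encoder E, decoder D."""
    return float(np.sum((D @ (E @ x) - x) ** 2))

def train_step(x, D):
    """One gradient-descent step on 0.5 * ||D @ E @ x - x||^2,
    updating both the given decoder and the shared encoder."""
    global E
    z = E @ x
    err = D @ z - x
    gD = np.outer(err, z)          # gradient w.r.t. the decoder
    gE = D.T @ np.outer(err, x)    # gradient w.r.t. the shared encoder
    D -= lr * gD                   # in-place: mutates D_a or D_b
    E = E - lr * gE

before = recon_loss(id_a, D_a)
for _ in range(500):
    # Each decoder only ever sees its own person's "images"
    train_step(id_a + rng.normal(0, 0.1, dim), D_a)
    train_step(id_b + rng.normal(0, 0.1, dim), D_b)
after = recon_loss(id_a, D_a)

# The "swap": encode an image of person A, decode with B's decoder
swapped = D_b @ (E @ id_a)
```

In the real systems the encoder and decoders are deep convolutional networks and the shared latent space comes to represent identity-independent attributes such as pose and expression, which is what makes decoding with the other person's decoder produce a plausible face swap.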

The software builds on Google's machine-learning framework TensorFlow, which was also used for, among other things, the program DeepDream. Celebrities are the main targets of such fake videos, but other people are affected as well.[19][20][21] In August 2018, researchers at the University of California, Berkeley published a paper introducing a fake-dancing app that can create the impression of masterful dancing ability using AI.[22][23]

There are also open-source alternatives to the original FakeApp program, such as DeepFaceLab,[24] FaceSwap (currently hosted on GitHub)[25] and myFakeApp (currently hosted on Bitbucket).[26][27]

Criticisms

Abuses

The Aargauer Zeitung says that the manipulation of images and videos using artificial intelligence could become a dangerous mass phenomenon. However, the falsification of images and videos predates video-editing and image-editing software; what is new in this case is the degree of realism.[15]

It is also possible to use deepfakes for targeted hoaxes and revenge porn.[28][29]

Effects on credibility and authenticity

Another effect of deepfakes is that it can no longer be determined whether content is deliberately manipulated (e.g. satire) or genuine. AI researcher Alex Champandard has said everyone should know how fast things can be corrupted with this technology, and that the problem is not a technical one but one to be solved by trust in information and journalism. The primary pitfall is that humanity could enter an age in which it can no longer be determined whether a medium's content corresponds to the truth.[15]

Internet reaction

Some websites, such as Twitter and Gfycat, announced that they would delete deepfake content and block its publishers. Previously, the chat platform Discord had blocked a chat channel containing fake celebrity porn videos. The pornography website Pornhub also plans to block such content; however, it has been reported that the site has not been enforcing its ban.[30][31] On Reddit, the situation initially remained unclear until the subreddit was suspended on February 7, 2018 for the policy violation of "involuntary pornography".[11][32][33][34] In September 2018, Google added "involuntary synthetic pornographic imagery" to its ban list, allowing anyone to request the blocking of results showing their fake nudes.[35]

References
