
Think Fake News Is Bad? Take a Look at Deepfakes
Deepfakers use a form of artificial intelligence (AI) called deep learning to fabricate events, such as a politician making a statement he or she never uttered. Deepfakes depicting events that never happened have already spread on social media platforms like Facebook and YouTube.
Danielle Citron, a Boston University law professor, said, “Deepfake technology is being weaponized against women.”
Porn videos are where the real damage is done, with deepfakes mapping the faces of real female celebrities onto the bodies of porn stars. In September 2019, the AI firm Deeptrace found 15,000 deepfake videos online, 96% of which were pornographic.
Not Just Videos
Deepfake technicians can create photos from scratch that are as convincing as they are fake. For example, “Maisy Kinsley,” a fake Bloomberg journalist, was created with profiles on both LinkedIn and Twitter.
Another fake profile that appeared on LinkedIn was named “Katie Jones.” Jones claimed to work for the Center for Strategic and International Studies but is believed to be part of a foreign spying operation.
Audio is another product of the deepfake community, with the voices of public figures cloned convincingly. In one case, the head of the UK subsidiary of a German energy company was scammed out of nearly £200,000 after fraudsters deepfaked his boss’s voice over the phone.
The Written Word Can Be Deepfaked
AI can produce written words that convincingly mimic human writing. Bad actors can spread their venom in deepfaked articles containing propaganda designed to disrupt social tranquility. OpenAI, a non-profit research company, has created an AI system that can generate a page of coherent text from a short prompt.
The AI can mimic fantasy prose, fabricate celebrity news, or even churn out deepfaked written assignments.
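To give a sense of how accessible this kind of text generation has since become, here is a minimal sketch using the Hugging Face transformers library and OpenAI’s GPT-2 model (which was eventually released publicly). The prompt and settings are illustrative assumptions, not anything taken from the article.

```python
# A minimal sketch of prompt-driven text generation, assuming the
# Hugging Face "transformers" library is installed
# (pip install transformers torch). Prompt is purely illustrative.
from transformers import pipeline

# Load the publicly released GPT-2 model as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "In a statement released this morning, the senator announced"
result = generator(prompt, max_new_tokens=80, num_return_sequences=1)

# The model continues the prompt with fluent, entirely fabricated prose.
print(result[0]["generated_text"])
```

A dozen lines like these are enough to produce paragraphs of plausible-sounding, invented text on demand.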
Scary, right?
As Hao Li, a professor at USC, puts it, “At some point, likely, it’s not going to be possible to detect [AI fakes] at all.”
OpenAI has decided not to release this product for fear the text generator will be misused, and rightfully so. The decision stems from growing concern in the technology community about creating cutting-edge technology without setting limits on how it is used. Malicious actors do not wield advanced technology with the good of humankind at heart; they use it to serve their own greed.
All Is Not Lost
Researchers have recently created tools that detect deepfakes with 90% accuracy, and that is a good thing. But experts believe that, as with computer viruses, every cure will be met by a new hybrid that takes its place. Like cybercrime, the problem is not going away anytime soon.
One technique that helped deepfake detectors spot the fakes exploited the deepfake generators’ failure to reproduce natural human blink patterns; a sketch of that heuristic appears below. But once the detectors’ creators made the blinking issue public, it was not long before deepfake videos had their cloned actors blinking.
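To make that concrete, here is a minimal sketch of the eye-aspect-ratio (EAR) heuristic commonly used for blink detection. The landmark ordering and threshold below are illustrative assumptions; a real detector would extract the landmarks from video frames with a face-tracking library.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Assumes the common 68-point face-landmark ordering: eye[0] and
    eye[3] are the horizontal corners; the rest are the eyelids.
    The EAR drops sharply when the eye closes.
    """
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2):
    """Count blinks as dips of the EAR below an assumed threshold."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not eye_closed:
            eye_closed = True       # eye just closed
        elif ear >= threshold and eye_closed:
            eye_closed = False      # eye reopened: one full blink
            blinks += 1
    return blinks

# Toy demo: a synthetic EAR trace with two dips, i.e. two blinks.
# Real speakers blink every few seconds; early deepfakes often
# produced a suspiciously flat EAR signal with no dips at all.
trace = [0.30, 0.31, 0.12, 0.29, 0.30, 0.10, 0.09, 0.28]
print(count_blinks(trace))  # -> 2
```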
Some researchers keep brainstorming ways to defeat the deepfakers. One idea is to develop programs that automatically watermark and cryptographically identify images at the moment they are taken with a camera; a rough sketch of that idea follows.
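The article does not specify how such watermarking would work. One plausible shape, assumed here purely for illustration, is for the camera to sign each image at capture time so that any later edit is detectable; the key handling below is deliberately simplified.

```python
# A minimal sketch of capture-time image authentication using an HMAC.
# In a real scheme the secret key would live in the camera's secure
# hardware, and verification would use public-key signatures instead.
import hashlib
import hmac

CAMERA_KEY = b"secret-key-stored-in-camera-hardware"  # illustrative only

def sign_image(image_bytes: bytes) -> str:
    """Produce a tag the camera embeds alongside the image at capture."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Check that the image still matches the tag made at capture time."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...raw pixel data from the sensor..."
tag = sign_image(original)

print(verify_image(original, tag))                # True: untouched
print(verify_image(original + b"edit", tag))      # False: altered after capture
```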
Another technique is to use blockchain technology to verify that content comes from trusted sources; a minimal sketch of that approach closes this piece. Let’s hope the deepfake problem can be tackled promptly; otherwise, the credibility of the web is at stake.
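The article leaves the blockchain idea at one sentence. One plausible reading, assumed here, is a tamper-evident chain of content hashes published by a trusted source; the class and field names below are purely illustrative.

```python
# A minimal sketch of a tamper-evident hash chain for published content.
# Real systems would add signatures, timestamps, and distributed consensus.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ContentChain:
    """Each block stores a content hash plus the previous block's hash,
    so rewriting any past entry breaks every hash that follows it."""

    def __init__(self):
        self.blocks = [{"content_hash": sha256(b"genesis"), "prev": ""}]

    def publish(self, content: bytes) -> None:
        prev = self.blocks[-1]
        prev_hash = sha256((prev["content_hash"] + prev["prev"]).encode())
        self.blocks.append({"content_hash": sha256(content), "prev": prev_hash})

    def verify(self, content: bytes) -> bool:
        """Is this exact content registered anywhere in the chain?"""
        return sha256(content) in (b["content_hash"] for b in self.blocks)

chain = ContentChain()
chain.publish(b"official video released by the newsroom")

print(chain.verify(b"official video released by the newsroom"))  # True
print(chain.verify(b"doctored version of the video"))            # False
```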
Sources
Ian Sample. “AI-generated fake videos are becoming more common (and convincing). Here’s why we should be worried.” The Guardian, 13 January 2020.
Kyle Wiggers. “Deepfakes and deep media: A new security battleground.” VentureBeat, 11 February 2020.
Rachel Metz. “This AI is so good that its creators won’t let you use it.” CNN Business, 18 February 2019.
James Vincent. “Deepfake detection algorithms will never be enough: Spotting fakes is just the start of a much bigger battle.” The Verge, 27 June 2019.
Call Chill IT on 1300 726 679