With the 2024 elections a little over a year away and the Iowa caucuses in view, experts are warning of the potential impacts of deepfake technology on the 2024 U.S. elections.
Deepfakes — artificial intelligence-powered manipulated media of individuals saying or doing things they never actually did — are on the rise.
Due to advancements in technology, they are becoming harder to detect.
"This is the year that photographic and video evidence ceases to work, and our institutions have not caught up to that yet," Aza Raskin, co-founder of the Center for Humane Technology, said during Summit at Sea in May 2023. "This is the year that you do not know when you talk to someone that you're actually talking to someone, even if you have video. Even if you have audio."
In May 2023, Turkish presidential candidate Kemal Kılıçdaroğlu reportedly faulted Russia for releasing deepfakes just days before the Turkish presidential election.
Another Turkish presidential candidate, Muharrem Ince, reportedly withdrew from the same election after an alleged sex tape surfaced, which he claims is a deepfake.
Ince is reported to have adamantly denounced the video, calling it "slander" and "not real," and adding, "What I have seen in these last 45 days, I have not seen in 45 years."
In March 2022, nearly a month after Russia invaded Ukraine, a deepfake surfaced of Ukrainian President Volodymyr Zelenskyy urging Ukrainian soldiers to surrender to Russian forces.
In June 2023, a deepfake of Russian President Vladimir Putin declaring martial law reportedly aired on Russian radio and television, and in Venezuela, AI-generated videos are reportedly being used to disseminate political propaganda.
It's not only A.I.-altered video that raises concerns for elections, but deepfake audio as well. In India, K. Annamalai, state head of India's ruling party, reportedly released two audio clips in which Palanivel Thiagarajan, a politician from Tamil Nadu, allegedly accuses his own party of corruption and praises his opponent.
Thiagarajan responded via Twitter, referencing a deepfaked song that used the voices of popular music artists Drake and The Weeknd.
"In case anyone thinks fabricating a 26-second low-quality clip is hard these days — an example of a whole song that got ~16 MILLION views on multiple platforms, which turned out to be fabricated with 'AI-generated vocals.'
"NEVER trust an Audio clip without an attributable source."
Only three seconds of bona fide audio are required to create a voice clone, and because of rapid advances in A.I. technology, this type of synthetic media is also becoming harder to detect.
Research published in PLOS ONE, a peer-reviewed open-access scientific journal published by the Public Library of Science (PLOS), indicates that deepfake audio deceives listeners almost 25% of the time.
More concerning, deception rates may increase when listeners don't expect to encounter digitally manipulated media.
Experts are also warning of the converse: media that is, in fact, genuine being labeled as deepfakes.
In Myanmar, real evidence of human rights violations was reportedly challenged by the army, which dismissed it as fake.
The two audio clips released by K. Annamalai reportedly underwent three independent rounds of forensic analysis by the Deepfakes Rapid Response Force, an initiative by WITNESS that allows journalists and credentialed fact-checkers to "escalate cases of suspected deepfakes and get a timely assessment on the authenticity or origin of the content."
The analysts, leading media forensics professionals and deepfake experts, were divided on the first clip, either finding its quality insufficient to draw a conclusion or concluding it was "very likely fake."
However, they unanimously deemed the second clip authentic.
Considering the rapid development and widespread abuse of deepfakes and their impact abroad, it's apparent why concerns exist over their use to interfere in the U.S. political process.
In November 2022, it was reported that the Pentagon contracted DeepMedia, a Silicon Valley-based startup, to leverage its deepfake detection technologies.
DeepMedia is reportedly to provide "rapid and accurate deepfake detection to counter Russian and Chinese information warfare."
DeepMedia boasts that its A.I. can analyze both face and voice through detection algorithms across 50 different languages and determine digital manipulation with 99.5% accuracy.
As elections approach in the U.S., Americans need to be on guard against deepfakes: educated about their existence, the ways media can be manipulated, and the risks they pose.
Verify and fact-check information before sharing.
Question the content's source, context, and consistency before accepting information as true, even if there is video and audio. Deepfake detection tools like those developed by DeepMedia and available through WITNESS' Deepfakes Rapid Response Force will be needed for the 2024 elections and beyond.
We live in a world in which digital deceptions exist and A.I. and deepfakes have completely changed the game.
Remain alert and aware.
V. Venesulia Carr is a former United States Marine, CEO of Vicar Group, LLC, and host of "Down to Business with V.," a television show focused on cyberawareness and cybersafety. She is a speaker, consultant, and news commentator providing insight on technology, cybersecurity, fraud mitigation, national security, and military affairs.
© 2024 Newsmax. All rights reserved.