
Torrenzano: Electorates' Trust Could Die on Hill of AI

By Richard Torrenzano   |   Wednesday, 30 April 2025 12:09 PM EDT

Our Politics Struggle Against an Age of Deepfakes

As the U.S. barrels toward the 2026 midterms and the 2028 presidential race, the most dangerous threat to democracy may not come only from hostile nations or bad policies but from "perfectly believable lies."

Thanks to artificial intelligence (AI), it’s now possible to create audio, video and images so real they show public figures doing or saying things they never did — and make millions believe it.

Deepfakes aren’t just digital forgeries.

They’re weapons, and they are increasingly deployed in political warfare where truth is optional and outrage is instant.

This Isn’t the Collapse of Truth, It’s the Corrosion of Trust

And unlike truth, which can often be restored with facts, trust is harder to repair once lost.

Back in 1710, Jonathan Swift warned that "falsehood flies, and the truth comes limping after it" — a line that now reads like a prophecy for the deepfake era.

Deepfakes use machine learning to mimic voices, clone faces and fabricate scenes.

They can be deployed for satire or art—but in politics, they are increasingly used to mislead, manipulate and destabilize.

  • Audio deepfakes replicate a person’s voice, often from just seconds of recording. Scammers use these to impersonate business leaders and celebrities. In the political arena, they can be deployed as fake phone calls, endorsements or confessions—almost impossible to verify in real time.
  • Video deepfakes depict public figures saying or doing things they never did. A single fabricated clip could derail a campaign, ignite violence or swing an election—before the truth catches up.
  • Photographic deepfakes create or alter images so convincingly they can simulate crimes, rallies, documents or physical evidence. In the wrong hands, they are extraordinary propaganda tools.

Deepfakes Have Already Entered the Race: The Outrage Is Real, and It Can Reshape Polls Instantly

In February 2024, a robocall in New Hampshire used a deepfake version of President Biden’s voice to tell Democrats not to vote in the primary.

It was a targeted effort to suppress turnout, produced for as little as a few hundred dollars.

This wasn’t a prank. It was a low-cost, high-impact political attack… and it worked well enough to raise alarms. But it is only the beginning.

What happens in 2026 if a video emerges days before Election Day showing a Senate candidate making racist comments — only for it to be proven fake after the polls close?

What if a deepfake video shows a prominent lawmaker endorsing a controversial foreign policy stance… or being bribed by donors?

The capacity to fabricate events, scandal, or betrayal is in the hands of anyone with a graphics card and an agenda.

Today’s voting public and political institutions are not ready for this. 

Most major campaigns have expanded media monitoring and assembled "rapid response" teams. But these are defensive tools from a bygone era, built for yesterday's media cycles, not for viral synthetic lies. In the age of AI-fueled deception, reactive strategies are too slow, too soft… and fatally too late.

Political teams still rely on systems built to flag misleading headlines and out-of-context quotes or soundbites — not fully synthetic video and audio designed to deceive at scale.

Technology has advanced, but the strategy hasn’t. And in a fight where perception outruns proof, delay isn’t just risky — it’s costly.

Malicious actors are often the first to weaponize emerging technologies, long before institutions can defend against them.

Despite growing awareness, the nation’s political infrastructure remains dangerously underprepared and unarmed.

There are still no federal standards for verifying political content, no structured partnerships between campaigns and tech platforms to detect or remove deepfakes… and few campaigns have seasoned deepfake experts ready when it matters most.

Voters, meanwhile, are flying blind. Public education on synthetic media is virtually nonexistent, leaving millions with no tools to distinguish real from fake.

And when deepfakes are caught, the legal system lags far behind. U.S. law offers limited protection unless the content involves defamation or commercial harm. Most political deepfakes are designed to slip through the cracks—engineered to be effective, deniable and just legal enough to survive.

Pope in a Puff Coat, Swift in a Scandal and Your CEO Next in Line

In the last year, some of the most viral deepfakes have come from the entertainment and cultural sphere: Pope Francis in a Balenciaga-style puffy coat, Taylor Swift in AI-generated pornography, fake images of Katy Perry at the 2024 Met Gala and a deepfake of rapper Drake feuding with another artist using synthetic lyrics.

Most recently, more than 200 musicians, including Perry, Billie Eilish and Bon Jovi, signed an open letter condemning AI's exploitation of their voices and likenesses and its sabotage of human creativity.

In Hollywood, these incidents grabbed headlines, created synthetic media shocks and racked up millions of views — but none altered elections, legislation or national security.

In politics, the same technology destabilizes strategically.

Combating deepfakes is not a messaging problem. It is a communications challenge, but one that cannot be managed with clever tweets or last-minute news briefings.

It’s a challenge to the foundation of political reality—one that demands foresight, planning and people who understand how synthetic media is made, how it spreads and how to shut it down. This isn’t about spin control. It’s about preventing a lie from becoming the story before the truth even gets a hearing.

Finally, Congress may be stepping in. On February 12, the Senate unanimously passed the bipartisan TAKE IT DOWN Act, which criminalizes non-consensual intimate imagery and targets deepfake-driven abuse, marking the first AI-related bill to advance in Congress. Thirteen AI-related bills have been introduced, but rather than sweeping regulation, Congress is taking a piecemeal approach, aiming at harms, not frameworks.

Nonetheless, if lawmakers fail to move faster, the next national crisis won’t come from a real event — it’ll come from a fake one that millions believe.


Richard Torrenzano is chief executive of The Torrenzano Group. For nearly a decade, he was a member of the New York Stock Exchange management (policy) and executive (operations) committees. He is co-author of the bestselling Digital Assassination; his new book, Command the Conversation: Next Level Communications Techniques, will be launched in May.

© 2026 Newsmax. All rights reserved.

