OPINION

I Let AI Mimic My Voice, Not Speak My Mind


By Larry Bell | Monday, 05 January 2026 12:03 PM EST

AI's expanding omnipresence in our lives is a double-edged sword, presenting both remarkable opportunities and significant privacy and security challenges.

Having mused and written on numerous occasions about disruptive AI influences on the future of humanity, work roles, education, and more, I found the topic resonating on a deeper personal level when I heard AI speak back to me not only in my own voice, but also replicating my characteristic phrasing, speech cadence, and context-appropriate expressive intonations.

Remarkably, that AI mimicry required only a few minutes of recordings of me reading brief text samples the program provided.

Sure, I wrote that previously unspoken text myself, just as it might have been anything casually typed or recorded about virtually any other subject.

But it’s spooky to hear those words delivered exactly as if I had spoken them aloud, albeit often more perfectly phrased, without any empty thought pauses.

It immediately occurred to me that anyone could have supplied that text, or words written by anyone else, leading audio audiences familiar with a particular individual's voice and speaking pattern to believe they were hearing a live recording rather than a remarkably convincing cloned digital rendition.

In this instance, a brief summary of connected themes from my 14 published books, the AI had my full permission.

But what if I hadn’t solicited that assistance?

Could someone else using AI just as readily have secretly cloned my voice from a telephone conversation, posing as a bogus marketing solicitor or pollster for example, to impersonate me (or you) without consent for nefarious purposes?

It’s been done, and there are no insurmountable barriers preventing it from becoming more commonplace.

Cloning programs are readily available to anyone at very low cost.

Four of the services (ElevenLabs, Speechify, PlayHT and Lovo) simply require checking a box affirming that the person whose voice is being copied has given authorization, then allow the registrant to upload audio of that individual speaking, taken from TikTok or YouTube, for video distribution.

Of six services available to the public via websites, only two charge fees, of $5 or less, to create custom voice clones; the others are free.

There are reportedly few reliable ways to distinguish synthetic speech clones from the real thing, with even deepfake detection programs subject to confusion since their technology lags behind cloning algorithm advancements.

What might personally go very wrong with being cloned?

One type of nightmare would be a so-called grandparent scam where a criminal makes a fake phone call impersonating a family member claiming to have been kidnapped, arrested or injured.

Well-known public figures and their works get scammed as well, as when others release synthetic versions of their voices without permission, which occurred with a streamed 2023 song falsely attributed to Drake and the Weeknd.

Last February, the Federal Communications Commission outlawed robocalls that contain non-authentic AI-generated voices used to scam people and mislead voters with prerecorded messages, under the terms of the 1991 Telephone Consumer Protection Act.

As Patrick Traynor, a University of Florida professor who specializes in computer science and telephone networks points out, "Machine learning is good at telling you about something it’s seen before [an individual's voice for example], but it’s not so good about reasoning about things it hasn't seen."

Generative AI is also making remarkable strides in mimicking, matching and automating visual appearance with verbal video impersonations.

Lindsay Gorman, who studies emerging technologies and disinformation at the German Marshall Fund's Alliance for Securing Democracy, observes that deepfake production has progressed in imitating realistically convincing eye movements, avoiding tipoffs such as too much staring or too much blinking.

Nevertheless, even as such recently unfathomable AI capabilities can clandestinely impersonate us for illicit agendas, the same amazing technology enables us to more efficiently and effectively project fuller, more nuanced aspects of our individual personas and thoughts to extended virtual media audiences than printed words alone.

Whereas it’s apparent that television and social media have left far fewer people reading books, audio versions seem to have gained audiences who listen while driving or doing other quiet tasks.

I have accompanied a couple of my own books with audio versions as well, hiring a professional studio narrator whose voice is far more expressively resonant than my own, a process that required extra time and costs which more personal AI-narrated versions can now avoid.

Similarly, I have begun using AI narration, sometimes in my own voice, for audio versions of technical reports which can be conveniently posted with illustrations on professional websites and electronic media platforms including YouTube videos.

Even more, AI can translate and communicate that text into seemingly limitless languages, extending that personal realm of thought and expression to global audiences.

As Barry Chudakov, principal at Sertain Research, projects, "The embedding of AI will be both a convenience and a point of contention as we enhance and entwine our lives with its hidden presence."

So yes, I've determined that lending my voice is perfectly OK with me so long as it doesn't change my meaning or allow algorithmic imitators to pretend to make up my mind.

Larry Bell is an endowed professor of space architecture at the University of Houston where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022). Read Larry Bell's Reports — More Here.

© 2026 Newsmax. All rights reserved.

