In my many years of covering business and technology, never have I heard so many scientists and tech executives express their fears so openly regarding a new technology.
Artificial Intelligence (AI) could get too smart for our own good, and government is ill-equipped to contain this risk effectively, or at all.
So, in the meantime, it is up to us, personally: users (and buyers!) beware.
Many of us already know, painfully, that AI chatbots can get things wrong and show blatant political bias: a product of garbage in, garbage out.
We will be hearing a lot about these dangers, and government officials, politicians, and academics will blame it all on Big Tech, the purveyors of AI, and the tech itself.
But if you dive into the AI depths and deploy it at work or in your life and it goes awry, it is no one’s fault but your own.
This stuff is far from ready for prime time.
Government efforts to regulate AI already are behind the technology’s rapid advances, as I noted in a previous column here.
At the least, Congress ought to ban new AI tech from government systems, the electricity grid, nuclear missile silos, and other mission-critical infrastructure, as we discussed recently on Newsmax's "Wake Up America."
There used to be an obstacle to technology adoption that the IBMers of old called the FUD factor: fear, uncertainty, and doubt. But FUD has faded. We the people now welcome all tech, no matter the consequences, and we blithely accept its imperfections, corrections, and revision updates.
One new survey, from a firm called Tidio.net, shows one in six people say they would let ChatGPT write their wedding speech. Some 70% of respondents believe ChatGPT, from Microsoft-backed OpenAI, will replace Google in search; 86% say it could be used to control the population.
Yet, 60% of the people surveyed say they want ChatGPT to be allowed to give them medical advice.
So, ChatGPT might become my dictator, but please, sir, give me lifesaving medical advice, too.
Are we high? It turns out 58% of the people Tidio surveyed were millennials, born from the early 1980s to the late 1990s and now the biggest part of the workforce, and they say, "Bring it on!"
Do so at your own peril.
The ChatGPT AI platform, built by Microsoft-backed OpenAI, already shows signs of programmed-in liberal bias, if only because it knows only what it reads in the mostly liberal media outlets it peruses.
When I asked ChatGPT-3 to tell me why President Joe Biden is the worst president in history, it told me it is unable to answer a question like that, and it said Biden has been in office only a limited time.
And, gee, how do you define "worst"?
How do you define "limited"?
Asked the same question about President Trump, ChatGPT-3 instantly listed five reasons.
But that might be just a matter of opinion. So I tested ChatGPT-3 on facts that I know better than anyone else: I asked it to write a biography entry for me. It came back with a 330-word bio containing a total of 20 factual errors.
And not just nitpicky stuff. We are talking scandalous stuff about me personally, all of it false. ChatGPT-3 got my birthday wrong and had me five years older than I am, which is downright offensive!
It had me born in New York and raised in Detroit!
Wrong! Miami!
It named the wrong college (I went to the University of Florida, not the University of Michigan) and the wrong degree, was off by nine years on when I started at CNBC, and wrongly said I was a strategist at The New York Times and TheStreet.com.
This writer has never worked for either of them!
ChatGPT-3 was wrong as well about my having three kids (I have one). And get this: under the header "controversies," my ChatGPT-3 bio cites a flap that never happened when I was an anchor at Fox Business; says I took down a tweet in a clash over the #MeToo movement; and claims Erin Burnett (now at CNN) sued me for making false statements about her in a blog post when we both worked at CNBC.
None of the above ever happened, yet ChatGPT-3 made it all up and put it into my bio. AI researchers have a term for when an AI bot makes stuff up out of nowhere: "hallucination."
It's beyond disturbing that this happens often enough for the experts to have come up with a fancy term for it. Maybe this is why they call it artificial intelligence.
The new(er) version of ChatGPT is ChatGPT-4. Film at 11.
Dennis Kneale is a writer and media strategist in New York and host of the podcast, "What's Bugging Me." Previously, he was an anchor at CNBC and at Fox Business Network, after serving as a senior editor at The Wall Street Journal and managing editor of Forbes. Read Dennis Kneale's reports — More Here.
© 2026 Newsmax. All rights reserved.