OPINION

If AI Can Replicate Our Words, It Can Also Lie

A visitor watches an artificial intelligence sign on an animated screen. (Josep Lago/AFP via Getty Images)

By James Hirsen | Wednesday, 10 May 2023 04:27 PM EDT

Plenty of discussion has been taking place about artificial intelligence (AI) and its existing applications: the dangers it poses, its positives and negatives, and its possible misuses and abuses.

However, a problem has popped up that seems to be causing a real stir.

It turns out that AI can actually lie.

Tech experts refer to inaccuracies and falsehoods produced by AI as "hallucinations."

The term typically describes incidents in which AI provides answers to problems, but the answers contain fictitious material that was not part of the original training data used to build the model.

Tech experts don't actually understand AI's hallucination phenomenon.

When AI first became available to the public in the form of so-called large language models (LLMs), better known as chatbots, hallucinations simply surfaced on their own.

Early users of LLMs noticed that the models seemed to "sociopathically" embed plausible-sounding fabrications in the content they generated.

A number of experts have described an AI hallucination as a "very impressive-sounding answer that's just dead wrong."

An early example of the phenomenon occurred in August 2022.

Facebook's owner Meta warned that its newly released LLM, BlenderBot 3, was prone to hallucinations, which Meta described as "confident statements that are not true."

In November 2022, Meta unveiled a demo of another LLM, Galactica, which also came with the following warning: "Outputs may be unreliable! Language Models are prone to hallucinate text."

Within days, Meta withdrew Galactica.

In December 2022, OpenAI released a beta version of its LLM, ChatGPT, to the public. It is the AI that is most widely used and the one with which the public is most familiar.

Wharton Associate Professor Ethan Mollick seemed to humanize ChatGPT when he compared the LLM to an "omniscient, eager-to-please intern who sometimes lies to you."

Lies were exactly what was generated when the Fast Company website attempted to use ChatGPT to write a news piece on Tesla. In composing the article, ChatGPT simply made up fake financial data.

When CNBC asked ChatGPT for the lyrics to the song "Ballad of Dwight Fry," the bot, instead of supplying the actual lyrics, provided its own hallucinated ones.

A top Google executive recently stated that reducing AI hallucinations is a central task for Bard, Google's competitor to ChatGPT.

Senior Vice President of Google Prabhakar Raghavan described an AI hallucination as occurring when the technology "expresses itself in such a way that a machine provides a convincing but completely made-up answer."

The executive stressed that one of the fundamental tasks of Google's AI project is to keep the hallucination phenomenon to a minimum.

In fact, when Google's parent company Alphabet Inc. first introduced Bard, the software shared inaccurate information in a promotional video. The gaffe cost the company $100 billion in market value.

In a recent "60 Minutes" interview, Google CEO Sundar Pichai acknowledged that AI hallucinations remain a mystery.

"No one in the field has yet solved the hallucination problems," Pichai said.

Admitting that the phenomenon is very widespread in the AI world, he stated, "All models do have this as an issue."

When the subject of the potential spread of disinformation was brought up, Pichai said, "AI will challenge that in a deeper way. The scale of this problem will be much bigger."

He noted that there are even additional problems with combinations of false text, images, and even "deep fake" videos, warning that "on a societal scale, you know, it can cause a lot of harm."

Twitter and Tesla owner Elon Musk recently alluded to the potential harm that AI poses to the political process.

In an appearance on Tucker Carlson's prior Fox show, Musk said, "If a technology is inadvertently or intentionally misrepresenting certain viewpoints, that presents a potential opportunity to mislead users about actual facts about events, positions of individuals, or their reputations more broadly speaking." 

Musk then gave his perspective, taking into account the intellectual prowess of AI.

He asked, "If AI's smart enough, are they using the tool or is the tool using them?"

The answer is yes.

James Hirsen, J.D., M.A. in media psychology, is a New York Times best-selling author, media analyst and law professor. Visit Newsmax TV Hollywood. Read James Hirsen's Reports — More Here.
