OPINION

AI Inbreeding Produces Artificial Idiocy


By Robert J. Marks II, Ph.D., with Jonathan Swindell | Monday, 13 November 2023 12:11 PM EST

Can today's artificial intelligence systems be used to train superior artificial intelligence systems of tomorrow? Can AI write better AI that writes better AI leading to a potentially god-like general artificial intelligence?

Writers like Yuval Noah Harari and Ray Kurzweil think so. But in a recent insightful paper written by collaborators from Oxford, Cambridge, and other prestigious institutions, the evidence is in for generative AI like ChatGPT.

The answer is no. Generative AI giving birth to more AI, like repeated inbreeding, does not get smarter. It degenerates. The inbred AI grows dumber.

Generative AI uses copious quantities of training data from a genre to generate new objects within the genre. For example:

  • Large language models (LLMs) like ChatGPT use language for training.
  • Image generators offered by companies like Midjourney train on images.
  • Models like DALL-E use descriptive text to generate images.
  • For programmers, GitHub offers Copilot, which generates computer code.

Even though these programs produce remarkable results, repeated use of one generative AI program to train another results in model collapse. The AI becomes dumber and dumber.

For example, consider LLMs like ChatGPT. If only the output from the original LLM #0 is used to train LLM #1, only the output of LLM #1 is used to train LLM #2, and so on, the AI eventually suffers model collapse and becomes a blubbering idiot.

Emergence of anything resembling superintelligence from this repeated process never happens. The opposite does: the result is artificial idiocy.
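To make the idea concrete, here is a minimal toy sketch in Python. It is our own illustration, not the paper's experiment: a tiny bigram word model stands in for an LLM, and each generation is trained only on text sampled from the generation before it. Because a generation can only reproduce word pairs it has already seen, the variety of the text can never grow, and with finite sampling it tends to shrink over the generations, a crude analog of model collapse.

import random
from collections import defaultdict

def train_bigram(words):
    # Record every observed word -> next-word transition (a toy stand-in for "training").
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def sample(model, start, length):
    # Generate text by walking the recorded transitions at random.
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return out

random.seed(0)
human_text = ("some church towers were built by a master mason and a small team "
              "of itinerant masons but other authors reject this model and suggest "
              "that leading architects designed the towers from early examples of "
              "perpendicular architecture ").split() * 10

corpus = human_text
for generation in range(10):
    model = train_bigram(corpus)                        # train on the previous generation's text
    corpus = sample(model, corpus[0], len(human_text))  # then generate the next generation's text
    distinct_pairs = len(set(zip(corpus, corpus[1:])))
    print(f"generation {generation}: {distinct_pairs} distinct word pairs remain")

Run it and the count of distinct word pairs drifts downward: information that falls out of one generation's sample is gone for every generation after it.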

The authors of “The Curse of Recursion,” who have identified model collapse, note that artificial idiocy can be avoided in LLMs by transfusion of fresh information from the creative minds of humans. The authors conclude “to avoid model collapse, access to genuine human-generated content is essential.”

Model collapse is illustrated using an LLM dubbed OPT-125m (Open Pre-trained Transformer), initially made available by Meta (formerly Facebook). In the following example from the paper, each new generation is trained on data produced by the previous generation. The same input prompt is given to each generation.

Here’s the common prompt given to each generation.

Input: some started before 1360 — was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular

Here is the response from the initially trained LLM:

Revival architecture such as St. John’s Cathedral in London. The earliest surviving example of Perpendicular Revival architecture is found in the 18th @-@ century Church of Our Lady of Guernsey, which dates from the late 19th century. There are two types of perpendicular churches : those…

The output from this LLM is used to train another LLM, which is used to train another LLM, and so on. Here is the response to the same prompt after nine such generations.

architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @ …

Somehow, the LLM has gotten fixated on jackrabbits. The response has little to nothing to do with the prompt.

Model collapse occurs in other uses of generative AI. Here’s an example using images.

  • The AI program GPT-4V can provide a written description of an image.
  • Give DALL-E a description of an image, and it will generate an image that matches the description.

Suppose, then, starting with a famous image like the Mona Lisa, we bounce back and forth between these two programs. GPT-4V describes the Mona Lisa, DALL-E generates a new image based on that description, GPT-4V describes this new image, and DALL-E generates a newer image based on this description. Back and forth we go. What happens?

Model collapse.

The back-and-forth iteration eventually takes us from the Mona Lisa masterpiece to a black-and-white picture of a bunch of squiggly parallel lines. A short movie of this degradation, made by Conrad Godfrey, is posted on X (formerly Twitter). It is fun and a little bit spooky to watch.
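For readers who want to see the shape of that loop in code, here is a rough sketch assuming the OpenAI Python SDK. The model names, request fields, response fields, and starting image URL are assumptions that may differ from the current API, and this is an illustration of the iteration, not Godfrey's actual script.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment
image_url = "https://example.com/mona_lisa.jpg"  # hypothetical starting image URL

for step in range(10):
    # Ask a vision model to describe the current image in words.
    description = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    ).choices[0].message.content

    # Ask an image model to redraw the picture from that description alone.
    image_url = client.images.generate(
        model="dall-e-3",
        prompt=description,
        n=1,
        size="1024x1024",
    ).data[0].url

    print(f"step {step}: new image at {image_url}")

Each pass throws away whatever the description failed to capture, which is why the picture drifts further from the original with every cycle.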

How might model collapse impact the World Wide Web of the future? LLM systems can get fresh text by going to the web for new material.

But what happens if, someday, much of the content of the web is written by generative AI? Many web scrapings will come from LLMs and not from creative humans. The generated material will be inbred and will suffer from early signs of model collapse.

Unchecked, the web might fill with content that reads like the ramblings of a blubbering idiot.

LLMs like ChatGPT produce spectacular results. Under the hood, LLMs impressively manipulate relational syntax to do their magic. They learn arrangements of words and phrases to create well-formed documents.

Humans, on the other hand, are motivated by semantics, the meaning of words and phrases. We pay attention to syntax, but the meaning of the message is of primary importance.

Model collapse illustrates that freshly generated meaning from creative humans is required to advance generative AI to higher levels of performance.

Robert J. Marks Ph.D. is Distinguished Professor at Baylor University and Senior Fellow and Director of the Walter Bradley Center for Natural & Artificial Intelligence. He is author of "Non-Computable You: What You Do That Artificial Intelligence Never Will" and "Neural Smithing." Marks is former Editor-in-Chief of the IEEE Transactions on Neural Networks. Read more of Dr. Marks' reports here.

© 2024 Newsmax. All rights reserved.

