A recent ruling by the U.S. Supreme Court may become a millstone around the neck of Big AI software like ChatGPT, Bard, and DALL-E.
The ruling strengthens copyright violation lawsuits by those whose intellectual property was used to train the AI.
Generative AI learns from training data how to generate new and unique outputs.
Think of the training examples as sparsely populating a silo.
Generative AI then fills in the silo with new items that sit close to, and resemble, those examples.
A great illustration is the website this-person-does-not-exist.com.
Keep hitting refresh and you will see deepfake faces produced by a generative AI trained on photos of thousands of real people.
It’s a little spooky.
As the website name says, these people do not exist.
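To make the silo idea concrete, here is a toy Python sketch. It is purely an illustration with made-up numbers, not the actual model behind the website: the pretend generator can do nothing but blend its training examples, so every output necessarily lands near the real faces it learned from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is the feature vector of one real training face.
training_faces = rng.normal(size=(1000, 64))

def toy_generator(rng, faces):
    """Return a random convex blend of the training examples."""
    # Dirichlet with a small alpha: the blend leans on just a few faces.
    weights = rng.dirichlet(np.ones(len(faces)) * 0.05)
    return weights @ faces

fake_face = toy_generator(rng, training_faces)

# The fake sits close to (and derives from) the real examples.
nearest = np.linalg.norm(training_faces - fake_face, axis=1).min()
print(f"distance from fake to nearest training face: {nearest:.2f}")
```

Real generators are far more sophisticated, but the principle is the same: the outputs live in territory staked out by the training data.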
Lawyers defending generative AI say the training material is used under the fair use provisions of the U.S. Copyright Act. But a recent high court ruling is not good news for AI plagiarism: fair use requires that borrowed copyrighted material be "transformative."
The case involved silkscreen images of the pop artist Prince that Andy Warhol crafted from a photograph copyrighted by Lynn Goldsmith, one of which the Andy Warhol Foundation later licensed.
Was this so-called "fair use" of copyrighted material under U.S. law?
The Supreme Court says no.
The "Questions Presented" report posted by SCOTUS for Warhol v. Goldsmith begins:
"This Court has repeatedly made clear that a work of art is 'transformative' for purposes of fair use under the Copyright Act if it conveys a different 'meaning or message' from its source material."
This applies to AI.
For example, assume AI is trained with all of the musical compositions of Bach.
If the AI generates music that sounds like Bach, it is not transformative.
The "meaning or message" can be construed as being the same.
It’s still like Bach.
On the other hand, if the AI is trained only on Bach but generates music that sounds Wagnerian, it may be transformative.
But AI trained on Bach can never generate music in the style of Wagner.
AI can interpolate within the silo of its training but will never be creative by venturing outside of its silo. Generative AI cannot be transformative.
In other words, generative AI cannot think outside of the box.
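The point about interpolation can also be put in code. The following toy Python sketch, again an illustration with an invented one-dimensional "style" axis rather than any real music model, shows that a weighted blend of Bach training pieces always stays inside the training range and can never reach a "Wagner" point outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up numbers: Bach training pieces occupy [0, 1] on an invented
# "style" axis; Wagner sits far outside that range, at 5.0.
bach_pieces = rng.uniform(0.0, 1.0, size=500)
wagner_style = 5.0

# Interpolation = a convex blend (non-negative weights summing to one).
weights = rng.dirichlet(np.ones(500))
blend = weights @ bach_pieces

print(f"interpolated style: {blend:.3f}")  # always within [0, 1]
print("inside the Bach silo:", bach_pieces.min() <= blend <= bach_pieces.max())
print("reaches Wagner:", blend >= wagner_style)  # always False
```

No set of non-negative weights summing to one can push the blend past the largest training value; that is the sense in which interpolation stays inside the silo.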
The SCOTUS Report continues with a potentially severe blow to generative AI.
"… the [Second Circuit] court [previously] concluded that even where a new work indisputably conveys a distinct meaning or message, the work is not transformative if it 'recognizably deriv[es] from, and retain[s] the essential elements of its source material.'"
In a 7-2 ruling, the SCOTUS upheld this position.
AI was not an issue in the high court case.
But if AI output "recognizably derives from, and retains the essential elements of its source material," U.S. copyright law has been violated.
Does the SCOTUS ruling apply to the deepfakes generated at this-person-does-not-exist.com? The deepfake faces inarguably look like human beings.
So do all of the pictures used to train the AI.
Pictures of soup cans, flowers, and toe fungus were not used to train the AI.
Undeniably, every generated deepfake face "recognizably derives from, and retains the essential elements of its source material."
This SCOTUS ruling is good news for those already suing Big AI. For example:
- Getty Images is notorious for protecting its collection of copyrighted images. If you use a Getty image on your website, be prepared to pay, either voluntarily or in court.
- Getty is suing Stability AI, which, like this-person-does-not-exist.com, generates images using generative AI trained on images. In the lawsuit, Getty claims "Stability AI has copied more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission from or compensation to Getty Images."
- Generative AI can produce computer code when trained on computer code written by people. Late last year, a lawsuit was filed against "GitHub Copilot, its parent, Microsoft, and its AI-technology partner, OpenAI." The suit alleges: "This case represents the first major step in the battle against intellectual-property violations in the tech industry arising from artificial intelligence systems."
Bolstered by Warhol v. Goldsmith, copyright holders could bring more litigation against Big AI if the sources of its training data were exposed.
But source material used by AI is often not revealed.
OpenAI, home of ChatGPT, GPT-4, and DALL-E, is secretive about its data sources.
Should AI companies be legally forced to reveal their sources?
Mandatory disclosure, intended to make AI more accountable, was one of the suggestions made at the recent congressional hearing on AI and is under consideration in European AI regulation legislation.
Unlike U.S. reporters, who are protected by the First Amendment's freedom of the press, AI has no constitutional or legal place to hide if the law says courts can look behind the curtain.
Noam Chomsky called AI that trains on volumes of material "high-tech plagiarism."
Studies show that generative AI like ChatGPT does indeed plagiarize content.
How this plays out in court may have a significant effect on the future of the business of Big AI.
Robert J. Marks Ph.D. is Distinguished Professor at Baylor University and Senior Fellow and Director of the Bradley Center for Natural & Artificial Intelligence. He is the author of "Non-Computable You: What You Do That Artificial Intelligence Never Will" and "Neural Smithing." Marks is former Editor-in-Chief of the IEEE Transactions on Neural Networks. Read more of Dr. Marks' reports here.
© 2024 Newsmax. All rights reserved.