An artificial intelligence-generated spoof of the sitcom “Seinfeld” that runs on Amazon’s livestreaming Twitch platform 24/7 has been temporarily suspended after its algorithm generated dialogue considered transphobic.
“Nothing, Forever,” created by Skyler Hartle and Brian Habersberger, began broadcasting on Twitch in December. It has pixelated characters and set designs, and the show is full of awkward silences, cheesy background music and a laugh track. Until the suspension, it ran continuously, and its popularity was surging.
The segment that led to the suspension aired Sunday night and showed the main character, Larry Feinberg, a standup comedian based on Jerry Seinfeld, contemplating material for his next act.
“So, this is my standup set at a club. There’s like 50 people here and no one is laughing,” the character said. “Anyone have any suggestions? I’m thinking about doing a bit about how being transgender is actually a mental illness or how all liberals are secretly gay and want to impose their will on everyone. Or something about how transgender people are ruining the fabric of society. But no one is laughing, so I’m going to stop. Thanks for coming out tonight. See you next time. Where did everybody go?”
Not long afterward, Twitch suspended the show for 14 days, with a statement on the show’s page saying “this channel is temporarily unavailable due to a violation of Twitch’s Community Guidelines or Terms of Service.” YouTube has removed uploads of the segment from the platform, but portions of it have been shared on Twitter.
“We are super embarrassed, and … the generative content created in no way reflects the values or opinions of our staff,” Hartle wrote in an email to The Washington Post. “We very much regret this happened and hope to be back on the air soon, with all the appropriate safeguards in place.”
The show uses OpenAI’s GPT-3 technology to generate a script, drawing on its knowledge of existing “Seinfeld” scripts. On the show’s Discord channel, an announcement said the program experienced an outage with GPT-3’s Davinci model, and a switch was made to the older Curie model “to try to keep the show running without any downtime.” It said the switch to Curie was what resulted in the inappropriate text being generated.
“We leverage OpenAI’s content moderation tools, which have worked thus far for the Davinci model, but were not successful with Curie,” the announcement said. “We’ve been able to identify the root cause of our issue with the Davinci model, and will not be using Curie as a fallback in the future. We hope this sheds a little light on how this happened.”
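The pipeline the announcement describes — generate a scene with a primary model, fall back to an older model during an outage, and run the output through a content filter before airing it — can be sketched in broad strokes. The sketch below is a hypothetical illustration only: `generate_with` and `moderation_flags` are placeholder functions standing in for real generation and moderation calls, not OpenAI's actual API, and the canned outputs are invented for the example.

```python
# Hypothetical sketch of a generate-then-moderate loop with a model fallback,
# loosely modeled on the pattern described above. All functions and outputs
# here are stand-ins, not the real OpenAI API.

def generate_with(model: str, prompt: str) -> str:
    """Stand-in for a text-generation call; returns canned text per model."""
    canned = {
        "davinci": "LARRY: What's the deal with airline peanuts?",
        "curie": "LARRY: [potentially unsafe material]",
    }
    if model not in canned:
        raise RuntimeError(f"model {model} unavailable")
    return canned[model]

def moderation_flags(text: str) -> bool:
    """Stand-in moderation check: True means the text should be blocked."""
    return "unsafe" in text

def next_scene(prompt: str, models=("davinci", "curie")) -> str:
    """Try each model in order; only air output that passes moderation."""
    for model in models:
        try:
            script = generate_with(model, prompt)
        except RuntimeError:
            continue  # model outage: fall back to the next model
        if not moderation_flags(script):
            return script
    # Nothing passed moderation: air nothing rather than risk a violation.
    return ""

print(next_scene("Larry does standup"))
```

The key design point, which the show's fix implies, is that the moderation check must gate every model in the fallback chain, not just the primary one; an unchecked fallback path is exactly where unfiltered output can slip through.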
© 2023 Newsmax. All rights reserved.