
OpenAI just killed Sora.
That’s an amazing development. When the company rolled out the video-creation site, and later the app, reviewers called it a trailblazer because it combined video generation with sound effects, spoken dialog, and the ability to create a specific character from a reference image and reuse that character across multiple videos (a Sora 2 feature called “Character”).
Sora was seen as a threat to jobs in filmmaking and marketing. After watching an early demo of Sora, actor and filmmaker Tyler Perry canceled construction on an expansion of his film studio in Atlanta, Georgia.
Hollywood’s powerful CAA talent agency in October issued a statement saying that Sora was a threat to the livelihood of actors.
The Los Angeles Times said Sora represented a “firestorm” in the movie industry.
And critics pointed out how easily and convincingly Sora could create realistic videos involving actual people. (OpenAI recently banned users from uploading pictures of real human faces.)
So if Sora was so disruptive, so threatening, and such a money-saver, why did OpenAI shut it down?
The company even tore up a $1 billion deal with Disney, signed in December, which would have brought 200 Disney characters into the Sora video app.
Though OpenAI claimed on Tuesday that it killed Sora to focus on robotics, critics and skeptics argue that the real reasons include high computing costs, shrinking user numbers, legal threats over copyright, and a strategic shift ahead of an expected fourth-quarter 2026 IPO.
In other words, AI-generated video isn’t worth it, isn’t a priority, and isn’t the technology miracle everybody thought it was.
Still, that’s a long distance from the hype and fear of two years ago. What happened?
The backlash
With each passing day, it becomes clearer that ever-improving AI generation tools were more of a parlor trick and novelty than a revolution, and that the novelty is wearing off.
When gamers reacted negatively to Nvidia’s generative AI graphics enhancement tool DLSS 5, which launches this fall, fearing that it would replace human artists, Nvidia CEO Jensen Huang said, “I don’t love AI slop myself,” but argued that critics misunderstand what the tool is all about.
A TikTok account got millions of likes and followers for so-called “fruit slop” videos about a reality show called “Fruit Love Island” starring “fruit people,” but the backlash against the account came even faster than its rise. Critics say the popularity of the videos represents a society gone mad and a generational crisis for young people. The account is now gone, replaced by hundreds of copycat accounts.
Parents are also concerned that low-quality AI slop is infiltrating children’s content on sites like YouTube. These videos seem mass-produced with little care about content. Many of these videos show the kind of nonsense we’ve grown to expect from AI slop. From an article on Undark:
“The video opens with the children riding, without seatbelts, in the front row of a moving vehicle. The next scene shows the girl defying physics, floating alongside a moving car, while the boy is seated in what appears to be the hood of the vehicle as it travels backward down a busy street. The third and fourth scenes show the children walking in the middle of the road with moving cars behind them.”
But these stories don’t quite identify the larger problem, which is that a huge number of people and businesses think that AI slop is some kind of secret weapon to represent their ideas and brands in the real world.
Small shops and restaurants in New York City, for example, are using AI pictures to advertise food. The food tends to look nothing like the actual food, creating a persistent, low-level false advertising problem among small businesses.
AI slop is used for all kinds of purposes on social media. One of my least favorite uses is depicting history. For example, I’m a big fan of Mexican history, and I’ve seen multiple AI slop videos showing, say, the island city of Tenochtitlan (the Aztec city that stood where Mexico City now sits). Designed to help people visualize the past, these videos instead create a completely false impression of it.
AI slop: it’s just cheap
AI slop, which is appealing to some users because it’s cheap, and opposed by other people because it’s cheap, is causing unexpected problems.
Brands like J.Crew and Coca-Cola have seen their reputations tarnished when they tried to use AI-generated pictures or videos for marketing actual products.
It’s also causing resentment because one of its main uses is to emotionally manipulate people. The Chinese version of TikTok, for example, is awash with “regret videos,” whereby aging parents with unmarried offspring use AI to try to frighten their kids into marriage.
The shaming videos use AI to “age” women, showing them bitter and alone, often in hospitals, juxtaposed with happy women surrounded by their families. Some include dialog, such as: “I regret it. My parents told me to get married and have kids. I didn’t listen, thinking it was too much trouble. Look at me now!” according to one report.
AI-generated imagery is often used explicitly to evoke the emotions its creators want audiences to feel: fear, revulsion, outrage, and pity.
AI-generated slop often seeks to bypass reason and go straight for the emotions. It replaces words with pictures. Often, the pictures make no sense, and that hardly matters; the aim is emotion. And while some people welcome this kind of emotional manipulation, far more resent being manipulated.
And the world of content is taking a stand as well. In January, San Diego’s Comic-Con banned all AI comics. Last week, the digital comic platform GlobalComix removed AI content. Marvel Comics Editor-in-Chief C.B. Cebulski established a strict anti-AI policy during the New York Comic Con in late 2025.
Book publishers are resisting as well. Hachette Book Group canceled the upcoming release of a horror novel titled “Shy Girl” by Mia Ballard because it suspected the author used AI to write it.
By early 2026, many public libraries had begun writing strict collection policies to filter out books with AI-generated content.
In early 2026, Instagram updated its system to actively penalize highly polished, synthetic content.
So what we’re witnessing is a massive backlash to AI slop, especially visual content like videos and pictures. And the backlash is so great, it even killed Sora.
AI disclosure: I don’t use AI to do my writing. The words you see here are mine. I do use a variety of AI tools via Kagi Assistant (disclosure: my son works at Kagi), backed up by Kagi Search, Google Search, and phone calls, to research and fact-check. I wrote the column in a word processing application called Lex, which has AI tools, and afterward used Lex’s grammar-checking tools to hunt for typos and errors and suggest word changes. Here’s why I disclose my AI use.
