On Sora, OpenAI’s new social app built around its Sora 2 video model, a popular video shows a disturbingly lifelike Sam Altman sprinting out of a Target store with stolen computer chips, begging police not to take his “precious technology.” The clip is absurdist, a parody of the company’s own CEO, but it also speaks to a larger question playing out over dinner tables, in group chats and in public spaces around the country: What, exactly, is this technology for?

From subway posters scrawled with graffiti to comment sections filled with mockery, the public’s patience with AI-generated media is wearing thin. Whether it is YouTube users deriding synthetic ad campaigns or Sharpie-wielding commuters defacing New York City subway ads for AI startups, discontent with the AI boom is growing louder.

What began in 2022 as broad optimism that generative AI would make people’s lives easier has shifted toward deep cynicism: a sense that the technology heralded as a game changer is, in fact, only changing the game for the richest technologists in Silicon Valley, who benefit from a seemingly endless supply of money to build AI projects, many of which don’t appear to solve any actual problems. Three years ago, as OpenAI’s ChatGPT was making its splashy debut, a Pew Research Center survey found that nearly one in five Americans saw AI as a benefit rather than a threat. By 2025, 43 percent of U.S. adults believed AI was more likely to harm them than help them in the future, according to Pew.

Slop as a Service

As AI spreads, public skepticism is turning into open hostility toward its products and ads. Campaigns made with generative AI are mocked online and vandalized in public. Friend, a startup that spent $1 million on a sprawling New York City subway campaign, with more than 11,000 advertisements on subway cars, 1,000 platform posters and 130 urban panels, has been hit especially hard. Most of its ads were defaced with graffiti calling the product “surveillance capitalism” and urging people to “get real friends.”

“AI doesn’t care if you live or die,” reads one tag on a Friend ad in Brooklyn.

Other brands are facing similar backlash. Skechers drew criticism for an AI-generated campaign showing a distorted-looking woman in sneakers, dismissed by viewers as lazy and unprofessional. Many of its subway posters were quickly defaced, some tagged with “slop,” the memeified shorthand for the cheap, joyless flood of AI-generated content now embodied by the Altman deepfakes filling Sora.

“The idea of authenticity has long been at the center of the social media promise, for audiences and content creators alike. But a lot of AI-generated content is not following that logic,” said Natalia Stanusch, a researcher at AI Forensics, a nonprofit that investigates the impact of artificial intelligence on digital ecosystems.

“With this flood of content made using generative AI, there is a threat of social media becoming less social and users are noticing this trend,” she told Newsweek.

‘Wildly Oversold’

In an era when the digital and physical worlds are becoming nearly indistinguishable, one thing is increasingly clear: skepticism toward generative artificial intelligence is rising on both sides of the political divide. What once held the promise of innovation in the arts, an AI that could generate art, compose music or write coherent, even beautiful, prose, has begun to feel more like saturation.

The friction isn’t just about quality; it’s about what the ubiquity of these tools signals. In entertainment, backlash has mounted as high-profile artists find themselves cloned without consent. After an AI-generated song mimicking his voice went viral on TikTok, rapper Bad Bunny lashed out on WhatsApp, telling his 19 million followers that, if they enjoyed the track, “you don’t deserve to be my friends.” Similar complaints came from Drake and The Weeknd, whose AI replicas were pulled from streaming platforms after public outcry.

“The public is finally starting to catch on,” said Gary Marcus, a professor emeritus at NYU and one of the field’s most vocal critics. “Generative AI itself may be a fad and certainly has been wildly oversold.”

That saturation, according to Marcus and others, has less to do with AI’s breakthroughs and more to do with the way companies have stripped out human labor under the guise of innovation. It’s a shift that has turned into backlash—one fueled not only by developers and ethicists but by cultural figures, creators and the general public.

Alex Hanna is director of research at the Distributed AI Research Institute (DAIR), which was founded by Timnit Gebru, a co-author of the influential paper On the Dangers of Stochastic Parrots, a foundational critique of large language models (LLMs), the technology that powers chatbots like ChatGPT. Speaking to Newsweek, Hanna said public opinion is catching up to that critique.

“We’re seeing this narrative that AI is this inevitable future and it’s being used to shut down questions about whether people actually want these tools or benefit from them,” Hanna said. “It becomes an excuse to displace workers, to automate without accountability, and with serious questions about its impact on the environment.”

“Companies want to make it look like AI is magic,” Hanna added. “But behind that magic is a labor force, data that’s been extracted without consent and an entire system built on exploitation.”

One telling example: Meta’s recent launch of Vibes, a TikTok-style video app featuring only AI-generated content, was met with widespread mockery. “No one asked for this,” one viral post read. Even so, Stanusch, of AI Forensics, expects the flood of AI content to keep growing: “For the near future, we don’t expect this adoption to slow down but rather increase,” she said.

Even as capital flows into AI infrastructure buildouts, the cultural glut of “slop” is creating its own language of resistance. The term “clanker,” borrowed from Star Wars and repurposed by Gen Z, has exploded in popularity on TikTok as a meme-slur for robots and AI systems replacing human jobs. While satirical, the word reflects deeper anxieties about labor displacement, particularly among younger workers entering an economy being transformed by AI.

Still, some see a long-term upside. “The robots are coming, and they’re coming for everyone’s jobs,” said Adam Dorr, director of research at RethinkX, in an interview with Newsweek. “But in the longer term, AI could take over the dangerous, miserable jobs we’ve never wanted to do.”

Dorr, like others, urges caution, not rejection. “The challenge is: how do we make this transformation safely?” he said. “People are right to be scared. We’re already on the train—and the destination may be great but the journey will be chaotic.”

The Bubble Threat

From mental health chatbots and short-form video apps to corporate ad campaigns and toilet cameras that can analyze feces, AI is everywhere, and billions of dollars are still pouring in.

But saturation breeds doubt: what might look like cutting-edge innovation to investors is starting to look like a bubble to everyone else.

In just the first half of 2025, global investment in AI infrastructure topped $320 billion, with $225 billion coming from U.S. hyperscalers and sovereign-backed funds, according to IDC. Microsoft alone committed over $50 billion to data center expansion this year. OpenAI, Oracle, SoftBank and others are backing the $500 billion Stargate AI initiative, championed by the Trump administration.

Since returning to office, Donald Trump has made AI central to his economic agenda, fast-tracking permitting for AI infrastructure and declaring in a recent speech: “We will win the AI race just like we did the space race.”

But many experts are unconvinced the numbers add up. “AI spending outpacing current real economic returns is not a problem—that’s what many innovative technologies call for,” Andrew Odlyzko, professor emeritus at the University of Minnesota, told Newsweek. “The problem is that current (and especially projected) AI spending appears to be outpacing plausible future real economic returns.”

Odlyzko warned that much of the sector is propped up by “circular investment patterns,” in which AI companies fund one another without enough real customer demand. In one such example, Nvidia recently said it would invest $100 billion in OpenAI to help it build massive data centers, essentially backstopping its own customer. “If there was a big rush of regular non-AI companies paying a lot for AI services, that would be different,” Odlyzko said. “But there is no sign of it.”

Other experts like British technology entrepreneur Azeem Azhar have compared the current capex boom to past busts. “The trillions pouring into servers and power lines may be essential,” he wrote on his Substack, “but history suggests they are not where enduring profits accumulate.”

And while lawsuits over AI training data have begun piling up, including one filed by The New York Times against OpenAI, other disputes center on how generative tools imitate distinct styles. A viral 2025 trend saw ChatGPT produce Studio Ghibli-style images so convincingly that the beloved Japanese animation studio appeared to have endorsed the platform. It had not.

Meanwhile, AI remains deeply unprofitable at scale. Last month, the consulting firm Bain predicted the AI industry would need $2 trillion in combined annual revenue by 2030 to fund expected data center demand, roughly $800 billion more than it is on track to earn.

“There is a lack of deep value,” the tech columnist and AI critic Ed Zitron told Newsweek. “The model is unsustainable.” And yet, with billions of dollars and the weight of national policy behind it, even skeptics agree: if and when the AI bubble bursts, its impact will ripple far beyond Silicon Valley.
