AI slop ended up on Nvidia and CDC pages. All it took was some neglected URLs and a little SEO magic.
AI was used to turn legit pages into digital dumpster fires. Featuring fart games and fake news.

There's a strange phenomenon unfolding on what appear to be legitimate websites belonging to major companies like Nvidia and federal agencies such as the U.S. Centers for Disease Control and Prevention (CDC): they're hosting AI slop.
It appears that a spam marketing operation hijacked abandoned subdomains from companies and organizations including Nvidia, the CDC, the American Council on Education, Stanford, and NPR, according to a report from 404 Media. The AI slop found on these sites ranged from a vaccine-information hub explaining which jets were flown in the movie Top Gun, to Brazilian fart games on an Nvidia event page, to just a lot of porn.
The subdomains, such as events.nsv.nvidia.com, which were archived by 404 Media, have long since been abandoned by their respective organizations. However, because these subdomains sit under legitimate, trusted domains, content about "adult video games" hosted on them can rank near the top of Google results and draw traffic. That traffic was likely monetized in some way by whoever is behind the hijacking.
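404 Media's report doesn't spell out exactly how these subdomains were taken over, but one common route is a forgotten DNS record that still points at a service nobody maintains anymore. As a rough sketch only (not necessarily the method used here), the Python snippet below, which assumes the third-party dnspython and requests packages, shows how a site owner might check whether an old subdomain still resolves and what it is currently serving.

# Rough sketch: check whether a subdomain's DNS record is "dangling",
# i.e. it still exists but points somewhere the organization may no
# longer control -- a common way abandoned subdomains get taken over.
# Assumes the third-party "dnspython" and "requests" packages.
import dns.resolver
import requests

SUBDOMAIN = "events.nsv.nvidia.com"  # example subdomain from the 404 Media report

def check_subdomain(name: str) -> None:
    try:
        # Does the record still resolve, and where does it point?
        answers = dns.resolver.resolve(name, "CNAME")
        targets = [str(r.target).rstrip(".") for r in answers]
        print(f"{name} is a CNAME for: {', '.join(targets)}")
    except dns.resolver.NoAnswer:
        print(f"{name} has no CNAME record; check A/AAAA records instead.")
    except dns.resolver.NXDOMAIN:
        print(f"{name} does not resolve at all.")
        return

    # Fetch the page to see what is actually being served there now.
    try:
        resp = requests.get(f"https://{name}", timeout=10)
        print(f"HTTP {resp.status_code}, {len(resp.text)} bytes served.")
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")

if __name__ == "__main__":
    check_subdomain(SUBDOMAIN)

A stale record that resolves to an unclaimed host, or a page serving content the owner never published, is the kind of signal that would have flagged these subdomains long before spammers found them.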
These sites have since been taken down by the respective entities or reverted to their original form. As of this writing, the website these domains were redirecting to, stocks.wowlazy.com, no longer loads, though a summary at URLScan suggests the site is still active and includes a screenshot of what appears to be AI-generated content.
What is AI slop?
AI slop is low-quality content generated by artificial intelligence. Not long ago, this mostly meant written articles stuffed with keywords in the hope of ranking high in Google searches. Thanks to more advanced AI models such as Midjourney and Sora, the slop now includes AI-generated images and video as well.
Just this past week, editors at Wikipedia revolted over the site's plan to test AI content. The idea was to trial AI-generated "simple summaries" that would condense existing Wikipedia articles for readers. The site's human editors argued these summaries would ruin Wikipedia's reputation. While the Wikimedia Foundation says the rollout of this test is not happening, it does not appear to be completely canceled and could be tried again with more input from the human editors.
Social media is a breeding ground for AI slop. Last year, Facebook was seemingly taken over by this content, with the most prominent example being the "Shrimp Jesus" images.
AI slop has made its way to every other platform, and it will only become harder to distinguish real content from AI-generated content. At its I/O event last month, Google released its AI media tool Veo 3, which can produce strikingly realistic video from text prompts. A viral clip soon followed showcasing Veo 3's output, which was hard to identify as fake, except for the fake people in the video saying that it was.
