A meteor slams into the moon in crystal-clear iPhone quality. The impact splashes across the lunar surface like a stone hitting water. The video has 3 million views, 400,000 shares, and a comment section full of people tagging friends with "Did you see this?!" None of it is real. The meteor didn't happen. The video was generated by AI. And Facebook's algorithm served it to millions of users anyway.
As Vice reported, these fabricated lunar impact videos have proliferated across Facebook in recent weeks, racking up engagement numbers that dwarf those of most legitimate science content. The videos follow a template: a massive moon hanging in the sky, a sudden meteor strike, dramatic impact effects that look like they were ripped from a PlayStation cutscene. They're shared by pages with generic names, boosted by Facebook's recommendation engine, and consumed by users who have long since stopped questioning what appears in their feeds.
This isn't a glitch. It's the system working exactly as designed. Facebook's algorithmic infrastructure doesn't distinguish between real and fake; it distinguishes between engaging and boring. A fabricated meteor video that generates millions of views is, by the platform's logic, more valuable than an actual NASA image that gets 10,000 views. The algorithm doesn't care about truth. It cares about time spent on platform. And AI-generated spectacle delivers that metric more efficiently than reality ever could.
The meteors are just the latest iteration of a content category that has metastasized across Facebook over the past two years: AI slop optimized purely for engagement. Fabricated images of animals, nonexistent historical events, invented celebrity moments, and now lunar impacts—all generated by tools that have become cheap, fast, and effective enough to flood the zone. The platforms have spent years training users to scroll through an endless feed of content without questioning its provenance. Now AI generators are exploiting that learned passivity at scale.
What makes the meteor videos particularly instructive is how little effort went into making them believable. The physics are wrong. The lighting is wrong. The scale is wrong. A real lunar impact registers as a brief pinpoint flash on a solid, airless surface; nothing splashes, and nothing ripples. Anyone with even a passing familiarity with how space actually works would spot the fabrication immediately. But the videos aren't designed for people who know better. They're designed for the algorithmic feed, where skepticism is a friction point and credulity is the default mode. The goal isn't to fool experts. It's to generate enough engagement from everyone else that the algorithm keeps serving it up.
This is the same dynamic that has turned TikTok into a dumping ground for AI slop and transformed Instagram into a carousel of fabricated inspirational quotes superimposed over stock photos. The platforms have built recommendation systems that treat all content as equivalent inputs in an engagement optimization problem. A real meteor captured by a legitimate astronomer and a fake meteor generated by someone running an engagement farm in Southeast Asia are functionally identical to the algorithm—both are just data points competing for user attention.
The business model is straightforward. Pages generate AI content at near-zero cost, accumulate massive follower counts through viral fabrications, then monetize that attention through Facebook's ad revenue sharing or by selling the page to someone running a different scam. It's arbitrage: exploiting the gap between the cost of generating fake content and the value of the attention it captures. And Facebook has built the infrastructure that makes it profitable.
The broader pattern here is one Tinsel has tracked across multiple platforms: algorithmic design is no longer neutral infrastructure. It's an editorial force that actively shapes what gets seen, what gets believed, and what gets monetized. When Facebook's algorithm serves a fabricated meteor video to 3 million people, it's not passively transmitting content—it's making an editorial decision about what deserves distribution. The platform has simply automated that decision-making process and removed any accountability for the outcomes.
What's most revealing is how little pushback these videos generate from the platform itself. Facebook has content moderation policies. It has misinformation guidelines. It has partnerships with fact-checking organizations. None of that infrastructure activates for AI-generated lunar impacts because the videos don't technically violate any specific rule. They're not political misinformation. They're not hate speech. They're not even explicitly fraudulent—just fabricated spectacle optimized for engagement. The moderation systems were built to catch specific categories of harmful content. They were never designed to filter out the ambient noise of algorithmically boosted fabrication.
The meteor videos also expose the limits of media literacy as a solution. The usual refrain when this kind of content goes viral is that users need to be more critical, more skeptical, more discerning about what they share. But that framework assumes people are encountering this content in a context where truth-testing is even possible. On Facebook, the meteor video appears between a friend's vacation photo and a recipe video. There's no byline, no source attribution, no signal that this requires verification. The platform has spent years training users to treat the feed as entertainment, not information. Asking people to suddenly activate their critical faculties for a three-second video in an endless scroll is asking them to fight against the behavioral conditioning the platform itself installed.
The economic incentives make this unsolvable at the platform level. Facebook makes money when people stay on the platform. AI-generated fabrications keep people on the platform. Cracking down on fake meteor videos would require building detection systems, hiring moderators, and reducing the volume of high-engagement content in the feed—all of which would hurt the bottom line. The platform has no financial reason to fix this. The fabrications aren't driving users away. They're driving engagement up.
What the meteor videos make visible is the final evolution of social media's relationship with reality: the feed has become a space where truth and fabrication are functionally equivalent as long as they generate the same engagement metrics. The algorithm doesn't care if the meteor is real. The users scrolling past don't have the tools to verify it. And the platform has no incentive to intervene. This isn't a bug in the system. It's the system working exactly as the business model requires.
The meteors will eventually be replaced by the next wave of AI-generated spectacle—fabricated celebrity moments, invented historical footage, deepfake disasters that never happened. The specific content doesn't matter. What matters is that the infrastructure is now in place to distribute fabrication at the same scale and speed as reality, with no mechanism to distinguish between them. Facebook didn't set out to build a platform where fake meteors get more distribution than real science. But that's what happens when you optimize for engagement and treat truth as an externality. The algorithm has reached its final form: pure fabrication, perfectly optimized, infinitely scalable, and utterly indifferent to whether any of it actually happened.