Moxie Marlinspike built Signal as the gold standard for encrypted messaging—a tool designed explicitly to keep tech giants out of private conversations. Now he's bringing encryption technology to one of those giants. According to WIRED, Marlinspike says the technology powering his encrypted AI chatbot, Confer, will be integrated into Meta AI, potentially protecting the AI conversations of millions of people.
The move marks a significant strategic shift—not just for Marlinspike, but for the broader privacy advocacy community. For years, the prevailing wisdom among privacy-focused technologists was to build alternatives: create encrypted platforms that exist entirely outside the surveillance infrastructure of Big Tech. Signal itself was the embodiment of that philosophy—a nonprofit messenger app that refused to collect user data, monetize attention, or compromise on end-to-end encryption. It was the anti-Facebook.
But Marlinspike's decision to work with Meta suggests a different calculation: that encryption technology might protect more people by integrating into the platforms they already use rather than trying to convince them to leave. Meta AI has the scale Signal never will. If Confer's encryption can actually secure AI conversations on a platform with billions of users, that's a bigger privacy win than a beautifully encrypted app that only reaches the privacy-conscious few. It's pragmatism over purity—and it's a bet that the infrastructure itself can be made safer without requiring mass migration.
This strategy mirrors what's happening across the creator economy and digital infrastructure more broadly. Legacy institutions are hiring platform-native talent to build new revenue streams inside existing systems rather than launching competing platforms. Independent voices are negotiating for better terms within YouTube, TikTok, and Spotify rather than abandoning them for decentralized alternatives. The pattern is consistent: work inside the system, extract concessions, and try to make the infrastructure less extractive from within.
The risk, of course, is co-optation. Meta has a documented history of acquiring potential competitors and neutralizing them—Instagram and WhatsApp both started as independent platforms with distinct values before being absorbed into Meta's advertising apparatus. WhatsApp, in particular, was founded on a privacy-first ethos that has been steadily eroded through integration with Meta's broader data ecosystem. Marlinspike's technology could follow a similar path: lauded at launch, then quietly weakened through product updates, policy changes, or simple deprioritization as Meta's business needs shift.
But the alternative—continuing to build privacy tools that only reach a niche audience—has its own limitations. Signal has roughly 40 million users. Meta's platforms have more than three billion. If the goal is to protect as many people as possible, scale matters. And scale, in 2025, means working with the platforms that already have it. Regulatory pressure is forcing platforms to take privacy more seriously, creating an opening for technologists like Marlinspike to embed encryption directly into mass-market products.

The Confer integration also reveals how AI conversations are becoming a new frontier for privacy infrastructure. As more people use AI chatbots for sensitive queries—health questions, financial planning, personal advice—the privacy implications of those conversations become harder to ignore. Unlike social media posts, which users often intend to share publicly or semi-publicly, AI chats feel private by default. They're one-on-one interactions, often confessional in nature. If those conversations aren't encrypted, they're being logged, analyzed, and potentially monetized by the platforms hosting them.
Meta's willingness to integrate Confer's encryption suggests the company sees a competitive advantage in offering privacy-protected AI—or at least in being able to claim it does. Whether that's a genuine commitment or a branding strategy remains to be seen. But the fact that Marlinspike is willing to lend his reputation to the project suggests he believes there's a real technical safeguard being built, not just privacy theater.

The broader question is whether this marks a permanent shift in how privacy advocates approach Big Tech, or just a temporary détente. If Marlinspike's integration succeeds—if Meta AI conversations are genuinely encrypted in a way that even Meta can't access—it could set a precedent for other privacy-focused technologists to work with platforms rather than against them. If it fails, or if the encryption is quietly compromised, it will reinforce the argument that the only safe tech is independent tech. Either way, the move makes clear that the battle over digital privacy is no longer being fought at the margins. It's happening inside the platforms themselves.