Imagine opening your news feed, eager for the latest updates, only to be greeted by headlines that are not just misleading but outright fabricated. That's the reality of Google's AI-generated headlines in Google Discover, and the problem is far from resolved. Back in December, I exposed how Google had started replacing legitimate news headlines from The Verge and other outlets with AI-generated clickbait. Google seemed to be stepping back from the experiment, but the company now insists these AI headlines are here to stay, claiming they "perform well for user satisfaction." Satisfaction, though, is not the same as accuracy, and Google's AI keeps getting the facts wrong.
To put it simply, these AI headlines are like a bookstore slapping fake covers on books, except the bookstore is your Google Discover feed and the covers are AI-generated falsehoods. Google's AI recently claimed "US reverses foreign drone ban," linking to a PCMag story that explicitly debunked that very claim.
PCMag's Jim Fisher, whose story was misrepresented, told me over the phone: "It makes me feel icky. I'd encourage people to click on stories and read them, and not trust what Google is spoon-feeding them." He's right: Google should stick to human-written headlines or the summaries publications already provide. Instead, Google labels these AI-generated snippets "trending topics," even though they masquerade as legitimate stories, complete with our images and links, and get no proper fact-checking.
While Google's AI headlines have improved slightly since last month (fewer egregious clickbait examples, longer headlines), the core issue remains: the AI has no idea what's new, relevant, or true. It's like a well-intentioned but clueless intern who keeps mixing up stories. On December 26th, Google announced "Steam Machine price & HDMI details emerge," details that never actually emerged. On January 11th, it proclaimed "ASUS ROG Ally X arrives," even though the device had been out since 2024. Then there are the bait-and-switch headlines for Verge stories, which both mislead readers and undermine our ability to market our own work.
Take my colleague Jay Peters' story about RGB stripe OLED monitors: Google reduced it to the bland "New OLED Gaming Monitors Debut." My immersive 3D demo of the Lego Smart Brick became "Lego Smart Play launches March 1," a date that was old news by then. And Google's AI summarized our CES 2026 Verge Awards story as "Robots & AI Take CES," the opposite of our actual conclusion.
Google's AI isn't even replacing the worst human-written clickbait. It left untouched a headline like "Star Wars Outlaws Free Download Available For Less Than 24 Hours," which turned out to be a giveaway of a single copy of the game, open only to UK residents.
Google spokesperson Jennifer Kutz defended the feature, saying it helps users explore topics covered by multiple creators, but Google declined an interview when asked for further explanation. Meanwhile, these AI headlines are spreading beyond Discover, appearing as push notifications that lead to Google's Gemini chatbot.
Changes like these are part of why The Verge now offers a subscription: a way to survive in a world where Google's dominance threatens independent journalism. Disclosure: Vox Media, The Verge's parent company, has filed a lawsuit against Google over its illegal ad tech monopoly.