Publisher Traffic Is Collapsing. Here Is Why GEO Practitioners Should Care.
Small publishers lost 60% of search referrals. Google beat a publishers' antitrust suit. For brands, the game is now about becoming a source inside the answer layer.
New BrightEdge data reveals Google AI Overviews surface negative brand mentions 44% more often than ChatGPT, with each engine painting different brand pictures at different moments in the customer journey.
AI-generated summaries are not neutral mirrors of the web. They shape how brands are perceived at scale — and new data suggests they do so unevenly.
A March 2026 report from BrightEdge found that Google's AI Overviews surface negative brand sentiment more often than ChatGPT does. The gap: about 2.3% of brand mentions in AI Overviews carry negative sentiment, compared to 1.6% in ChatGPT. BrightEdge characterized this as Google being roughly 44% more likely to surface negativity overall.
That fraction sounds small. It is not.
At AI scale, every fraction of a percent translates into millions of brand-negative impressions per month. A negative AI response is not a one-off search result buried on page three. It is served repeatedly to every user asking a similar question, systematically shaping demand at a scale traditional search never could.
The positive-to-negative ratios tell the broader story: Google AI Overviews run at 21:1 (positive to negative), while ChatGPT sits at 27:1. Both engines are overwhelmingly positive. But when they go negative, Google goes there more often.
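The ratios and the negative-sentiment shares can be cross-checked against each other. A minimal sketch, assuming both figures describe the same pool of brand mentions and that anything neither positive nor negative is neutral (an assumption on my part, not something BrightEdge states):

```python
# Sanity-check: do the reported positive-to-negative ratios square with the
# reported negative shares? Assumes both figures cover the same mention pool,
# with the remainder treated as neutral (an assumption, not from the report).
engines = {
    "Google AI Overviews": {"negative": 0.023, "ratio": 21},
    "ChatGPT":             {"negative": 0.016, "ratio": 27},
}

for name, d in engines.items():
    positive = d["negative"] * d["ratio"]   # e.g. 21 positives per negative
    neutral = 1 - positive - d["negative"]
    print(f"{name}: ~{positive:.0%} positive, ~{neutral:.0%} neutral")
```

Under that assumption, roughly 48% of Google AI Overview brand mentions come out positive versus about 43% for ChatGPT, with the balance neutral. The point stands either way: both engines skew heavily positive.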
The more interesting finding is not the gap in frequency — it is the gap in type.
BrightEdge's analysis showed Google AI Overviews skew heavily toward controversy-driven negativity. Lawsuits, boycotts, data breaches, regulatory actions, and product recalls account for 32% of categorized negative mentions. Google is 4.5x more likely than ChatGPT to surface this kind of news-driven criticism.
ChatGPT takes a different angle entirely. It is 3x more likely to flag product limitations, compatibility issues, and feature shortcomings. Its criticism clusters around purchase-phase evaluation — the "is it worth it?" questions that sit closer to a buying decision.
In other words, Google tends to tell users your brand is in trouble. ChatGPT tends to tell users your product has weaknesses. Both matter, but they hit at different moments in the customer journey.
Perhaps the most striking data point: when BrightEdge analyzed overlapping prompts where both engines surfaced negative sentiment, Google and ChatGPT disagreed on which brand to flag 73% of the time. Same query, different targets.
This divergence is driven by fundamentally different source ecosystems. Google leans into news-driven sourcing and controversy indexing. ChatGPT draws more heavily from product reviews, forums, and social discussions. The result is that monitoring your brand reputation in only one AI engine gives you an incomplete and potentially misleading picture.
The gap is not uniform across sectors. BrightEdge broke the data out across three verticals.
Apparel is a notable outlier — one of the few categories where ChatGPT is more negative than Google. BrightEdge did not explain why, but ChatGPT's bias toward product evaluation likely plays a role in a category where fit, quality, and "is it worth the price?" questions dominate.
Both engines concentrate their criticism in the early research phase, but Google does so more aggressively. Negative sentiment appears during informational queries 85.1% of the time in AI Overviews, compared to 68.5% in ChatGPT.
This means Google is more likely to shape a user's first impression of your brand negatively — before they have even started comparing options. ChatGPT distributes its criticism more evenly across the journey, including closer to the point of purchase where it can directly influence conversion.
Google disputed the interpretation. A spokesperson told Business Insider the methodology exaggerated the difference and that its AI Overviews simply reflect what web sources say. The actual gap, Google argued, is "less than a single percentage point."
That framing is technically accurate — the raw difference between 2.3% and 1.6% is 0.7 percentage points. But BrightEdge's point was about relative likelihood, not absolute spread. And at the query volumes these systems handle, 0.7 points is a significant number of real brand impressions.
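Both framings fall out of the same two numbers. A quick arithmetic illustration of why Google and BrightEdge can each be right:

```python
# The two reported negative-sentiment shares.
aio = 0.023      # Google AI Overviews: 2.3% of brand mentions negative
chatgpt = 0.016  # ChatGPT: 1.6%

# Google's framing: the absolute spread, in percentage points.
absolute_gap_pp = (aio - chatgpt) * 100

# BrightEdge's framing: the relative lift in likelihood.
relative_lift_pct = (aio / chatgpt - 1) * 100

print(f"Absolute gap: {absolute_gap_pp:.1f} pp")        # 0.7 pp
print(f"Relative lift: {relative_lift_pct:.0f}% more")  # ~44% more likely
```

The 0.7-point spread and the 44% relative lift are the same fact viewed from two angles; which one matters more depends on the query volume behind it.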
The immediate implication is clear: brands need to monitor their reputation across multiple AI engines, not just one. Marketers cited in Fortune's coverage are already leaning into generative engine optimization and answer engine optimization tactics to influence how their brands are framed inside AI summaries.
Specific steps worth considering:

- Monitor brand mentions in both Google AI Overviews and ChatGPT, not just one engine.
- Track where in the journey negative sentiment appears: early informational queries for Google, purchase-phase evaluation for ChatGPT.
- Strengthen presence in the sources each engine favors — news coverage and authoritative reporting for Google, product reviews and forum discussions for ChatGPT.
- Address the product limitations ChatGPT surfaces directly, since that criticism sits closest to the buying decision.
The BrightEdge data does not mean Google's AI is biased against brands. It means different AI systems, built on different source architectures, produce different brand narratives — and the brands that understand this asymmetry will be the ones managing it proactively.
James Calder is the editor of The Search Signal, covering AI-powered search, generative engine optimization, and the future of brand discovery.