Google announced AI Mode, a tab that answers long, complex or visual questions in a single capsule with links. According to Liz Reid, head of Search, usage of such queries grew more than 10% in the US and India.
AI Mode splits each request into many parts, fires hundreds of mini-searches, then stitches the answers. Google calls this “query fan-out” and says it digs deeper than classic search.
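Google has not published the internals of query fan-out, but the idea it describes can be sketched in a few lines: decompose one broad question into sub-queries, run them concurrently, and merge the results. The `split_query` and `mini_search` functions below are illustrative stand-ins, not Google's actual pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def split_query(question: str) -> list[str]:
    # In AI Mode a language model performs this decomposition;
    # here we fake it with fixed facets.
    return [f"{question} definition",
            f"{question} recent news",
            f"{question} comparisons"]

def mini_search(sub_query: str) -> str:
    # Stand-in for a call to a real search backend.
    return f"result for: {sub_query}"

def fan_out(question: str) -> list[str]:
    # Fire the mini-searches in parallel, then collect the answers
    # in order for a later synthesis step.
    subs = split_query(question)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(mini_search, subs))
```

The synthesis step, where the collected snippets are stitched into a single answer, is the part a tuned model would handle; this sketch stops at gathering the parallel results.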
A tuned Gemini 2.5 model drives the feature. Google also previewed Deep Search, which writes a fully referenced report in minutes, and Search Live, which adds real-time camera chat. Google says AI Mode will surface more web links than a classic results page, keeping creators visible.
Why Does This Change Matter To Mozilla?
Mozilla still earns over 80% of its income from the deal that keeps Google as Firefox's default search engine, according to a Ghacks report on the current antitrust trial. If the court blocks renewal, Mozilla faces a sharp drop in revenue and must hunt for another partner. Microsoft's Bing could pay, though its past bids were lower once Google stepped aside.
The open-source outfit has already tried Perplexity inside Firefox, hinting at a pivot toward AI answer engines. Coherent Market Insights forecasts that AI search could pull in $108 billion a year by 2032, so a new tie-up might look tempting, despite the privacy puzzles it would raise for users.
What Did Researchers Learn About AI Search Accuracy?
The Tow Center for Digital Journalism tested 8 AI search tools across 1,600 queries and found more than 60% ended in wrong answers. ChatGPT named 134 articles incorrectly and expressed doubt only 15 times, while Gemini refused political topics even when publishers allowed crawling.
Grok and Perplexity retrieved paywalled text their crawlers were blocked from reaching, showing that Robot Exclusion Protocol (robots.txt) rules lack bite. Paid tiers looked sharper on first scan but racked up extra errors because they seldom admitted limits.
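The point about the Robot Exclusion Protocol lacking bite is that compliance is entirely voluntary: robots.txt tells a crawler what to skip, but nothing enforces it. A minimal sketch with Python's standard `urllib.robotparser` makes this concrete; the `ExampleBot` name and the robots.txt content are hypothetical.

```python
from urllib import robotparser

# A hypothetical publisher robots.txt that fences off a paywalled section.
ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /premium/

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant ExampleBot must skip the paywalled section...
print(rp.can_fetch("ExampleBot", "https://example.com/premium/story"))  # False
# ...and may fetch the open section.
print(rp.can_fetch("ExampleBot", "https://example.com/news/story"))     # True
```

Nothing stops a crawler from simply never consulting this file, which is what the Tow findings about paywalled text suggest happened in practice.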
The study paints engines that value certainty over truth and leave readers guessing which parts to trust.
How Do Poor Citations Affect Newsrooms?
Google’s DeepSearch credited the wrong outlet in 115 of 200 samples, and Grok 3 linked to broken pages 154 times, the same study said. Bots often guide readers to syndicated copies on Yahoo News instead of the paper that paid for the work.
Mark Howard, chief operating officer at Time, warned that weak labelling drains traffic and erodes confidence in both publisher and tool.
Which Fixes Can Rebuild Trust?
Tow researchers call for public error dashboards, clear crawler names and a plain list of newsrooms feeding each model. Transparency would let editors audit performance and push for fixes.
Website owners also want an enforceable “do not crawl” switch that leaves classic ranking untouched. Without it, blocking a bot can feel like stepping off the open web altogether.
Designers can pin clear source panels beside every generated paragraph, nudging visitors toward the original page. Until those guards are in place, the safest habit is simple: whenever an AI search tool gives an answer, click through to the original report, read the full context, and check twice before sharing.