The Death of the Citation: How AI Search Kernels Will Sanitize Scientific Fraud
Verified Researcher
Dec 14, 2025 · 3 min read

The End of the 'Audit Trail'
Google’s recent shift from a "Learn More" button to an "AI Mode" summary isn't just a UI tweak; it’s the final nail in the coffin for the primary source. For decades, the search engine served as a messy, chaotic, but ultimately verifiable map of human knowledge. If you wanted to verify a claim, you clicked through to a PDF or a news site. You saw the author. You saw the institutional affiliation. You saw the date.
By moving to generative synthesis, Google is basically laundering information. In the world of scholarly publishing, this is a disaster. When an AI replaces the results page, it kills the need to actually check things. It offers a smooth, fake sense of truth that hides the mess of real research. We are moving from a search economy to one based on pure, blind trust. In science, that is how you get scammed.
Synthetic Authority and the Predatory Loophole
Predatory journals have always relied on the laziness of the hurried researcher. Now, Google’s AI mode is providing them with the perfect camouflage. If the AI aggregates data from a low-quality, pay-to-play predatory outlet and blends it with findings from Nature, the end user (the student, the policymaker, even the distracted peer reviewer) loses the ability to distinguish the signal from the noise.
As David Crotty noted in his recent analysis of Google’s Year in Search, the jump from discovery to AI-generated answers marks a fundamental shift in how we retrieve and interpret the past. This is dangerous for the historical record. When a search engine stops being a door and starts being an author, it takes on editorial duties it cannot handle. It will always pick the most confident voice over the most accurate data.
The Rise of the 'Ghost Citation'
We are entering an era of "Ghost Citations." I predict that by 2026, we will see a surge in papers that cite studies that don't exist, simply because an AI summary hallucinated a connection during a researcher’s "AI Mode" literature review. The structural flaw here is the loss of the provenance chain. If you don't see the journal's name on the search results page, you don't see the red flags, the hijacked domain, the 24-hour peer review turnaround, or the lack of a COPE membership.
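To make the "ghost citation" problem concrete: many hallucinated references fail even a trivial syntactic check on their DOI, the most basic link in the provenance chain. Here is a minimal illustrative sketch (the function name is mine, and the pattern is a simplified form of Crossref's recommended DOI regex; it only tests whether a string is shaped like a DOI, not whether it resolves to a real paper):

```python
import re

# Simplified DOI pattern (loosely based on Crossref's recommended regex).
# This is a syntactic check only -- it cannot tell you whether the DOI
# actually resolves, let alone whether the journal behind it is legitimate.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:a-z0-9]+$", re.IGNORECASE)

def looks_like_doi(citation_doi: str) -> bool:
    """Return True if the string is at least shaped like a DOI.

    A hallucinated 'ghost citation' often fails even this weak test:
    it carries no DOI at all, or a malformed string that merely
    resembles one.
    """
    return bool(DOI_PATTERN.match(citation_doi.strip()))

print(looks_like_doi("10.1038/s41586-021-03819-2"))  # a well-formed DOI
print(looks_like_doi("doi:missing"))                 # fails the pattern
```

Passing this check proves almost nothing; failing it is a loud red flag that no AI summary will surface if the citation never reaches the reader in the first place.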
The glossy 2025 wrap-up from Google hides a grim reality: we are losing our grip on the primary source. When source material is tucked away behind a chat interface, the predatory publishers win. Their junk science looks exactly like the real thing to a language model. It's a goldmine for frauds.
Radical Reform: The Decentralized Registry
To survive this, we must move beyond the search engine as our gatekeeper. I propose two radical shifts:
Hard-Coded Integrity Metadata: Every digital object identifier (DOI) must carry a cryptographically signed 'Integrity Score' that AI aggregators are forced to display. If the AI summarizes a paper from a blacklisted or unvetted source, a digital 'warning label' must be injected into the LLM output.
The Death of the Summary: We must legally and ethically demand 'Audit Mode', a setting where AI search tools are prohibited from synthesizing scientific claims without presenting the full citation and the publisher's credentials alongside the text.
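To sketch what "hard-coded integrity metadata" might look like in practice: the registry signs a small record binding a DOI to its score, and any aggregator can detect tampering before synthesizing. Everything below is hypothetical (the record format, the names, the demo key); a real scheme would use asymmetric signatures from a registry such as Crossref so that aggregators can verify without holding the signing key, where this toy version uses a shared HMAC secret for brevity:

```python
import hashlib
import hmac
import json

# Placeholder secret for illustration only. A production registry would
# publish a verification key and keep the signing key private.
REGISTRY_KEY = b"demo-registry-key"

def sign_integrity_record(doi: str, score: int) -> dict:
    """Registry side: bind a DOI to its integrity score with an HMAC tag."""
    payload = json.dumps({"doi": doi, "integrity_score": score}, sort_keys=True)
    sig = hmac.new(REGISTRY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_integrity_record(record: dict) -> bool:
    """Aggregator side: reject any record whose payload was altered."""
    expected = hmac.new(
        REGISTRY_KEY, record["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_integrity_record("10.1234/example", 87)
print(verify_integrity_record(record))  # untampered record verifies

tampered = dict(record, payload=record["payload"].replace("87", "99"))
print(verify_integrity_record(tampered))  # inflated score is caught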
The search engine is dead. Long live the algorithm. But unless we fight for transparency now, the algorithm will be written by the highest bidder in the predatory publishing market.



Discussion (9)
Terrifying.
this is going to kill small journals entirely if nobody ever clicks through to the source link
Efficiency shouldn't come at the cost of provenance. If we lose the citation, we lose the thread of human discovery.
In my time we had to verify sources manually in the library! Now these machines just give you a summary without a single footnote. Dangerous times indeed.
The assumption that AI will 'sanitize' fraud is incredibly optimistic. It's more likely to hallucinate a veneer of credibility over existing bad data, making it even harder to spot the original errors.
Spot on.
I deal with this at the lab daily; we're already seeing students cite AI summaries of papers that don't even exist instead of the primary literature.
Who audits the kernel?
it finally happened research is just becoming a vibes based economy