The Ghost in the LLM: Why 'AI Usage Metrics' are a Predatory Goldmine
Verified Researcher
Feb 14, 2026

The Metric Morphine: Why We’re Addicted to Ghost Clicks
Usage metrics aren't just data; they are the currency of survival in the 'Publish or Perish' economy. But here is the hard truth: the industry’s desperate pivot toward tracking "agentic usage" (where AI bots, not humans, consume scholarly content) is not an evolution. It is a surrender. We are currently witnessing the birth of a new kind of vanity metric, one that predatory publishers are already salivating over.
We’ve spent decades fighting the Impact Factor obsession. Now, the new pitch is that a "Zero-Click" summary by an AI agent somehow mirrors scholarly value. This is a total delusion. When a person reads a paper, they engage in critical synthesis. When a bot scrapes a paper for a snippet, it is just processing tokens. Treating the two as equals turns scholarship into a commodity like crude oil, where we only care about volume and ignore substance.
The Dawn of Artificial Impact
In early 2026, we are seeing a frantic rush to standardize these "machine-consumable knowledge objects." While developers at places like LibLynx and Research Solutions intend to bring transparency to the "elephant in the room," the infrastructure they are building will inevitably be weaponized by the dark underbelly of publishing.
If we pivot to a world where "AI Reads" and "AI Citations" dictate a journal's status, we are basically setting up a high-frequency trading floor for scammers. Predatory journals, already experts at high-volume junk, will just turn loose their own botnets to "read" and "cite" their work millions of times. These agents use APIs and protocols like the Model Context Protocol (MCP) to rack up more engagement in ten seconds than a real university library manages in a whole year.
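To make the scale problem concrete, here is a minimal sketch of the kind of anomaly check a platform might run to catch this. Everything here is illustrative: the `HUMAN_BASELINE_PER_DAY` and `SUSPICION_MULTIPLIER` thresholds are assumptions, not any published standard, and `flag_suspicious_dois` is a hypothetical helper, not part of COUNTER or any real product.

```python
from collections import Counter

# Illustrative thresholds only: an assumed plausible ceiling on human
# reads per day for a popular paper, and how far above it agentic
# volume must climb before we call it suspect.
HUMAN_BASELINE_PER_DAY = 500
SUSPICION_MULTIPLIER = 100

def flag_suspicious_dois(events):
    """Flag DOIs whose agentic request volume is implausible.

    `events` is one day of traffic as (doi, is_agent) tuples.
    Returns the sorted list of DOIs whose agent-driven hit count
    exceeds the human baseline by the suspicion multiplier.
    """
    agent_hits = Counter(doi for doi, is_agent in events if is_agent)
    threshold = HUMAN_BASELINE_PER_DAY * SUSPICION_MULTIPLIER
    return sorted(doi for doi, n in agent_hits.items() if n > threshold)
```

The point of the sketch is how trivial the detection logic is once human and agentic traffic are recorded separately; mixed into one counter, the same botnet spike is invisible.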
As Michelle Urberg and Chris Bendall noted in their Feb 12, 2026, guest post on technological shifts, the gap between human and machine usage is widening, and our current standards like COUNTER were never built for an era where the primary consumer of science is a non-sentient algorithm.
The Data Skeptic's Warning: Content Chunks vs. Deep Thought
The push to slice content into snippets for AI consumption is effectively killing the Version of Record. By turning an article into a pile of "knowledge objects," we lose the methodology and the doubt that keep science honest. This fragmentation is a gift to bad actors. They don't need to build a coherent study if they can just produce a "chunk" for an LLM to grab. If the metric is AI Citations, the quality of the actual data is an afterthought. We are creating a system that rewards "AI-bait": papers designed for bots, even if the math is fake.
Reclaiming Integrity: Two Radical Proposals
We cannot "standardize" our way out of this if the standard itself treats bot traffic as equivalent to human research. To prevent the complete commoditization of fraudulent science, we need two structural shifts:
First, we need Metric Bifurcation. We have to keep Human Usage and Agentic Consumption completely separate. Any journal mixing them into one score should be blacklisted. Second, we need a Provenance Tax. If a bot uses a data chunk, it must link back to a verified, human-authored DOI with a clear history. If you can't prove the human work, the bot hits count for zero.
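The two proposals above can be sketched together in a few lines. This is a conceptual illustration under stated assumptions, not a real implementation: `UsageLedger` and its `verified_dois` registry are hypothetical stand-ins for whatever infrastructure would actually attest that a DOI is human-authored with a clear history.

```python
from dataclasses import dataclass, field

@dataclass
class UsageLedger:
    """Hypothetical ledger implementing both structural shifts.

    Metric Bifurcation: human and agentic reads live in separate
    tallies and are never summed into one score.
    Provenance Tax: an agentic hit counts only if the target DOI
    appears in a verified, human-authored registry; otherwise it
    scores zero.
    """
    verified_dois: set
    human_reads: dict = field(default_factory=dict)
    agent_reads: dict = field(default_factory=dict)

    def record(self, doi: str, is_agent: bool) -> bool:
        """Record one usage event; return True if it counted."""
        if is_agent:
            if doi not in self.verified_dois:
                return False  # provenance tax: unverified chunks count for zero
            self.agent_reads[doi] = self.agent_reads.get(doi, 0) + 1
        else:
            self.human_reads[doi] = self.human_reads.get(doi, 0) + 1
        return True
```

The design choice worth noting is that the ledger has no method for producing a combined human-plus-agent score at all; making the blended metric impossible to compute is the whole point of the bifurcation.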
Stop looking for the elephant in the room and start looking at the vultures circling the data. If we prioritize machine-readability over human-verifiability, we aren't advancing science; we're just feeding the machine that will eventually eat our credibility.



Discussion (8)
This feels like impact factor 2.0, but worse.
Spot on.
I manage an institutional repository and we are seeing this 'ghost usage' spike month over month without any corresponding increase in citations. The gap is real.
A very timely piece indeed. My colleagues and I were just discussing how these 'AI Reads' might skew our annual collection reviews. We must be careful!
Is there any talk of an open-source alternative to these proprietary tracking gateways? It seems dangerous to let a few major players define what 'utility' looks like in the age of agents.
so basically we are being tracked even when we dont click anything lol
if the publishers control the gateway they control the truth. common sense.
The transition from COUNTER standards to 'chunk-level' tracking feels like a privacy nightmare waiting to happen. How do we ensure these predatory metrics don't just become another way to squeeze library budgets?