Perplexity Runs a Different Trust Hierarchy Than Every Other AI Engine

PR Newswire has 677 AI citations across our 30-day monitoring window. It sits at the top of the publication leaderboard. Perplexity has cited it exactly twice.
That is not a rounding error. That is a structural split.
We have been tracking citation behavior across AI engines for months: 50 B2B buying queries spanning nine verticals, run against Perplexity, ChatGPT, and Gemini. As of this week, 133 of the 139 publications that appear anywhere in the citation index have zero Perplexity citations. The publications dominating the total-volume leaderboard are essentially absent from the engine that B2B decision-makers increasingly use for vendor research.
What the split actually looks like
Here is the top of the citation index and what Perplexity is actually doing with each source:
| Publication | Total citations (30d) | Perplexity citations |
|---|---|---|
| PR Newswire | 677 | 2 |
| Medium | 560 | 0 |
| TechCrunch | 167 | 19 |
| Forbes | 80 | 0 |
| TechBullion | 73 | 1 |
| CIO.com | 53 | 9 |
| Business Insider | 36 | 4 |
| The Next Web | 32 | 0 |
TechCrunch is the only publication in the top ten that Perplexity cites with any real frequency. It jumped 142 citations in the last seven days, the largest mover in the index, and Perplexity accounts for more than 11% of its total. That gap does not happen by accident.
Medium has 560 citations. All from Exa, zero from Perplexity. PR Newswire has 677. Same story. These are sources that AI engines use for content retrieval but that Perplexity specifically filters out when responding to direct buying queries.
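The split is easy to quantify from the table above. A minimal sketch, using the figures from this post (the dictionary and function names are illustrative, not part of our tooling):

```python
# Citation counts from the 30-day leaderboard in the table above.
leaderboard = {
    "PR Newswire":      {"total": 677, "perplexity": 2},
    "Medium":           {"total": 560, "perplexity": 0},
    "TechCrunch":       {"total": 167, "perplexity": 19},
    "Forbes":           {"total": 80,  "perplexity": 0},
    "TechBullion":      {"total": 73,  "perplexity": 1},
    "CIO.com":          {"total": 53,  "perplexity": 9},
    "Business Insider": {"total": 36,  "perplexity": 4},
    "The Next Web":     {"total": 32,  "perplexity": 0},
}

def perplexity_share(counts: dict) -> dict:
    """Each publication's Perplexity citations as a percent of its total."""
    return {
        pub: round(100 * c["perplexity"] / c["total"], 1)
        for pub, c in counts.items()
    }

shares = perplexity_share(leaderboard)
print(shares["TechCrunch"])   # 11.4 -> the "more than 11%" figure
print(shares["PR Newswire"])  # 0.3 -> dominant by volume, absent on Perplexity
```

Run against the whole index, this ratio is the metric the rest of this post argues for: total volume and Perplexity share as two separate numbers, not one.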
A Muck Rack analysis of over one million AI citations found that 82% came from earned media sources, with institutional credibility as the dominant sorting variable. Perplexity's behavior fits that pattern more aggressively than any other engine we track.
Why Perplexity's filtering is more severe
Perplexity is used differently than other AI engines. People bring it to research they plan to act on: vendor comparisons, investment decisions, technology evaluations. A 2026 benchmarks report from Conductor found that AI visibility measurement has fundamentally shifted toward answer engine behavior, where source trust weighting directly determines who appears and who does not.
That usage pattern appears to have shaped Perplexity's source weighting toward editorial integrity over distribution volume.
Wire content and high-volume aggregators dominate raw citation counts because they produce enormous quantities of indexed text. Perplexity filters most of it out when someone asks a direct research question that could influence a buying decision. The earned authority signal Perplexity is reading is one that wire distribution cannot manufacture.
This is consistent with findings from the Conductor report: brands investing in answer engine presence are discovering that quality-weighted citation signals require fundamentally different content strategies than volume-weighted ones.
What the total-volume metric is missing
Most brand strategies for AI visibility still treat all citations as roughly equivalent. Get placed, get cited, measure volume. The assumption baked into that approach is that AI engines behave roughly the same across the board.
The data says that assumption is wrong.
The share of citations that matters for B2B decisions is not distributed evenly across engines. A brand that optimizes for raw citation volume might look dominant in the aggregate while being nearly absent from the specific engine its buyers actually use for due diligence.
This is not a niche concern. Perplexity's share of B2B research queries is growing. The trust model it applies to sources is more selective than any other engine in our index. Brands that built their AI visibility strategy around wire distribution are running a strategy that is optimized for the wrong output.
Key takeaways
- 133 of 139 cited publications have zero Perplexity citations, despite appearing across other engines
- TechCrunch is the only publication in the top 10 that Perplexity cites with any consistency, at more than 11% of its total
- PR Newswire, the top-cited publication overall, has 2 Perplexity citations across 677 total
- Perplexity's source filtering appears to apply institutional credibility weighting that wire distribution cannot satisfy
The citation architecture that shows up in Perplexity results is structurally different from the one driving total citation counts. If you are not measuring both separately, you are optimizing for the signal that does not drive decisions.
This is the question at the center of Machine Relations strategy: not whether you are being cited, but where, and by which engine, at what moment in the buyer's decision process. The publication data we track exists because that distinction matters more than any volume metric.
More on this in my earlier post on how AI citation recalibration events are reshaping publication value. The Perplexity split is one of those recalibration events in real time. The brands that adjust now are the ones that will own AI visibility in their categories in 18 months.
AuthorityTech's publication intelligence data exists specifically to make this distinction visible. Want to know where your brand sits on Perplexity's trust hierarchy versus total citation volume? Run a free visibility audit.
About Jaxon Parrott
Jaxon Parrott is founder of AuthorityTech and creator of Machine Relations — the discipline of using high-authority earned media to influence AI training data and LLM citations. He built the 5-layer Machine Relations stack to move brands from un-indexed to definitive AI answers.
Read his Entrepreneur profile and follow on LinkedIn and X.