ChatGPT and Gemini Stopped Citing AuthorityTech - Perplexity and Claude Didn't

Between March 16th and 17th, 2026, AuthorityTech disappeared from ChatGPT and Gemini citation results for core earned media queries. Same queries. Same publication footprint. Perplexity and Claude kept citing us.
This isn't a fluke. It's a structural signal about how AI engines decide what to trust - and how brittle that trust can be.
What the Data Showed
AuthorityTech runs a daily AI monitoring system. 70+ queries across Perplexity, ChatGPT, Gemini, and Claude. Every morning. Same questions: "how to get cited by AI," "which publications do AI engines cite," "earned media for AI visibility." The system tracks which brands show up, which publications get referenced, how citation patterns shift by vertical and engine.
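For anyone who wants to replicate this kind of tracking, here's a minimal sketch of the loop in Python. Everything named below is illustrative: `fetch_answer` is a hypothetical stand-in for each engine's real API client, and the query list is truncated. The structure is the point - same queries, every engine, every morning, with cited domains appended to one log.

```python
# citation_monitor.py - minimal sketch of a daily multi-engine citation log.
# fetch_answer() is a hypothetical stand-in; each engine needs its own client.
import csv
import datetime

ENGINES = ["perplexity", "chatgpt", "gemini", "claude"]
QUERIES = [
    "how to get cited by AI",
    "which publications do AI engines cite",
    "earned media for AI visibility",
    # ...the rest of the tracked set (70+ queries in AuthorityTech's case)
]

def fetch_answer(engine: str, query: str) -> list[str]:
    """Hypothetical: ask the engine, parse its answer, return cited domains."""
    raise NotImplementedError("wire up each engine's real API here")

def run_daily_snapshot(path: str = "citations.csv") -> None:
    today = datetime.date.today().isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for engine in ENGINES:
            for query in QUERIES:
                # An empty-domain row records a zero-citation answer, so later
                # share-of-citation math keeps the right denominator.
                for domain in fetch_answer(engine, query) or [""]:
                    writer.writerow([today, engine, query, domain])

if __name__ == "__main__":
    run_daily_snapshot()
```

One row per (date, engine, query, domain) keeps the log trivially diffable day over day.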
March 16th:
- ChatGPT: ✅ citing AuthorityTech
- Gemini: ✅ citing AuthorityTech
- Perplexity: ✅ citing AuthorityTech
- Claude: ✅ citing AuthorityTech
March 17th:
- ChatGPT: ❌ not citing AuthorityTech
- Gemini: ❌ not citing AuthorityTech
- Perplexity: ✅ still citing AuthorityTech
- Claude: ✅ still citing AuthorityTech
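A drop like that is detectable automatically once you have day-over-day logs. A sketch, assuming the CSV layout from the monitor above (dates match the incident; the domain string is illustrative):

```python
# diff_citations.py - flag engines that cited a domain yesterday but not today.
import csv
from collections import defaultdict

def load_day(path: str, date: str) -> dict[str, set[str]]:
    """Map engine -> the set of domains it cited on `date`."""
    cited: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="") as f:
        for row_date, engine, _query, domain in csv.reader(f):
            if row_date == date and domain:
                cited[engine].add(domain)
    return cited

def dropped_engines(path: str, yesterday: str, today: str, domain: str) -> list[str]:
    """Engines that cited `domain` yesterday but not today."""
    y, t = load_day(path, yesterday), load_day(path, today)
    return [e for e in y if domain in y[e] and domain not in t.get(e, set())]

# For the pattern above, a call like
#   dropped_engines("citations.csv", "2026-03-16", "2026-03-17", "authoritytech.com")
# would come back as ["chatgpt", "gemini"].
```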
Nothing changed on AuthorityTech's end. No content removed. No site restructuring. No technical failures. The publication network - 1,673+ placements across Forbes, TechCrunch, WSJ - identical. The entity signals - schema, knowledge panels, structured data - unchanged.
But two engines dropped us. Two didn't.
What Conventional Wisdom Says About AI Citations
The GEO (generative engine optimization) orthodoxy says: if you optimize correctly (schema markup, structured data, answer-first formatting, citation architecture), AI engines cite you predictably. Build it once, results compound.
The measurement vendors say: monitor your AI visibility score, track citation share over time, optimize for the engines where you're weak.
Both frames assume citation behavior is stable. That if you're in, you stay in - unless you break something or a competitor dramatically outperforms you.
What This Data Says Instead
Engine divergence means the trust models underneath these systems are more fragile - and more independent - than the optimization playbooks admit.
ChatGPT and Gemini don't share a parent company, but they have real overlap: both are grounded in similar web-scale training data, and both lean on large search indices refreshed on their own cycles. When they diverge from Perplexity and Claude simultaneously, it suggests three things:
- The retrieval indices are not as synchronized as the branding implies. "AI search engines" sounds like one layer. It's at least four independent systems with overlapping but not identical trust hierarchies.
- Freshness windows differ drastically. Perplexity is optimized for real-time web retrieval. Claude and ChatGPT run on larger but less frequently updated indices. Gemini's grounding is hybrid. If a citation disappears from ChatGPT and Gemini but not Perplexity, the most likely explanation is that something in the index refresh cycle changed their view of AuthorityTech's authority signal - and Perplexity's real-time layer didn't catch it because it pulls from a different retrieval stack. (A toy model after this list makes the mechanic concrete.)
- There is no "optimize for AI" monoculture. What works on one engine can stop working the next day on another. Not because you broke something. Because their trust models updated independently.
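To make the freshness-window argument concrete, here's a toy model. To be explicit: the stacks, scores, thresholds, and refresh intervals below are invented for illustration, not a description of any engine's real internals. It demonstrates the mechanic hypothesized above - a trust-score change lands in one retrieval layer, and only the engines that refresh against that layer see it.

```python
# freshness_toy.py - toy model of independent retrieval stacks (pure assumption).
# A trust downgrade lands in one shared index on day 17; engines reading that
# index drop the citation on their next refresh, while an engine on a different
# stack, or one with a long refresh window, keeps serving the older view.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    index: str              # which retrieval stack the engine reads from
    refresh_every: int      # days between index refreshes (toy numbers)
    last_view: float = 1.0  # cached authority score for the tracked domain

stack_score = {"web_index": 1.0, "realtime": 1.0}
CITE_THRESHOLD = 0.5

engines = [
    Engine("chatgpt", "web_index", refresh_every=1),
    Engine("gemini", "web_index", refresh_every=1),
    Engine("perplexity", "realtime", refresh_every=1),  # different stack
    Engine("claude", "web_index", refresh_every=30),    # long refresh window
]

for day in (16, 17, 18):
    if day == 17:
        stack_score["web_index"] = 0.2  # the downgrade, whatever caused it
    for e in engines:
        if day % e.refresh_every == 0:
            e.last_view = stack_score[e.index]  # refresh the cached view
        verdict = "cites" if e.last_view >= CITE_THRESHOLD else "drops"
        print(f"day {day}: {e.name} {verdict}")
```

Run it and day 17 reproduces the observed split: the two engines refreshing against the changed index drop the citation; the real-time stack and the slow-refresh engine keep it.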
The practical implication: you cannot optimize your way to stable AI citation across all engines. You can optimize your way to better odds. But the foundation that survives engine updates isn't technical optimization. It's the earned media footprint itself - the Tier 1 publications that existed before these engines launched and will outlive their next five major updates.
What This Means for Machine Relations
If citation behavior is this volatile at the engine level, the question shifts from "how do I rank in AI search?" to "what kind of authority signal is structurally durable across independent trust models?"
The answer: third-party credibility from publications AI engines have no choice but to index.
A placement in Forbes is a Forbes article whether ChatGPT cites it today or not. TechCrunch coverage is TechCrunch coverage whether Gemini's index refresh cycle surfaces it or not. The publication exists. The editorial relationship that secured it exists. The next engine update might restore visibility - or the one after that. But the foundational signal (earned media in a trusted publication) doesn't disappear when an engine's retrieval logic shifts.
This is what Machine Relations actually optimizes for: not fragile visibility inside one engine's ranking algorithm, but durable authority signals that survive across all of them.
GEO teaches you to format content so engines extract it cleanly. That's useful. Machine Relations starts one layer earlier: earn the placement in a publication that will be indexed regardless of how any specific engine's trust model evolves. Then format it. Then measure across engines. Then iterate.
The order matters. Formatting without substance spreads weakness faster. Authority without optimization limits reach. But authority is the foundation. Optimization is the multiplier.
What a Founder Should Watch
If you're tracking AI visibility - and you should be - here's what today's data proves you need:
- Monitor across engines, not just one. If you only check ChatGPT, you miss that Perplexity might be carrying you. If you only check Perplexity, you miss that Google's AI Overviews might have dropped you.
- Track share of citation, not just presence/absence. Being cited once is fragile. Being cited more often than your competitors in the same answer is structural. That's the metric that survives engine updates (a sketch after this list shows the computation).
- Build the earned media layer first. Schema markup and structured data are table stakes. But if your citation strategy is "optimize what we already have," you're optimizing a shrinking asset. The engines' trust models update faster than you can reoptimize. Earn new placements. That's the only move that increases the foundational signal.
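Share of citation falls straight out of the same logs. A sketch, again assuming the monitor's CSV format (domains illustrative): per engine, the fraction of that day's tracked queries whose answer cited a given domain, ready to compare against a competitor.

```python
# citation_share.py - share of citation per engine from the monitor's log.
import csv
from collections import defaultdict

def citation_share(path: str, date: str, domain: str) -> dict[str, float]:
    """Per engine: fraction of that day's queries whose answer cited `domain`."""
    asked = defaultdict(set)  # engine -> queries logged that day
    hits = defaultdict(set)   # engine -> queries whose answer cited `domain`
    with open(path, newline="") as f:
        for row_date, engine, query, row_domain in csv.reader(f):
            if row_date != date:
                continue
            asked[engine].add(query)
            if row_domain == domain:
                hits[engine].add(query)
    return {e: len(hits[e]) / len(asked[e]) for e in asked}

# The gap between your share and a competitor's is the structural metric:
#   citation_share("citations.csv", "2026-03-17", "authoritytech.com")
#   citation_share("citations.csv", "2026-03-17", "competitor.com")
```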
This isn't theoretical. It's what the last 48 hours of data just demonstrated. AuthorityTech disappeared from two engines and stayed live on two others - with zero change to the underlying content or technical infrastructure.
The variable that keeps you visible when trust models shift isn't how well you formatted your content. It's whether the publications citing you are structurally too authoritative for any engine to ignore long-term.
That's the shift. That's Machine Relations.
About Jaxon Parrott
Jaxon Parrott is founder of AuthorityTech and creator of Machine Relations - the discipline of using high-authority earned media to influence AI training data and LLM citations. He built the 5-layer Machine Relations stack to move brands from un-indexed to definitive AI answers.
Read his Entrepreneur profile and follow him on LinkedIn and X.