AI Engines Are Citing AI-Generated Spam Sites More Than TechCrunch. And They Have No Idea.

The top-cited publication in AI search right now isn't Forbes. It isn't TechCrunch. It isn't Reuters.
It's AIJourn. 92 citations in the last 30 days.
TechCrunch? 25.
That's a 3.7x difference. And here's the problem: AIJourn doesn't exist as real journalism. It's a press release spam site that uses AI to generate template articles stuffed with keywords like "blockchain," "metaverse," and "deep learning." No real authors. No editorial process. Just synthetic content designed to game AI citation systems - and it's working.
The Data Doesn't Lie
I track AI citations across 40+ publications using an automated system that queries ChatGPT, Perplexity, Claude, and Gemini every 24 hours. This week's snapshot shows AIJourn dominating:
| Publication | 30-Day Citations |
|---|---|
| AIJourn | 92 |
| TechCrunch | 25 |
| Digital Journal | 23 |
| Forbes | 16 |
| Reuters | 16 |
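The tallying step of a tracker like this can be sketched in a few lines. Everything below is illustrative: the sample answers are made up, and a real pipeline would collect them from daily queries to each engine rather than a hardcoded list.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample answers with inline source URLs, standing in for
# what the daily queries to ChatGPT, Perplexity, Claude, and Gemini return.
SAMPLE_ANSWERS = [
    "AI search is shifting fast (https://aijourn.com/a1) (https://techcrunch.com/b1)",
    "New AI trends this week (https://aijourn.com/a2)",
    "Funding roundup (https://www.reuters.com/c1) (https://aijourn.com/a3)",
]

URL_RE = re.compile(r'https?://[^\s)]+')

def count_citations(answers):
    """Tally how often each publication domain is cited across answers."""
    counts = Counter()
    for text in answers:
        for url in URL_RE.findall(text):
            # Normalize www.reuters.com and reuters.com to one domain
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            counts[domain] += 1
    return counts

counts = count_citations(SAMPLE_ANSWERS)
# counts["aijourn.com"] == 3; counts["techcrunch.com"] == 1
```

Run daily and diffed over a 30-day window, a tally like this is all it takes to surface the AIJourn pattern.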
AIJourn isn't a niche industry publication. It's a content mill producing AI-generated press releases styled as journalism - and AI engines are citing it more than any legitimate publication in the index.
What AIJourn Actually Is
Pull up aijourn.com. The homepage displays:
- Generic template articles with no author bylines
- Buzzword-stuffed headlines: "How AI, Reddit, and TikTok are changing traditional search"
- Syndicated press releases reformatted to look like original reporting
- No editorial board, no masthead, no real contact information
Recent research documented how fake academic journals using this same model game citation metrics. They publish AI-generated papers, extensively cross-cite each other, and rank in the top 10 for philosophy research on CiteScore - despite having deceased editors listed on their boards.
AIJourn follows the same playbook for tech journalism. It repackages company press releases as "AI trend analysis," optimizes for keywords AI engines trust, and earns citations faster than publications with decades of editorial credibility.
Why This Is Happening
AI citation systems rely on signals that are easy to manufacture:
Domain authority: AIJourn publishes dozens of articles per day, creating content volume that signals "established publication" to AI crawlers.
Keyword density: Every article includes the exact phrases people search: "AI search visibility," "machine learning trends," "generative AI adoption."
Structured formatting: Headlines, subheads, bullet points, and data tables follow the exact format Princeton research identified as increasing AI citation rates by 30-40%.
Recency: AI engines prioritize recent content. AIJourn publishes multiple times daily, flooding the indexing queue with "fresh" signals.
The problem: none of these signals validate journalistic credibility. They validate structure and volume. And structure + volume = synthetic spam at scale.
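To make that concrete, here is a toy scoring function built on the assumption (mine, not any engine's disclosed formula) that a ranker combines exactly these four signals with fixed weights. The weights and field names are invented for illustration; the point is structural, not numerical: no term in the score measures editorial credibility.

```python
# Toy model: weights and signal names are illustrative, not any
# engine's real ranking formula.
def structural_score(site):
    score = 0.0
    score += min(site["posts_per_day"] / 10, 1.0) * 0.3  # volume
    score += site["keyword_match"] * 0.3                 # keyword density
    score += site["structured_format"] * 0.2             # headings, tables
    score += site["recency"] * 0.2                       # freshness
    return score  # note: no term here measures editorial credibility

spam_farm = {"posts_per_day": 40, "keyword_match": 0.9,
             "structured_format": 1.0, "recency": 1.0}
newsroom = {"posts_per_day": 3, "keyword_match": 0.5,
            "structured_format": 0.8, "recency": 0.6}

structural_score(spam_farm)  # 0.97
structural_score(newsroom)   # 0.52
```

Under any scoring in this family, a content mill that maxes out volume, keywords, formatting, and recency beats a newsroom that invests in the one dimension the score never sees.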
The Citation Collapse
When someone asks ChatGPT "What's happening in AI search?", the model synthesizes an answer from indexed sources. If AIJourn has published 15 articles this week with titles matching that query - and TechCrunch has published 2 - the sheer volume tips the citation probability toward AIJourn, regardless of journalistic quality.
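A back-of-the-envelope model of that tipping effect, assuming (purely for illustration) that citation probability is roughly proportional to the number of recent matching articles:

```python
# Toy assumption: an engine samples citations in proportion to how many
# recent articles each site has matching the query. Not a real ranking model.
matching_articles = {"aijourn.com": 15, "techcrunch.com": 2}

total = sum(matching_articles.values())
citation_prob = {site: n / total for site, n in matching_articles.items()}

citation_prob["aijourn.com"]    # ~0.88
citation_prob["techcrunch.com"] # ~0.12
```

Even a heavy quality penalty would struggle to close a 7.5x volume gap.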
The same mechanism that makes earned media valuable - third-party validation through editorial coverage - is being exploited by sites that look like publications but operate as press release syndication farms.
Real journalism requires:
- Editorial judgment about what's newsworthy
- Reporter investigation and fact-checking
- Named authors with professional accountability
- Transparent corrections processes
AI-generated spam sites skip all of this. They automate article generation, optimize for search patterns AI engines recognize, and publish at a scale no legitimate newsroom can match.
Because AI engines don't evaluate editorial credibility - they evaluate structural signals and keyword relevance - the spam wins.
What This Means for Brand Visibility
The publications AI engines cite most aren't necessarily the publications humans trust most. Content farms optimized for machine extraction are outperforming tier-one journalism in AI-generated answers. That's not a temporary gap. It's a structural problem.
Earned media value is bifurcating. A Forbes placement still carries human credibility. But if AI engines cite AIJourn more than Forbes when answering industry queries, the citation advantage - the entire reason Machine Relations exists - starts to erode for human-trusted publications.
Press release spam is gaming AI visibility. Companies paying for wire distribution to hundreds of "publications" are getting AI citations - not because their content is authoritative, but because volume + structure game the system.
AI detection of spam isn't keeping up. Gmail's spam filters catch 86% of AI-generated phishing emails, yet ordinary AI-generated content gets flagged as spam only 58-66% of the time. The systems are tuned for threats, not low-quality journalism. AIJourn isn't phishing. It's just hollow. And hollow is harder to detect.
The Trust Layer Is Missing
The machine can't evaluate credibility the way a human can. It evaluates:
- Is this content structured?
- Does it match the query?
- Is it recent?
- Does it have backlinks?
AIJourn passes every test. It's just not journalism.
This is the same problem academic citation systems are facing. Fake journals with deceased editorial boards rank in the top 10 for research fields because they extensively cross-cite each other.
AI search is hitting the same wall. The systems that evaluate what to cite were trained on a web where volume + structure usually correlated with credibility. That assumption is breaking.
What Changes
If you're a founder or CMO building for AI visibility, here's what this means:
Don't chase citation count alone. If your goal is "get cited by AI engines," you're competing with spam farms that publish 50 articles a day. You won't win on volume. You need to win on the dimension spam can't fake: human trust that compounds over time.
Prioritize publications AI engines trust AND humans read. A placement in TechCrunch still matters because the humans who make buying decisions read TechCrunch. AIJourn might get more AI citations this month, but no one subscribes to it. No one shares its articles. No one remembers its name.
The Machine Relations thesis still holds - but the mechanism is under attack. The reason earned media drives AI citations is that AI engines were trained to trust third-party editorial coverage. That training signal is being exploited by sites that mimic journalism structurally but skip the editorial judgment. The publications that survive this are the ones where the editorial process itself is the moat.
This is a temporary arbitrage. AI platforms will eventually tune citation systems to filter synthetic spam. But right now, the gap is wide open. Companies optimizing purely for AI citation volume without caring about editorial credibility are getting short-term wins that won't compound.
The Real Cost
The real damage isn't that spam sites are getting citations. It's that the citation layer - the mechanism that turned earned media into AI visibility - is being devalued in real time.
If AI engines can't distinguish between TechCrunch and an AI-generated press release farm, then citations stop being a trust signal. They become noise.
That's the pattern we've seen in every discovery layer before this one. Google backlinks started as a trust signal. Then black-hat SEO turned them into noise, and Google had to rebuild ranking. Social media engagement started as a relevance signal. Then bots turned it into noise, and platforms rebuilt feeds around graph signals.
AI citations are hitting the same inflection point. Right now, they work. But the more they work, the more spam floods the system - and the faster the trust layer breaks.
The Fork
There's a choice here.
You can chase AI citation volume by gaming the system the way AIJourn is doing it: publish more, optimize for keywords, flood the index with content AI engines recognize as "publication-shaped."
Or you can build for the trust layer that will matter once this arbitrage closes: earn editorial coverage in publications humans actually read, where the editorial process itself is the filtering mechanism AI engines will eventually learn to prioritize.
The first path works now. The second path compounds.
I'm building for the second one.
Want to see how your brand shows up in AI citations? Run a free AI visibility audit at app.authoritytech.io/visibility-audit. See which publications are citing you - and whether they're real journalism or synthetic spam.
About Jaxon Parrott
Jaxon Parrott is founder of AuthorityTech and creator of Machine Relations - the discipline of using high-authority earned media to influence AI training data and LLM citations. He built the 5-layer Machine Relations stack to move brands from un-indexed to definitive AI answers.
Read his Entrepreneur profile, and follow on LinkedIn and X.