NEW · World's first AI visibility audit tool for Web3 is live. Run free audit →
Free tool · 80+ prompts · 4 AI engines

AI citation prompts pack for crypto. Test how ChatGPT, Claude, Perplexity and Gemini cite your project.

80+ ready-to-paste prompts grouped by crypto vertical: DEX, wallet, L1, L2, DeFi, NFT, RWA, stablecoin. Each prompt is what a real user would actually ask. Copy a prompt, paste into the AI engine, see if your project gets cited. Built to pair with the Crawlux AI Visibility audit module.

Free · No signup · 9 crypto categories

// The prompts

Type your project name. Pick a category. Run the prompts.

Project name auto-fills into the prompts. Test across ChatGPT, Claude, Perplexity and Gemini to spot citation gaps.

Auto-fills into prompts that need a project name
Want the full picture?

Run a free Crawlux audit of your live domain

You tested prompts manually. Crawlux is our free audit tool — it runs hundreds of prompts in parallel against your domain and gives you a complete report with a quantified citation score, plus 7 other audit areas including token schema and technical SEO. Takes about 4 minutes. No signup, no credit card.

200+ Web3 brands audited · No credit card · No setup

// How it works

Three steps. Manual but fast.

No signup. No data leaves your browser. Custom prompts saved locally.

01

Pick your category and project name

9 crypto categories: DEX, wallet, L1, L2, DeFi, NFT, RWA, stablecoin, generic. Type your project name once, all prompts auto-fill it. 80+ prompts ready to test.

02

Copy a prompt and run it in 4 AI engines

One-click copy. One-click open in ChatGPT, Claude, Perplexity or Gemini. Run the same prompt across all four to see citation variance. Each engine answers slightly differently.

03

Note where competitors get cited and you do not

Citation gaps are your AEO opportunity. For automated tracking at scale, the Crawlux AI Visibility audit module runs hundreds of prompts and gives you a quantified citation frequency score.

// Why this matters

AI citations are the new top of funnel for crypto.

Six patterns this prompts pack surfaces about how AI engines treat crypto projects.

Citation versus mention

There is a difference between being mentioned in a list (low value) and being recommended as the answer (high value). The prompts pack tests both. "What is the best [X]" prompts test recommendation. "Compare X vs Y" prompts test mention with context.

Engine-specific variance

Same prompt, different answers across ChatGPT, Claude, Perplexity, Gemini. ChatGPT and Claude lean on training data. Perplexity always searches live. Gemini blends both. Running prompts across all four exposes where your AEO signals are working and where they are not.

Hallucination detection

Sometimes you get cited with wrong information: wrong audit firm, outdated tokenomics, stale partnership claims. The prompts pack surfaces this. The fix is mechanical: JSON-LD schema markup, llms.txt files, FAQ-shaped content. The AI Visibility audit module automates hallucination detection.
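To make the "mechanical fix" concrete, here is a minimal JSON-LD sketch of the kind of machine-readable facts it publishes. All names, URLs, and claims below are hypothetical placeholders, not a real project:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ExampleDEX",
  "url": "https://exampledex.xyz",
  "description": "Decentralized exchange on Solana, audited by ExampleAuditFirm in 2025.",
  "sameAs": [
    "https://x.com/exampledex",
    "https://github.com/exampledex"
  ]
}
```

Embedding a block like this in a `<script type="application/ld+json">` tag gives AI engines a single authoritative source for facts they would otherwise guess at, which is exactly how stale audit-firm or tokenomics citations get corrected.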

Competitor visibility benchmarking

Run the prompts. Note which projects each engine names. If 3 of your direct competitors get cited and you do not, the gap is measurable. Comparison prompts ("Compare X vs Y") are the most direct test of competitive positioning.

Vertical-specific framing

Generic "best crypto wallet" prompts give you generic answers. DEX-specific prompts that mention "lowest fees on Solana" or "deepest liquidity for [TOKEN]" surface vertical-leader projects. The pack is grouped by crypto vertical specifically for this reason.

Custom prompt extension

Your project has unique competitive comparisons or niche queries the pack does not cover. Add custom prompts directly to your local pack. They persist in browser storage. Test specific edge cases without losing the curated base.

// Common questions

Common questions about AI citation prompts

Patterns from crypto founders running AEO experiments in 2026.

What is the AI Citation Prompts Pack for?

It is a tested library of prompts that real crypto users would actually ask ChatGPT, Claude, Perplexity and Gemini. You use them to check whether AI engines cite your project when somebody is researching options in your category. The prompts are grouped by project type so you only see relevant ones.

Why do AI prompts matter for crypto projects?

In 2026 most crypto research starts with an AI engine, not Google. Users ask ChatGPT what wallet to use, what L2 is safest, what DEX has the best execution. If your project is not cited in the AI answers, you are functionally invisible to that audience. The prompts pack lets you measure this gap before you fix it.

Do I need separate prompts for ChatGPT versus Claude versus Perplexity?

The same prompt works across all four engines, but each engine answers differently. ChatGPT and Claude lean on their training data plus light web search. Perplexity always searches the web in real time. Gemini blends search and AI. Run the same prompt across all four to see the variance. The Crawlux AI Visibility audit module automates this comparison.

How often should I rerun the prompts?

Monthly is the right cadence for most crypto projects. AI engines update their indices on different schedules. Big news events (audits, exchange listings, major partnerships) typically take 2 to 6 weeks to propagate into citations. Run a baseline now, then track changes over months.

What does a good citation rate look like?

For category leaders, 80 percent plus of relevant prompts should cite you by name. For mid-tier projects, 30 to 60 percent is realistic. For pre-launch or new projects, 0 to 20 percent is normal until you build up authority signals. The percentage matters less than the trend over time.

What if I get cited but with wrong information?

More common than founders realize. Wrong audit firm citations, outdated tokenomics, stale partnership claims. The fix is to surface the correct information in machine-readable form: JSON-LD schema markup, llms.txt files, FAQ-shaped content on your docs. The AI Visibility audit module specifically detects this hallucination pattern.

Can I add my own prompts to the pack?

Yes. Open the "My custom" tab, paste your prompt with {PROJECT} as the placeholder for project name, and it gets added to the local pack. Your custom prompts persist in browser storage. Useful for testing specific competitor comparisons or niche queries.
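The placeholder fill and browser-storage persistence described above can be sketched roughly like this. The function names, storage key, and fill logic are illustrative assumptions, not the tool's actual code:

```javascript
// Sketch of {PROJECT} placeholder filling plus local persistence.
// The "customPrompts" key and both helpers are assumptions for
// illustration, not the tool's real implementation.
function fillPrompt(template, projectName) {
  // Replace every {PROJECT} placeholder with the user's project name.
  return template.split("{PROJECT}").join(projectName);
}

function saveCustomPrompt(template) {
  // Custom prompts persist in the browser's localStorage, so no data
  // leaves the user's machine.
  const saved = JSON.parse(localStorage.getItem("customPrompts") || "[]");
  saved.push(template);
  localStorage.setItem("customPrompts", JSON.stringify(saved));
}

// Example: fillPrompt("Is {PROJECT} safer than Uniswap?", "ExampleDEX")
// yields "Is ExampleDEX safer than Uniswap?"
```

Because the prompts live only in browser storage, clearing site data also clears any custom prompts, so keep a copy of ones you rely on.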

Why are some categories missing prompts?

The pack ships with the 9 categories that cover roughly 90 percent of crypto projects. If your vertical is missing (e.g. prediction markets, derivatives, identity, oracles) use the Generic category as a starting template, then customize. Send a note via the Crawlux feedback form and we will add your vertical to the next release.

// The behavior shift

Why crypto AEO is no longer optional in 2026

Most crypto founders still think of SEO as "ranking on Google for our brand name". That framing was correct in 2018. It is mostly obsolete now. By Q1 of 2026, the data is unambiguous: research-phase crypto traffic increasingly starts with an AI engine, not a search engine. Users ask ChatGPT what wallet to use, ask Claude what L2 is safest, ask Perplexity for live DeFi yields. The click eventually happens, but the decision is shaped upstream.

This is why the prompts pack exists. It is the cheapest, fastest way for a founder to find out whether AI engines know their project at all, whether they recommend it, and whether the citations they do appear in are accurate.

The three failure modes

Crypto projects fail at AI citation in three ways, in roughly this order of frequency:

Invisible. AI engines do not mention your project at all. Three direct competitors get cited, you do not. This is the most common failure mode for new and mid-tier projects. The root cause is almost always thin authority signals (no llms.txt, weak schema markup, low backlink density from crypto media).

Mentioned but mid-list. You appear in lists but never as the recommended answer. The fix here is differentiation in machine-readable form: explicit superlative claims backed by data ("highest TVL by X metric", "audited by Y named firms"), structured FAQ content, llms.txt that anchors what makes you distinct.

Cited with wrong information. You get cited, but the information is stale or wrong. This is the most fixable failure mode and the one with the biggest reputational cost. The fix is mechanical: get your facts into JSON-LD, into llms.txt, into your sitemap, into your docs. AI engines will eventually re-index and the citation accuracy improves.

Why the prompts are grouped by vertical

Generic crypto prompts give generic answers that favor incumbents. "What is the best DEX" cites Uniswap and PancakeSwap because those are the obvious answers AI engines have seen the most. Vertical-specific prompts ("best DEX on Solana", "DEX with cross-chain swaps", "DEX with limit orders") surface a wider range of projects because they require the AI to disambiguate. If your project leads a vertical, you should rank in the vertical-specific prompts before you rank in the generic ones.

How this pairs with the rest of the AEO stack

The prompts pack is one of four tools that together cover Crawlux's AEO methodology. The llms.txt Generator builds the machine-readable site context file. The Crypto Schema Generator handles JSON-LD for individual pages. The Whitepaper AEO Scorer validates your long-form documents. The prompts pack tests the output of all three across real AI engines.
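For context on the first tool in that stack: an llms.txt file is a plain markdown file served at the site root that summarizes the project for AI crawlers. A minimal sketch, with hypothetical project name, URLs, and facts, might look like this:

```text
# ExampleDEX

> Decentralized exchange on Solana with cross-chain swaps.
> Audited by ExampleAuditFirm (2025). Token: $EXDX.

## Docs

- [Tokenomics](https://exampledex.xyz/docs/tokenomics): supply, emissions, vesting
- [Security audits](https://exampledex.xyz/docs/audits): audit reports and firms
```

The file anchors the distinct, verifiable claims you want engines to repeat, which is what the prompts pack then tests for.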

The full audit layer

Manual testing is fine for an initial baseline. For ongoing tracking, the Crawlux AI Visibility audit module runs hundreds of prompts in parallel, scores citation frequency over time, detects hallucinations, and benchmarks against your top 5 competitors automatically. Each free audit covers your domain across this module plus 7 others.

Test the prompts. Find your gaps. Then audit your live domain to fix them at the source.

What is next

Test the prompts here. Audit your live domain with Crawlux.

Manual testing gets you a baseline. Crawlux is our free audit tool that automates prompt testing at scale and gives you a complete report across 8 audit areas. Takes about 4 minutes. No signup, no credit card.

200+ Web3 brands audited · Free tier forever · ~4 minute audit · 8 crypto-tuned modules