Free tool · No signup · Score across 8 dimensions

Whitepaper AEO Scorer. Will ChatGPT cite your whitepaper?

Paste your whitepaper. Get a 0 to 100 AI citation readiness score across 8 dimensions in 3 seconds. The first scorer purpose-built for crypto whitepapers, mapped to how ChatGPT, Perplexity and Claude actually pick documents to cite when crypto users ask questions.

Free · No signup · Works in your browser

// The tool

Paste whitepaper text. Score updates in 3 seconds.

Works with Markdown, plain text or HTML. Minimum 100 characters. Optional URL fetch via CORS proxy (best effort, PDFs not supported).

Your score will appear here

Paste a whitepaper on the left and click Score now. We score across 8 dimensions and return a verdict in under 3 seconds.

Want the full picture?

Run a free Crawlux audit of your live domain

The scorer reviews one document. Crawlux is our free audit tool — it scans your full domain and gives you a complete report covering schema, AI visibility, technical SEO, backlink quality and 4 more areas. Takes about 4 minutes. No signup, no credit card.

200+ Web3 brands audited · No credit card · No setup

// How it works

Three steps. Score in under 3 seconds.

Runs entirely in your browser. Text is never sent to a server.

01

Paste the whitepaper

Copy-paste the full text from your whitepaper. Works with Markdown, plain text or HTML. Minimum 100 characters. The tool also accepts URL input (best effort, many sites block CORS proxies).

02

See the radar score

The 0 to 100 overall score appears with a radar chart across 8 dimensions: heading structure, definition density, citations, named entities, FAQ structure, factual density, AI readability, structural markers. Tiers from Poor (under 40) to Excellent (80 plus).

03

Apply the recommendations

Each sub-dimension below threshold gets a specific recommendation with the exact fix. Download the report as Markdown, export the radar chart as PNG, share with the team.

// What the score measures

8 dimensions of whitepaper AI citation readiness

Each dimension is derived from observed patterns in how ChatGPT, Perplexity and Claude pick documents to cite when crypto users ask about projects.

Heading structure (max 15)

One clear H1. Three to eight H2 sections. H3 sub-sections where the structure justifies them. AI models chunk documents by heading hierarchy. A missing H1, deep nesting without parent H2s and inconsistent heading levels all degrade citation likelihood. The same H1/H2/H3 rules apply on your live domain — the Technical SEO audit covers heading hierarchy across all pages.
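The heading rules are mechanical enough to sketch in a few lines. This is a hypothetical TypeScript check, not the tool's actual source; the one-H1 and three-to-eight-H2 thresholds come from the description above.

```typescript
interface HeadingReport {
  h1Count: number;
  h2Count: number;
  passes: boolean;
}

// Count Markdown ATX headings and apply the stated rules:
// exactly one H1, between three and eight H2 sections.
function checkHeadings(markdown: string): HeadingReport {
  let h1Count = 0;
  let h2Count = 0;
  for (const line of markdown.split("\n")) {
    if (/^# \S/.test(line)) h1Count++;        // "# Title"
    else if (/^## \S/.test(line)) h2Count++;  // "## Section"
  }
  return {
    h1Count,
    h2Count,
    passes: h1Count === 1 && h2Count >= 3 && h2Count <= 8,
  };
}
```

A document with no H1 at all, or a dozen top-level H2s, fails this check even before any content is read.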

Definition density (max 10)

Explicit definitions in the form "X is a Y" or "X means Y" or "X refers to Y". AI citation engines look for definitions to anchor concept understanding. Define your project, your token, and 4 to 8 key technical terms explicitly within the first 25 percent of the document.
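A definition detector in this spirit fits in a single regex. Hypothetical TypeScript sketch, assuming defined terms start with a capital letter; the tool's real patterns may differ.

```typescript
// Count explicit definitions of the form "X is a/an Y", "X means Y",
// or "X refers to Y", where X is a capitalized term.
function countDefinitions(text: string): number {
  const pattern =
    /\b[A-Z][A-Za-z0-9$-]*(?: [A-Z][A-Za-z0-9$-]*)*\s+(?:is an? |means |refers to )\w+/g;
  return (text.match(pattern) ?? []).length;
}
```

"Aave is a lending protocol" counts; "we built a fast chain" does not, because nothing is explicitly defined.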

Citation count (max 10)

References to other works, audits, EIPs, research papers. Bracketed citations like [1], URLs, "according to X" phrasing all count. Crypto whitepapers that cite their audit firms by name, the standards they implement (ERC-20, EIP-1559) and the protocols they build on rank dramatically higher in AI citation engines.
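The three citation signals named above can be counted independently and summed. A hypothetical sketch, not the scorer's source:

```typescript
// Count three citation signals: bracketed references like [1],
// raw URLs, and "according to X" attributions.
function countCitations(text: string): number {
  const bracketed = text.match(/\[\d+\]/g) ?? [];
  const urls = text.match(/https?:\/\/\S+/g) ?? [];
  const attributions = text.match(/\b[Aa]ccording to\b/g) ?? [];
  return bracketed.length + urls.length + attributions.length;
}
```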

Named entity richness (max 15)

Founders by name, audit firms by name (CertiK, Trail of Bits, Spearbit), partner protocols, exchanges. The tool ships with a list of 90+ known crypto entities. Documents that name specific entities get cited far more often than documents that hand-wave generic claims. The Crypto Schema Generator builds Organization JSON-LD with these same sameAs entity links for your live site.
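Entity detection against a known list is a simple membership scan. The eight-entry list below is a tiny stand-in; the real tool ships 90+ entities.

```typescript
// Tiny stand-in list; the real tool ships 90+ known crypto entities.
const KNOWN_ENTITIES = [
  "Aave", "Compound", "Uniswap", "Chainlink",
  "CertiK", "Trail of Bits", "Spearbit", "EigenLayer",
];

// Return the known entities the document names explicitly.
function findEntities(text: string): string[] {
  return KNOWN_ENTITIES.filter((entity) => text.includes(entity));
}
```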

FAQ structure (max 10)

Q&A markers (Q:, A:) and question-shaped headings (How does X work, What is X, Why does X matter). AI engines love FAQ-structured content because it maps directly to user queries. Including a dedicated FAQ section near the end of the whitepaper is the single highest-leverage AEO improvement.
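Both signal types described above — Q:/A: markers and question-shaped headings — can be counted line by line. A hypothetical sketch; the interrogative word list is an assumption:

```typescript
// Count FAQ signals: "Q:" / "A:" markers and question-shaped headings.
function countFaqSignals(text: string): number {
  let count = 0;
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (/^(Q:|A:)/.test(line)) count++;
    else if (/^#{1,6}\s+(How|What|Why|When|Where|Who|Can|Does|Is)\b/.test(line)) {
      count++;
    }
  }
  return count;
}
```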

Factual density (max 15)

Numbers, percentages, dollar amounts, dates and years per 1,000 words. AI models cite documents with specific quantitative claims over documents that gesture vaguely. Total Value Locked figures, audit dates, supply numbers, transaction counts and geographic coverage statistics all contribute.
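A per-1,000-words density metric is straightforward to compute. Hypothetical sketch; the fact regex is an assumption covering numbers, dollar amounts and percentages:

```typescript
// Facts per 1,000 words: numbers, percentages, dollar amounts, years.
function factualDensity(text: string): number {
  const words = text.split(/\s+/).filter(Boolean).length;
  if (words === 0) return 0;
  const facts = text.match(/\$?\d[\d,.]*%?/g) ?? [];
  return (facts.length / words) * 1000;
}
```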

AI readability (max 15)

Average sentence length (target 15 to 22 words), average paragraph length (target 3 to 5 sentences), Flesch reading ease (target 50 to 70). Sentences over 30 words and paragraphs over 5 sentences depress citation likelihood because AI chunking algorithms struggle to extract clean passages.
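Average sentence length, the first of the three readability targets, can be sketched as follows (Flesch is omitted here because it needs syllable counting). Hypothetical TypeScript, not the tool's source:

```typescript
// Average words per sentence; the stated target band is 15 to 22.
function averageSentenceLength(text: string): number {
  const sentences = text
    .split(/[.!?]+/)
    .map((s) => s.trim())
    .filter(Boolean);
  if (sentences.length === 0) return 0;
  const totalWords = sentences.reduce(
    (sum, s) => sum + s.split(/\s+/).filter(Boolean).length,
    0,
  );
  return totalWords / sentences.length;
}
```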

Structural markers (max 10)

Table of Contents at the top, bullet and numbered lists, code blocks, tables. Each helps AI parsers chunk content into citable segments. PDFs that are scanned images or screenshots fail entirely. Markdown source with tables and lists scores highest.
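Presence of each marker type can be scored one point apiece. A hypothetical sketch of such a check, assuming Markdown input:

```typescript
// Detect four structural markers: bullet lists, numbered lists,
// fenced code blocks, and pipe tables. One point per marker present.
// Build the fence string from char codes to avoid writing it literally.
const CODE_FENCE = String.fromCharCode(96).repeat(3); // three backticks

function countStructuralMarkers(markdown: string): number {
  const lines = markdown.split("\n").map((l) => l.trim());
  let score = 0;
  if (lines.some((l) => /^[-*] /.test(l))) score++;
  if (lines.some((l) => /^\d+\. /.test(l))) score++;
  if (lines.some((l) => l.startsWith(CODE_FENCE))) score++;
  if (lines.some((l) => /^\|.+\|$/.test(l))) score++;
  return score;
}
```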

How this connects to your live domain

The whitepaper score is one document. The AI Visibility audit module tests how the AI engines (ChatGPT, Perplexity, Claude) actually cite your live domain across multiple prompts and dimensions. The whitepaper score is the source document quality. The audit is the citation outcome. Both matter.

// Common questions

Common questions about whitepaper AEO

Patterns from founder DMs, marketing-team office hours and TG3 client onboarding calls.

What is AEO and how is it different from SEO?

AEO is Answer Engine Optimization. SEO optimizes for search engines like Google. AEO optimizes for AI engines like ChatGPT, Perplexity and Claude. The two overlap but are not identical. SEO weights backlinks, page speed and keyword targeting. AEO weights structured data, entity disambiguation, FAQ density and citation-friendly content patterns. The AI Visibility audit module measures AEO performance across multiple AI engines.

How does this tool decide what scores high?

The scoring rules are derived from observed citation patterns across ChatGPT, Perplexity and Claude on crypto queries. Documents with clear heading hierarchy, explicit definitions, named crypto entities, FAQ structure and high factual density consistently get cited more often than documents lacking these. The score is a heuristic, not an oracle. A 90 score does not guarantee citation. A 30 score almost guarantees no citation.

Can the tool fetch my whitepaper from a URL?

It attempts to fetch it via a public CORS proxy. Many sites (especially those behind Cloudflare) block this. PDFs cannot be parsed client-side. The reliable path is to paste the text directly. We may add backend URL fetching in v2.

My whitepaper is a PDF. Will this work?

Not directly. Copy the text from your PDF and paste it into the textarea. The text-only version of your whitepaper is what AI models can actually read anyway, so if you can copy-paste it, that is your AEO surface. If your whitepaper is a scanned image PDF (no text layer), AI models cannot read it at all and your AEO score is functionally zero.

Should I include an FAQ section in my whitepaper?

Yes. This is the highest-leverage single change. A dedicated FAQ section with 5 to 12 question-shaped headings near the end of the whitepaper boosts every AI citation engine simultaneously. Wrap your FAQ in FAQPage JSON-LD too (use the Crypto Schema Generator) so the same FAQ surfaces in Google rich results.

How long should my whitepaper be?

Modern crypto whitepapers run 8 to 25 pages or 3000 to 8000 words. Longer than that, AI engines tend to chunk and cite only specific sections, so structural markers (headings, TOC, bullet lists) become more important. Shorter than 1500 words, the document does not have enough factual density to be cited reliably.

What is the difference between this and a generic readability score?

Generic readability tools (Flesch, Gunning-Fog) only measure sentence and word complexity. This scorer also measures crypto-specific entity richness, audit firm naming, citation count, FAQ structure and AI parsing markers. Generic readability is a subset of one dimension (AI readability, max 15 points) in the full 100-point score.

Does scoring my whitepaper here actually improve AI citations?

The tool diagnoses. The improvements (rewriting sections, adding FAQs, naming entities, restructuring headings) require effort on your part. After you ship the improved whitepaper to your domain, run the AI Visibility audit on your live site to see the actual citation frequency change. Most projects see measurable AEO improvement within 2 to 6 weeks of fixing whitepaper structure, once AI engines re-crawl.

// The citation gap

Some whitepapers get cited by ChatGPT. Most do not. The pattern is mechanical.

The most underweighted shift in crypto marketing over the last 18 months is that AI engines have become the first stop for project research. A founder hears about a protocol, opens ChatGPT, and asks "what is X protocol and how does it differ from Aave". The AI engine cites three to five sources. The projects in those citations get attention. The projects not cited do not exist for that user.

The citation game is not a popularity contest. It is a parsability contest. AI engines cite documents that they can chunk cleanly, extract entities from, and quote in support of an answer. Whitepapers that fail the parsability test simply do not get cited, no matter how good the protocol behind them is.

What AI engines actually need from a whitepaper

Clear heading hierarchy because chunking algorithms split documents on headings. Explicit definitions because entity-disambiguation algorithms need anchors. Named entities because cross-reference graphs need nodes. Specific numbers because factual claims need grounding. FAQ structure because question-answer pairs map directly to user queries. Each of these is mechanical and fixable.

The crypto-specific layer

Generic readability scoring (Flesch, Gunning-Fog) misses everything that matters for crypto-specific AI citation. Naming your audit firm (Trail of Bits, OpenZeppelin, Halborn) is far more important than your average sentence length. Citing EIP-1559 by number on your tokenomics page is more important than your subject-verb agreement.

The Whitepaper AEO Scorer ships with 90+ known crypto entities baked into its named-entity detection: Aave, Compound, Uniswap, Chainlink, MakerDAO, EigenLayer, the audit firms, the standards, the data sources. Documents that name these entities explicitly get higher citation scores because AI engines use entity graphs to verify document authority.

How the score maps to actual citation outcomes

Documents that score 80 plus tend to be cited reliably in their topic area. Documents in the 60 to 79 band are cited sometimes, usually when the user's query is specific enough to make the document the obvious source. Documents under 60 are cited rarely, if at all. The correlation is not perfect, because backlinks and domain authority also matter. But the whitepaper structure is the floor you set on your own AEO performance.

The fixes are mechanical

Adding 8 question-shaped headings to a dedicated FAQ section moves the score by roughly 8 to 10 points. Restructuring deep heading nesting into clean H1/H2/H3 hierarchy moves it by 5 to 10 points. Naming the audit firms and partner protocols by name moves it by 4 to 8 points. Each of these takes a writer one to two hours. None require new product work.

The downstream effect on your domain AEO

The whitepaper is one citable document. The full domain has many. The AI Visibility audit module tests how AI engines cite your live domain against your competitors across 12 standard crypto prompts. The whitepaper structure is one input. The structured data on your live pages is another. A robots.txt that allows AI bots is another (use the Web3 robots.txt checker to verify). The sameAs URLs on your Organization schema are another (use the Crypto Schema Generator).

The score on this page is your first leverage point. Run it, fix the low scores, re-run it. Get to 80 plus before you publish the next version of your whitepaper. Then run the full AI Visibility audit on the live domain to verify the citation frequency lift. The work compounds.

What is next

Score one whitepaper here. Audit your whole domain with Crawlux.

Whitepaper covered. Crawlux is our free audit tool that scans your full domain and gives you a complete report covering 8 audit areas including schema, AI visibility and technical SEO. Takes about 4 minutes. No signup, no credit card.

200+ Web3 brands audited · Free tier forever · ~4 minute audit · 8 crypto-tuned modules