NEW: World's first AI visibility audit tool for Web3 is live. Run free audit →
Free tool · No signup · Crypto-aware presets

llms.txt generator for crypto projects. AI-readable site context in 90 seconds.

Build the llms.txt file ChatGPT, Claude, Perplexity and Gemini use to understand your project. Pre-loaded with crypto-aware sections for tokenomics, audit firms, contract addresses and authority sameAs URLs. Generates both llms.txt index and llms-full.txt long-form. Built to pair with the Crawlux AI Visibility audit module.

Free · No signup · 8 crypto project presets

// The generator

Pick your project type. Fill the fields. Copy the output.

Crypto-native templates. Validation runs live. Output drops in at /llms.txt and /llms-full.txt on your domain.

// Project type

// Project basics

Plain English. What you do, in one sentence.

// Authority URLs

// Crypto-specific

Audit firms that have reviewed your contracts. Named firms = stronger AEO signal.

Where to upload: Save as llms.txt at the root of your domain (e.g. https://yourproject.com/llms.txt). Same location pattern as robots.txt.

Want the full picture?

Run a free Crawlux audit of your live domain

llms.txt is one signal. Crawlux is our free audit tool — it scans your full domain and gives you a complete report measuring actual AI citation rate across ChatGPT, Claude, Perplexity and Gemini, plus 7 other Web3-tuned audit areas. Takes about 4 minutes. No signup, no credit card.

200+ Web3 brands audited · No credit card · No setup

// How it works

Three steps. About 90 seconds end to end.

No signup. No data leaves your browser. Output is a plain text file.

01

Pick your project type

DEX, wallet, L1, L2, DeFi protocol, NFT marketplace, RWA tokenization, or generic Web3. Each preset loads crypto-aware sections specific to that project type with the right contract address fields, audit firm prompts and authority URL slots.

02

Fill in your project details

Project name, summary, primary chain, contract address, audit firms, key URLs (docs, GitHub, CoinGecko, Etherscan, whitepaper, tokenomics page). The generator structures all of this in valid llms.txt markdown format.
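As a rough sketch, the generated index might look like the following. ExampleSwap, its URLs and the zero contract address are all placeholders, not real output; the shape follows the llms.txt proposal: an H1 project name, a blockquote summary, then H2 sections of annotated links.

```markdown
# ExampleSwap

> ExampleSwap is a decentralized exchange on Ethereum for swapping ERC-20 tokens.

- Primary chain: Ethereum
- Contract: 0x0000000000000000000000000000000000000000
- Audits: Trail of Bits (2025), OpenZeppelin (2025)

## Docs
- [Getting started](https://docs.exampleswap.xyz/start): Integration guide
- [Tokenomics](https://exampleswap.xyz/tokenomics): Supply schedule and emissions

## Authority
- [CoinGecko](https://www.coingecko.com/en/coins/exampleswap)
- [GitHub](https://github.com/exampleswap)
- [Etherscan](https://etherscan.io/address/0x0000000000000000000000000000000000000000)
```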

03

Copy or download both files

Get llms.txt (index) and llms-full.txt (long-form with content inline). Upload to the root of your domain. AI engines like ChatGPT, Claude, Perplexity and Gemini will use these to understand your project context when answering user questions.

// Why llms.txt matters now

AI engines decide who gets cited. llms.txt is part of how.

Six things this file does for a crypto project that nothing else covers.

AI-readable site context

llms.txt is the semantic answer to "what is this site about?" written for LLMs, not search engines. ChatGPT, Claude, Perplexity and Gemini increasingly use it when summarizing or recommending projects. Without it, AI engines rely on web crawl + their own inference, which is fuzzier and less reliable.

Disambiguation for confusable names

If your project shares a name with another crypto project or a non-crypto entity, llms.txt is where you disambiguate. Anchor the summary, the blockchain, the contract address. AI engines that read it will not confuse your AAVE with another AAVE.

Authority URL declaration

sameAs URLs to CoinGecko, Etherscan, your GitHub and your docs site, all declared in one place. AI engines cross-reference these to verify your project is real and to extract structured facts (audit firm names, total supply, contract address). Higher density of authority sameAs = higher citation rate.

Two-file pattern (index + long-form)

llms.txt is the high-level index. llms-full.txt is the long-form version that embeds the actual content inline so AI engines can read it without crawling. For crypto projects with technical docs, both files together cover most AI engine query patterns. The generator outputs both at the same time.

Crypto-specific sections

Generic llms.txt generators do not know about contract addresses, audit firms, tokenomics or whitepapers. The Crawlux generator templates handle all of this. The output reads like a project that knows it lives in Web3, not a marketing brochure that happens to mention blockchain.

Pairs with the AI Visibility audit

llms.txt is one signal. The Crawlux AI Visibility audit module measures citation frequency across ChatGPT, Perplexity and Claude using real prompts. Generate the file here, ship it, then run the audit to see how your citation rate moves over the following weeks.

// Common questions

Common questions about llms.txt for crypto

Patterns from crypto founders, dev teams and TG3 client onboarding calls.

What is llms.txt and why does my crypto project need one?

llms.txt is a markdown-based file at the root of your domain (yourdomain.com/llms.txt) that tells AI engines what your site is about, what is important on it and how to interpret your content. Similar idea to robots.txt for search crawlers, but specifically for LLMs. For crypto projects in 2026 it matters because ChatGPT, Claude, Perplexity and Gemini increasingly use these files to decide how to cite projects in answers.
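If you want to sanity-check a file by hand, the basic shape is easy to verify. A minimal sketch in Python, assuming only the core conventions of the proposal (H1 title first, a `>` blockquote summary, at least one H2 section):

```python
def looks_like_llms_txt(text: str) -> bool:
    """Rough structural check for the llms.txt shape:
    an H1 title on the first non-blank line, a blockquote
    summary, and at least one H2 section."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("# "):
        return False  # must open with an H1 project name
    has_summary = any(ln.startswith("> ") for ln in lines)
    has_section = any(ln.startswith("## ") for ln in lines)
    return has_summary and has_section

sample = """# ExampleSwap

> A hypothetical DEX on Ethereum.

## Docs
- [Start](https://docs.exampleswap.xyz/start)
"""
print(looks_like_llms_txt(sample))  # True
```

This is a loose heuristic, not a validator for the full proposed spec.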

Is llms.txt an official standard?

llms.txt is a proposed standard introduced by Jeremy Howard of Answer.AI. It is not a formal W3C or IETF standard, but adoption is accelerating across documentation sites, AI infrastructure projects and content-heavy sites. Major LLM providers have indicated they will respect it. For crypto projects, deploying it early is a first-mover advantage on AI citation.

How is llms.txt different from robots.txt?

robots.txt tells crawlers what they can and cannot crawl. llms.txt tells AI engines what your site is about and what is important. They serve different purposes and should coexist. You still need a robots.txt that allows GPTBot, ClaudeBot, PerplexityBot and Google-Extended (the Web3 Robots.txt Checker validates this). llms.txt sits on top of that as the semantic layer.
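A minimal robots.txt that allows the four AI crawlers named above might look like this (bot names are current as of writing and can change; check each provider's docs before shipping):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml
```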

Where do I put the file on my server?

Upload to the root of your domain so it is accessible at yourdomain.com/llms.txt. Same location pattern as robots.txt, sitemap.xml and favicon. Use the same path for all subdomains if you want them included, or generate separate files for app.yourdomain.com if the app context is different from the marketing site.


Should I generate llms.txt or llms-full.txt?

Both. llms.txt is the index file with high-level sections and links to detail. llms-full.txt is the long-form version that includes the actual page content inline so AI engines can read it without crawling. For crypto projects with technical docs, both is the right answer. The generator outputs both at the same time.
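A hedged sketch of how the two files relate, using the same hypothetical ExampleSwap project (the fee figure is invented for illustration): the index links out, while llms-full.txt inlines the content those links point to.

```markdown
<!-- llms.txt: the index, links out to detail -->
# ExampleSwap
> A hypothetical DEX on Ethereum.
## Docs
- [Fees](https://docs.exampleswap.xyz/fees): How swap fees are set

<!-- llms-full.txt: same sections, content inline -->
# ExampleSwap
> A hypothetical DEX on Ethereum.
## Fees
Swap fees are 0.30% per trade, with 0.05% routed to the treasury.
```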

What sections should a crypto project llms.txt include?

At minimum: project summary, primary blockchain and contract addresses, audit firms with audit URLs, docs site, GitHub repo, CoinGecko and Etherscan links, tokenomics page, whitepaper, key API endpoints if you have a dApp, social handles. The generator templates handle all of this based on project type.

Will llms.txt help my project get cited by ChatGPT?

It is one of multiple signals. The bigger factors are: schema markup that AI engines can parse, FAQ-shaped content, authority sameAs URLs to CoinGecko and Etherscan, and consistent factual claims across your site. llms.txt makes the project-summary picture clearer for AI engines, especially for projects with sprawling documentation. The Crawlux AI Visibility audit module scores all of these signals together against your live domain.

Does llms.txt expose information I should keep private?

No, llms.txt only references public information you already publish on your site. Do not include API keys, private endpoints or unreleased product details. Treat it the same way you treat your sitemap.xml or robots.txt: as a public-facing roadmap of your public content.

// The shift

Why llms.txt is the underweighted AEO lever in 2026

The thing that broke for crypto SEO in 2025 was the assumption that Google traffic is the only thing that matters. By Q4 of 2025, the data was unambiguous: ChatGPT, Perplexity and Claude collectively send more crypto-research traffic to projects than Google does for top-funnel queries like "best DeFi protocol" or "is X token safe". Google still owns the click for transactional intent (swap on, buy at). The research happens upstream now, in AI chat.

llms.txt did not exist in any meaningful adoption form before 2024. By mid-2026 it is one of the few mechanical levers that distinguishes projects that get cited from projects that do not. The math is straightforward: if your llms.txt is missing, AI engines fall back to web crawling and probabilistic inference. If it exists and is well-structured, AI engines treat your declared facts as canonical.

What the file actually changes for crypto

Three things measurably move when crypto projects deploy llms.txt:

Disambiguation rate. AI engines stop confusing your project with similarly-named projects on different chains. The contract address and blockchain declaration in llms.txt is the deciding signal.

Audit firm citation. Naming Trail of Bits, OpenZeppelin, Spearbit and Halborn in machine-readable form means AI engines reproduce these names verbatim when asked about your security posture. Generic crawl of your audit page often misses the firm names.

Authority URL traversal. AI engines follow sameAs links to CoinGecko, Etherscan and your GitHub to verify your declared facts. The denser your authority URL set in llms.txt, the higher the verification rate, the higher the citation rate.

The two-file pattern matters more than founders realize

llms.txt is the index. llms-full.txt is the inline-content version. The argument for shipping both: AI engines crawl differently. Some respect rate limits and follow links from llms.txt to your docs. Others want everything in one file they can ingest in one request. Shipping both covers both behaviors. The generator produces both at the same time precisely because the marginal cost of the second file is zero.

How this pairs with the rest of the AEO stack

llms.txt is one signal. The AI Visibility audit module measures citation frequency end to end. The Crypto Schema Generator handles JSON-LD for individual pages. The Whitepaper AEO Scorer validates your long-form documents. The Web3 Robots.txt Checker confirms AI bots can crawl what they need to. Each tool is narrow. Use them together for full AEO coverage.

The compounding effect

Projects that ship llms.txt in Q1 of 2026 will compound their AI citation rate over Q2 and Q3 as AI engines learn to trust their declared facts. Projects that wait until 2027 are starting from zero against competitors who already have 6 to 9 months of accumulated trust signals. Same dynamic that played out with schema markup in 2014 and Featured Snippets in 2017. Early movers won, late movers played catch-up.

Generate the file here. Upload it. Then audit your live domain to see how the rest of your AEO posture stacks up.

What is next

Generate llms.txt here. Audit your live domain with Crawlux.

Generated your file? Crawlux is our free audit tool: it scans your full domain and measures whether the file actually moves your AI citation rate, plus 7 other audit areas. Takes about 4 minutes. No signup, no credit card.

200+ Web3 brands audited · Free tier forever · ~4 minute audit · 8 crypto-tuned modules