Inside Crawlux week-one GA: 56 audits, 4.83/5 satisfaction and what changed from beta
Detailed analysis of the first 56 GA audits: the vertical mix, the satisfaction breakdown, the top fixes that held from beta, two new failure modes spotted and the feature requests now shaping the Q2 roadmap.
The 56-audit cohort
Between April 13 and April 20, 2026, Crawlux Pro ran 56 new audits as part of the GA launch week. Vertical mix: DeFi 18 audits (32%), staking and restaking 9 (16%), NFT marketplaces and game-fi 8 (14%), wallets 7 (12%), infrastructure 6 (11%), stablecoins 4 (7%), layer-1 and layer-2 chains 4 (7%). The mix matches the broader crypto SEO market with slight overweighting toward DeFi.
Combined with the 47 beta audits delivered between February 24 and April 13, the GA launch brought the cumulative total to 103 audits in week one. The jump from 47 to 103 in a single week reflects pent-up demand from teams that had waited for GA pricing rather than committing at alpha-period rates. The companion week-one GA report covers the headline numbers.
Pricing tier mix: 41 audits at the $25 Pro tier (73%), 12 at the $49 Team tier (21%) and 3 enterprise inquiries that required custom scoping (5%). The Team tier surprised on the upside; pre-launch projections expected sub-10% Team adoption. Multi-domain audits and white-label PDF exports drove the larger-than-expected uptake from agency buyers.
The 4.83 out of 5 satisfaction breakdown
Each audit completion triggers a 3-question post-audit survey. Question 1 covers report clarity (was the audit report easy to read and act on). Question 2 covers actionability (could you ship the fixes within 30 days). Question 3 covers fix-list specificity (were the recommendations specific enough to implement without follow-up clarification). Responses on a 1 to 5 scale.
Response rate during week one: 77% (43 of the 56 audits returned the survey). Average scores across the three dimensions: 4.83 on clarity, 4.91 on actionability, 4.72 on specificity. The specificity gap (4.72, against 4.91 on actionability) is the most important data point. It indicates that teams can act on the audit but sometimes need to ask follow-up questions to fully understand what to ship.
Three methodology refinements are planned for Q2 to close the specificity gap. First: per-finding example code snippets where applicable. Second: links to the specific documentation sections involved (schema.org type pages, CDN provider docs) instead of generic descriptions. Third: per-finding effort estimates so teams can scope the work before committing.
Top findings that held from beta
The beta-cohort top-3 fix list replicated cleanly in GA. Fix 1: token schema migration from Product to FinancialProduct. GA cohort median impact: 11.4% AI citation rate improvement within 30 days, matching beta's 11.7%. Fix 2: robots.txt update to allow AI bots. GA cohort median impact: 4.0 points of AEO score lift within 14 days, matching beta's 4.1.
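The robots.txt side of fix 2 amounts to a short allowlist. A minimal sketch using the vendors' published crawler user-agent tokens; the exact set of bots a site allows is a policy choice, not a Crawlux prescription:

```
# Allow the major AI crawlers by their published user-agent tokens
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```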
Fix 3: audit firm citations with linked reports. GA cohort median impact: 3.0 points of AEO score lift within 30 days, slightly below beta's 3.2 but within statistical noise. The replication across a different cohort (different vertical mix, different size distribution, different geography) suggests the audit methodology generalizes beyond beta-cohort selection bias.
The implementation discipline finding also replicated. Sites that shipped the top 5 recommendations within 14 days saw median AEO score lifts of 16.2 points. Sites that shipped 1-2 recommendations saw 7.8 points. The audit identifies the fix list; ship rate determines the outcome.
Two new failure modes spotted in GA
New failure mode 1: middleware-based bot intercepts on Next.js sites. 7 of the 56 GA-cohort sites had middleware.ts files at project root that intercepted AI bot user-agent strings for "performance" reasons (bot rate limiting, geo-based redirects). The robots.txt allowed the bots; the middleware blocked them anyway. The Web3 Robots.txt Checker did not catch this in beta because the beta cohort skewed away from Next.js sites. Detection has now been added to the AI Visibility audit module.
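The fix on the site side is to exempt AI crawler user-agents from the intercept. A minimal TypeScript sketch of the allowlist check; the helper name, the token list's completeness and the commented middleware wiring are illustrative assumptions, not Crawlux's detection code:

```typescript
// Substrings of the published user-agent tokens for major AI crawlers.
const AI_BOT_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot"];

// Returns true when the request comes from an allowlisted AI crawler.
export function isAllowedAIBot(userAgent: string): boolean {
  return AI_BOT_TOKENS.some((token) => userAgent.includes(token));
}

// In a Next.js middleware.ts this check would sit before the rate limiter,
// e.g. (sketch only; rateLimit is a hypothetical name):
//
// export function middleware(req: NextRequest) {
//   const ua = req.headers.get("user-agent") ?? "";
//   if (isAllowedAIBot(ua)) {
//     return NextResponse.next(); // let AI crawlers through
//   }
//   return rateLimit(req);
// }
```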
New failure mode 2: GraphQL endpoints serving structured data that AI engines cannot parse. 4 of the 56 GA-cohort sites served their token data via GraphQL with no parallel HTML/JSON-LD representation. AI engines parse the HTML page first; if the structured data lives only in a GraphQL response that requires authentication or query construction, the data is invisible. The fix: dual-publish to JSON-LD embedded in the HTML page in addition to GraphQL availability. Detection now added to the Schema Audit module.
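The dual-publish fix embeds the same data the GraphQL endpoint serves as JSON-LD in the rendered HTML. A minimal sketch with hypothetical token and domain names; FinancialProduct is the schema.org type named in the fix list above:

```html
<!-- Same data the GraphQL API serves, dual-published as JSON-LD
     so AI crawlers can read it straight from the HTML page. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FinancialProduct",
  "name": "Example Token",
  "url": "https://example.com/token",
  "provider": {
    "@type": "Organization",
    "name": "Example Protocol"
  }
}
</script>
```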
Feature request prioritization
Three top feature requests surfaced in week one. First: bulk multi-domain audits for agencies managing multiple client portfolios in one workspace. Second: Slack integration that pings on audit completion with score and top findings as a thread starter. Third: automatic re-audit triggers when site changes (sitemap updates, schema modifications, robots.txt drift).
Prioritization methodology: each feature is scored on user demand (weighted by tier; Team tier requests count 2x Pro tier requests because Team users are the heaviest workflow integrators), engineering effort (story points across known dependencies) and methodology fit (does the feature serve the open-methodology vision or pull toward platform lock-in).
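The scoring above can be sketched as a small function. The 2x Team-tier weighting comes from the text; the combining formula (tier-weighted demand times methodology fit, divided by effort) is an assumption for illustration, not the published model:

```typescript
interface FeatureRequest {
  proVotes: number;       // requests from Pro-tier users
  teamVotes: number;      // Team-tier requests count 2x Pro-tier requests
  effortPoints: number;   // engineering story points
  methodologyFit: number; // 0..1, where 1 = full open-methodology fit
}

// Higher demand and fit raise the score; effort divides it.
export function priorityScore(f: FeatureRequest): number {
  const demand = f.proVotes + 2 * f.teamVotes;
  return (demand * f.methodologyFit) / Math.max(f.effortPoints, 1);
}
```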
Bulk multi-domain audits ranks first (high demand, moderate effort, full methodology fit). Slack integration ranks second (moderate demand, low effort, full methodology fit). Auto re-audit triggers ranks third (high demand, high effort because of webhook system requirements, full methodology fit). The first two ship by end of April. The third lands in mid-Q2 2026.
Q2 roadmap impact
The v2 methodology release at end of Q2 picks up two of the new GA-cohort findings. Crypto Geo-Regulatory Targeting (hreflang and jurisdictional content fit) addresses the multi-language ranking gap surfaced by 9 of the 56 GA-cohort sites operating across Mandarin-English markets. Multi-Chain Documentation Indexability addresses the GraphQL-only data publishing pattern by validating that documentation pages have crawlable HTML representations.
On-Chain Transfer Velocity, the third v2 module, was already in development before GA. It adds token transfer patterns as a topical authority signal weighting input. The module addresses the case where two protocols have similar surface signals but different actual usage; on-chain velocity differentiates them in the authority scoring.
Q3 2026 adds the Crypto Schema Library covering 14 schema types beyond FinancialProduct. Both v2 and Q3 releases publish open documentation at crawlux.com/blog/crawlux-methodology before going live in the platform, in line with the open-methodology commitment covered in the methodology publication press release.
Take
Beta findings replicated in GA cleanly. Token schema migration is still the highest-impact fix. Robots.txt for AI bots is still second. The audit methodology generalizes beyond the beta cohort.
About Crawlux
Crawlux is the world's first automated SEO audit tool built for Web3, DeFi and blockchain. The platform runs 23 analyzers across 6 check groups including AI visibility testing across ChatGPT, Perplexity and Claude. Free tier available. Paid tiers from $25 per audit. More at crawlux.com.
Frequently asked questions
Does the 4.83/5 score include unhappy customers who did not respond?
The score reflects 43 of 56 surveyed audits. The 13 non-respondents are unknown sentiment. Industry norms suggest non-respondents skew slightly negative; even adjusting for that, the underlying satisfaction is high.
How did Crawlux verify the survey responses are not gamed?
Surveys are completed within the customer dashboard after audit delivery. The system rate-limits to one survey per audit per account. Anonymous bulk responses are not possible. The methodology is documented internally and available on request.
When do the GA-cohort findings get incorporated into the methodology?
New failure modes are added to relevant audit modules within 14 days of detection. The middleware bot-intercept check and the GraphQL-only data check both shipped to production audits before this post was published.
Will alpha participants get a price discount at GA?
Alpha participants who completed at least one beta audit get the first 3 Pro audits at no charge as GA pricing rolls out. After that, standard pricing applies.
RUN YOUR FIRST AUDIT FREE
See Crawlux on your own crypto site.
No signup, no credit card. Full Web3-tuned audit report in 60 seconds.
Free first audit · No signup · 60 seconds · Full PDF report
