E-Commerce Intelligence — Updated April 2026

LLM Attribution for E-Commerce: Which Models Drive Traffic & Sales

ChatGPT holds 97% of LLM-referred e-commerce sessions. Perplexity buyers spend 57% more per order. Meta AI has 1 billion users and zero trackable referrals. Most attribution setups miss all of it.

SearchTides • Derek Iwasiuk • April 2026 • 17 min read

Derek Iwasiuk, Founder & CEO, SearchTides
✓ Fact-Checked by SearchTides Research • Last updated: April 22, 2026

Cite This Page

Iwasiuk, D. (2026, April). LLM attribution for e-commerce: Which models drive traffic & sales. SearchTides. https://searchtides.com/llm-attribution-for-e-commerce-which-models-drive-traffic-amp-sa/

AI-driven e-commerce traffic grew 4,700% year-over-year in 2025. The referral picture now spans six major platforms, each sending different buyers at different price points. Last-click attribution captures none of it.


Key LLM E-Commerce Statistics (2025–2026)

The data below aggregates findings from multi-brand e-commerce cohort studies, platform disclosures, and third-party analytics firms covering 2025 and early 2026.

  • 4,700% YoY growth in AI-driven e-commerce traffic in 2025
  • 97% of all LLM e-commerce sessions originated from ChatGPT in 2025
  • 31% higher conversion rate for LLM-referred traffic vs. traditional channels
  • 57% higher average order value for Perplexity users vs. ChatGPT users
  • 2.69% LLM channel engagement rate, the 2nd-highest of any e-commerce channel
  • $8.65B global AI e-commerce market in 2025, projected to exceed $50B by 2033

What the Numbers Mean for E-Commerce Teams

  • ChatGPT controls referral volume. Perplexity controls referral margin. They need different strategies.
  • LLM buyers arrive pre-researched. The decision is largely made before they click. Your job at that point is confirming, not convincing.
  • With Gemini at 21.5% and Grok at 17.8% U.S. mobile share, treating ChatGPT as the only LLM worth optimizing for is already a mistake.
  • Last-click attribution misses 60–70% of LLM influence. Most brands are making optimization calls with a broken ruler.

The LLM Landscape: Who Has the Audience

You can’t do LLM attribution without knowing where the audience is. ChatGPT’s web traffic share fell from 86.7% to 64.5% in 2025 as Gemini moved from 5.7% to 21.5% and Grok went from nothing to 17.8% of the U.S. mobile chatbot market.

LLM Platform Monthly Active Users (Early 2026)

ChatGPT ~1.0B MAU • Google Gemini 750M MAU • DeepSeek 130M MAU • Grok (xAI) 60M MAU • Perplexity 30M MAU • Claude (consumer) 19M MAU

Sources: OpenAI, TechCrunch, Business of Apps, DemandSage • Early 2026 estimates. Claude figure = consumer-facing MAU; enterprise API users substantially higher.

Google Gemini (750M MAU) is a real rival to ChatGPT (~1B MAU) on raw audience size, backed by Google Search, Gmail, and Workspace integration. DeepSeek’s 130M MAU sounds large until you check where the users are: 39% in China, with most of the rest in India and Southeast Asia. For most Western brands, it doesn’t register yet.

AI Chatbot Web Traffic Market Share (Jan 2026)

ChatGPT 64.5% • Google Gemini 21.5% • Perplexity 6.2% • Grok 3.4% • Claude 3.2% • Other 1.2%

Source: First Page Sage, AI Multiple, StatCounter • Jan 2026. ChatGPT share declined from 86.7% in Jan 2025 as Gemini and Grok gained ground.

Web traffic share tells a different story. ChatGPT is still dominant at 64.5% but shed 22 percentage points in a single year. That fragmentation will keep going. More users have multi-platform habits now: Gemini at work, ChatGPT at home, Grok while scrolling X. Any attribution setup assuming one LLM per buyer is already wrong.

Chatbot Market Share Shift: Jan 2025 → Jan 2026

ChatGPT 64.5% (↓ from 86.7%) • Google Gemini 21.5% (↑ from 5.7%) • Perplexity 6.2% (↑ from ~3%) • Grok (xAI) 3.4% (↑ from ~0%) • Claude 3.2% (↑ from ~2%)

Bar length = Jan 2026 share. Labels show direction and Jan 2025 starting point. Source: First Page Sage, AI Multiple • 2026.

Grok is the number to watch. It went from a rounding error to real share in under 12 months, almost entirely on X (Twitter) integration. For brands whose buyers live on X — finance, tech, culture — this is no longer something you can dismiss.


Which LLMs Actually Drive E-Commerce Traffic?

Chatbot audience share and e-commerce referral share are two different numbers. Despite Gemini’s growth, ChatGPT still dominates actual product referrals. That gap makes sense: Gemini users skew toward productivity tasks — drafting documents, Workspace integration. Shopping research is a smaller part of how they use it.

Share of LLM-Originated E-Commerce Referral Sessions

ChatGPT 92.0% • Perplexity 4.1% • Google Gemini 2.6% • MS Copilot 2.1% • Other LLMs 0.2%

Source: Alhena AI, MaximilianKaiser.org, Search Engine Land • 2025 aggregate across 329+ e-commerce brands. ChatGPT share varies 90–97% by study.

ChatGPT’s 92% share of LLM e-commerce referrals (up to 97% in some studies) comes from how dominant it is in consumer product research. Perplexity’s 4.1% is small but pays well, as the AOV chart shows. Gemini’s 2.6% should grow as Google deepens Shopping integration — worth tracking through 2026.

Average order value: where the margin is

Perplexity buyers have done real research before clicking. They know what they want, have compared alternatives, and are past the browsing phase. That shows up in the order value.

Average Order Value (AOV) by Referral Source

Perplexity $320 • Organic Search $238 • ChatGPT $204

Source: SearchTides analysis, Alhena AI, HockeyStack Labs • 2025–2026. Perplexity AOV derived from reported 57% premium over ChatGPT.

The $204 ChatGPT AOV versus $238 organic is telling. ChatGPT buyers tend to be earlier in the decision cycle than someone searching a specific product term. Perplexity’s ~$320 AOV (57% above ChatGPT) reflects a different buyer: more likely older, higher income, and willing to pay more once research has confirmed the value. They’re not hunting for a deal. They’ve already decided to buy something worth buying.


How LLM-referred buyers behave on-site

LLM-referred buyers behave differently from search or social referrals. You can measure it.

E-Commerce Channel Engagement Rate (Purchases / Sessions)

SMS 4.43% • LLM / AI Referral 2.69% • Email ~2.30% • Google Shopping ~1.90% • Google Ads ~1.70% • Social Media ~0.90%

LLM engagement rate = observed. Email, paid, social = typical industry benchmarks for context. Source: Alhena AI • 2026 LLM E-Commerce Benchmarks.

At 2.69%, LLM referrals rank second across all e-commerce channels, behind SMS (4.43%) and ahead of email, Google Shopping, Google Ads, affiliate, and all social. The volume is small. The quality per session is better than almost everything else in the mix.

Where AI-Referred vs Google-Referred Shoppers Land

LLM referrals → Product Detail Pages 77.0% • Google referrals → Product Detail Pages 60.0%

Source: Alhena AI, Search Engine Land • 2025. AI-referred buyers arrive 28% more often at high-intent PDPs, having completed research inside the LLM.

The PDP landing rate shows where buyers are in the funnel. When 77% of AI-referred visitors land on product pages versus 60% from Google, the AI already did the discovery work. The buyer asked “what’s the best wireless headphone for commuting under $150?” and arrived at your product page with an answer. You’re confirming a purchase decision, not making a sale.

LLM Buyer Behavioral Signature

  • 77% land on product detail pages vs. 60% for Google-referred traffic (higher purchase intent)
  • 32% longer average time on-site compared to other referral channels
  • 47% faster purchase completion cycle once on-site
  • 27% lower bounce rate than the channel average
  • LLM referral traffic grew 3× from January to December 2025, then contracted 42.6% from its July peak. The volatility is structural, not seasonal.

Model-by-model e-commerce profiles

These profiles are based on early 2026 data. The numbers will change.

ChatGPT (OpenAI)

MAU: ~1B (Jan 2026)
E-com referral share: 90–97%
Avg. Order Value: $204
Black Friday 2025 CR: 12% (vs 7% Google)
Buyer profile: Broad, earlier-stage
Best for: Volume & acquisition

Perplexity

MAU: ~30M (2026)
E-com referral share: 4.1%
Avg. Order Value: ~$320 (+57% vs ChatGPT)
Monthly queries: 780M (May 2025)
Buyer profile: Research-forward, premium
Best for: Margin & premium products

Google Gemini

MAU: 750M (Feb 2026)
E-com referral share: 2.6% (growing)
Share growth (1yr): 5.7% → 21.5%
Key integration: Google Shopping, Workspace
Buyer profile: Google-ecosystem users, B2B
Best for: Product schema, Shopping feeds

Grok (xAI)

MAU: 60–64M (late 2025)
U.S. mobile share growth: 1.6% → 17.8% (1 year)
Monthly web visits: 299M (Feb 2026)
Audience skew: X / finance / tech users
Buyer profile: Tech-forward, early adopters
Best for: Brand authority signals

Claude (Anthropic)

Consumer MAU: ~19M (Jan 2026)
Enterprise customers: 300,000+ businesses
Fortune 100 penetration: 70% of Fortune 100
Buyer profile: Enterprise, developer, B2B
Strength: Long-context reasoning
Best for: Complex B2B procurement

DeepSeek

MAU: 130M (end of 2025)
Top markets: China (39%), India, Indonesia
Downloads: 173M since Jan 2025
Western e-com relevance: Low (2025), watching 2026
Buyer profile: APAC, cost-conscious
Best for: APAC market expansion

What to do with each platform

ChatGPT is your acquisition channel. With roughly 1B MAU and 90–97% of LLM referral sessions, marginal improvements in how ChatGPT represents your brand have real revenue impact. The limitation is AOV — ChatGPT buyers tend to be earlier in the decision cycle. Focus on product narratives, press coverage, and review volume.

Perplexity is your margin channel. Those buyers have already seen your product cited against alternatives, with sources attached. Structured data (Product schema, PriceSpecification, AggregateRating), specs, and research content Perplexity can cite are what move the needle here. Perplexity conversions tend to be your highest-AOV customers from any LLM.

Gemini made the biggest single-year move of any chatbot in 2025 (5.7%→21.5%). Google Shopping integration gives it a structural e-commerce advantage no other model has. If you’re already running Shopping, you’re partway there — but schema quality matters more than it used to, and the gap between clean and messy structured data is growing.

Grok grew 9.5× in U.S. mobile share (1.6%→17.8%). That’s X platform integration doing the work, not organic adoption. The audience skews toward finance, tech, and culture. For brands in those verticals, Grok brand visibility is worth taking seriously now. For most product categories, add it to your monitoring list and revisit in six months.

Claude’s 19M consumer MAU undersells the actual footprint. It runs inside 70% of the Fortune 100 and 300,000+ businesses. For B2B e-commerce and complex procurement, Claude handles the research tasks other models fumble — long comparison documents, vendor checklists, spec-heavy RFPs. If your buyers work at large companies, Claude is probably somewhere in their workflow.

DeepSeek doesn’t register for Western e-commerce right now. The 130M MAU are mostly in China, India, and Indonesia. If you’re building in APAC, add it to your monitoring stack. Otherwise, it can wait.


The 5 LLMs worth actively monitoring

Not every platform deserves equal tracking effort. Copilot’s attribution layer produces unusable data in major AI monitoring tools. Meta AI routes traffic through Facebook, Instagram, and WhatsApp, showing up in GA4 as “direct” or “social” and never as an LLM referral. DeepSeek is largely irrelevant for Western e-commerce in 2025. That leaves five platforms worth building a real monitoring stack around.

Platform | Monitor? | Why | How to track
ChatGPT | Yes (primary) | 90–97% of LLM referral sessions; biggest optimization leverage | GA4 channel: chat.openai.com + chatgpt.com
Perplexity | Yes (margin priority) | ~$320 AOV; citation-heavy, fully trackable; automotive at 93% | GA4 channel: perplexity.ai
Gemini | Yes (growth watch) | Fastest-growing chatbot (5.7% → 21.5%); Shopping integration changing referral math | GA4 channel: gemini.google.com + bard.google.com
Grok | Yes (vertical dependent) | 17.8% U.S. mobile share; Fairing data shows it leading Apparel (15.8%) and Food & Drug (13.5%) | GA4 channel: grok.com; AI monitoring partial
Claude | Yes (B2B and complex purchases) | 33% of Consumer Electronics attributions (Fairing Q4 2025); enterprise footprint; long-context research tasks | GA4 channel: claude.ai
Copilot | GA4 only | Sends ~2.1% of LLM referrals in aggregate studies, but AI monitoring attribution is broken (0% citation and mention detection in monitoring tools) | GA4 referral: bing.com (shared with Bing Search); monitoring tools unreliable
Meta AI | Surveys only | ~1B MAU but zero trackable referrals; traffic routes through Facebook/Instagram/WhatsApp as “direct” or “social” | Post-purchase surveys (Fairing, Wonderment); invisible to GA4 and monitoring tools
DeepSeek | Skip for now | 130M MAU concentrated in China/APAC; negligible Western referral traffic in 2025 | Add if expanding into APAC markets

Where Fairing and referral data diverge by category

Fairing’s Q4 2025 post-purchase survey data (1,000+ brands) shows platform dynamics that referral tracking misses. By December 2025, 21.2% of brands in the cohort reported LLM-related mentions in survey responses — roughly double the level at the start of the year. The category breakdowns are where it gets interesting.

LLM Attribution Share: Consumer Electronics (Fairing Q4 2025)

ChatGPT 37.0% • Claude 33.0% • Google Gemini 21.0% • Other 9.0%

Source: Fairing Q4 2025 Post-Purchase Survey • 1,000+ brands. Self-reported attribution in Consumer Electronics. Claude’s 33% share reflects research-heavy buyers using long-context models for comparison tasks — much higher than its 3.2% web traffic share suggests.

Category | Leading LLM | Survey share | Why it leads
Automotive | Perplexity | 93% of LLM mentions | High-ticket buyers doing $30K+ research want cited, verifiable sources before deciding
Consumer Electronics | ChatGPT / Claude | 37% / 33% | Casual research routes to ChatGPT; comparison-heavy spec work routes to Claude
Apparel | Grok | 15.8% of mentions | X/social-native discovery; fashion-forward demographic overlaps with Grok’s user base
Food & Drug | Grok | 13.5% of mentions | Real-time trend content on X surfaces food and wellness recommendations

The Consumer Electronics split is the number most referral data undersells. Claude’s consumer MAU is 19M. Its web traffic share is 3.2%. But for research-heavy purchase decisions with complex specs and competing products, it’s doing 33% of the attribution work in that category. Monitoring Claude matters far more than the MAU numbers suggest for electronics, software, and anything that requires genuine comparison work before buying.


Meta AI and Copilot: the attribution blind spots

Meta AI: 1 billion users, zero referrals

Meta AI runs inside Facebook, Instagram, WhatsApp, and Messenger. The user base is roughly one billion. The referral count in GA4 is effectively zero.

That’s a tracking architecture problem, not an influence problem. When someone asks Meta AI in WhatsApp for product recommendations and then visits your site, the session referral header comes from the social app, not from the AI. GA4 records the session as “social” (Instagram/Facebook) or “direct” (WhatsApp, iMessage). The LLM touchpoint disappears into attribution buckets you’re already crediting to paid or organic social.

AI monitoring tools don’t cover it either. Meta AI doesn’t expose its response data through the crawlable interfaces that monitoring platforms use. It’s architecturally invisible to standard tracking stacks.

Post-purchase surveys are the only methodology that captures Meta AI influence. When Fairing asks buyers how they first discovered a product and respondents say “a friend sent it on WhatsApp” or “I saw it while scrolling Instagram” — some portion of those responses is Meta AI-assisted discovery. Nothing labels it as “Meta AI” in survey data, which means the actual influence level is almost certainly undercounted in every dataset available right now.

For brands with strong Facebook and Instagram engagement, or a mobile-first audience, Meta AI’s role in discovery is likely larger than any of the five trackable LLMs. You can’t measure it directly. What you can do: keep Open Graph tags accurate, maintain Meta Catalog integration, and ensure schema markup is clean. Those signals inform Meta AI when it assembles product answers in social surfaces.

Copilot: the monitoring gap in enterprise search

Copilot appears in aggregate referral studies at ~2.1% of LLM e-commerce sessions. That’s trackable through GA4 (Bing referral domains), though it shares an attribution bucket with regular Bing search traffic. The deeper problem is AI monitoring: across 294 Copilot records in Profound’s dataset, citation detection is 0%, mention detection is 0%, and position data is 0%. The monitoring layer fails completely.

This matters for B2B and enterprise buyers, where Copilot is embedded in Microsoft 365 and used in procurement workflows. If your brand isn’t showing up in Bing’s organic index with clean structured data, you’re probably absent from Copilot’s sourced answers — and you have no reliable way to verify it through standard monitoring tools. Bing Webmaster Tools and regular structured data validation are the fallback approach until monitoring tool support improves.


Why traditional attribution fails for LLM traffic

LLM discovery breaks last-click attribution. A buyer consults ChatGPT on Monday, compares options in Perplexity on Wednesday, reads a Gemini-surfaced review on Thursday, and converts through a direct URL on Friday. Last-click credits Friday. ChatGPT, Perplexity, and Gemini get nothing.

Attribution Model | Standard Use Case | LLM-Specific Failure Mode
Last-Click | Direct-response channels | Misses 60–70% of LLM influence; treats discovery and conversion as identical touchpoints
First-Touch | Awareness channel measurement | Undercounts LLM’s role in intent formation and mid-funnel research acceleration
Linear (40-20-40) | Multi-stage B2B journeys | Assumes equal weight across touchpoints; LLM influence peaks mid-funnel, not evenly distributed
Time-Decay | Remarketing & discount channels | Undervalues early-stage LLM discovery that shapes entire category consideration
SearchTides AI Influence Model | AI-driven discovery & commerce | Captures brand positioning comprehension per model, multi-session influence, and conversion velocity

The right setup segments LLM traffic by model, maps sessions to intent stage, and assigns influence based on behavioral signals rather than click source. GA4 custom segments for each LLM, combined with server-side event tracking for product page depth, builds this picture without third-party tools.
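To make the failure modes in the table concrete, here is a minimal sketch of how last-click, linear, and time-decay models would split credit across the four-touch journey described above (ChatGPT → Perplexity → Gemini → direct). The touchpoint labels and the doubling weight in the time-decay variant are illustrative assumptions, not a GA4 or SearchTides implementation.

```python
# Hypothetical four-touch journey: discovery in three LLMs, conversion via direct URL.
touches = ["chatgpt", "perplexity", "gemini", "direct"]

def last_click(touches):
    """All credit to the final (converting) touchpoint."""
    return {t: (1.0 if i == len(touches) - 1 else 0.0)
            for i, t in enumerate(touches)}

def linear(touches):
    """Equal credit to every touchpoint."""
    w = 1.0 / len(touches)
    return {t: w for t in touches}

def time_decay(touches):
    """Later touches get exponentially more credit (weight doubles per step)."""
    raw = [2.0 ** i for i in range(len(touches))]
    total = sum(raw)
    return {t: r / total for t, r in zip(touches, raw)}

print(last_click(touches))  # direct gets 1.0; the three LLM touches get 0.0
print(linear(touches))      # 0.25 each
print(time_decay(touches))  # weights 1:2:4:8 normalized
```

Under last-click, the three LLM sessions that did the discovery work receive zero credit, which is exactly the 60–70% undercount the table describes.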

“Attribution frameworks were built for keyword clicks. They predate conversational commerce entirely. You’re not dealing with a data problem — you’re dealing with a model problem.”

GA4 referral data versus post-purchase surveys

GA4 referral tracking and post-purchase survey tools measure different things. They complement each other, but neither one gives you the full picture alone.

GA4 shows sessions where the user clicked a link from an LLM interface. It captures ChatGPT, Perplexity, Gemini, Grok, and Claude cleanly when sessions come from those platforms directly. It misses Meta AI (dark social routing), Copilot (Bing attribution bucket), and any LLM-influenced session where the buyer typed a URL or navigated directly after using an AI to research.

Post-purchase surveys capture self-reported discovery, including channels GA4 can’t see. Fairing’s Q4 2025 data showed normalized LLM growth reaching 0.40 by year-end — nearly double the start of 2025. That growth signal doesn’t show up in GA4 proportionally because a significant share routes through dark social. Fairing also found that when UTM parameters are present, Google gets ~20% of last-click credit for sessions that started as LLM-assisted research — another measurement that GA4 misattributes.

The practical approach: run GA4 segments for each of the five trackable platforms as your primary attribution layer. Add a post-purchase survey question (“How did you first hear about this product?”) to capture Meta AI, Copilot-via-Bing, and direct-URL conversions that originated from LLM research. The two sources together tell a more accurate story than either one alone.

How to actually set this up

Three things you need to build:

Source segmentation. Create GA4 channel groupings for each LLM: chat.openai.com and chatgpt.com, perplexity.ai, gemini.google.com, grok.com, claude.ai. Each one is its own acquisition channel, not a referral bucket.
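The grouping logic amounts to a referrer-hostname lookup. A minimal sketch, assuming the referrer domains listed above are the ones each platform uses for outbound links (chatgpt.com and chat.openai.com both map to ChatGPT):

```python
from urllib.parse import urlparse

# Hostname -> channel mapping, mirroring the GA4 channel groupings above.
LLM_CHANNELS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "bard.google.com": "Gemini",
    "grok.com": "Grok",
    "claude.ai": "Claude",
}

def classify_referrer(referrer_url: str) -> str:
    """Return the LLM channel for a session referrer, or 'Other'."""
    host = urlparse(referrer_url).netloc.lower()
    return LLM_CHANNELS.get(host, "Other")

print(classify_referrer("https://chatgpt.com/c/abc123"))   # ChatGPT
print(classify_referrer("https://www.google.com/search"))  # Other
```

The same table translates directly into GA4 channel-group conditions (source matches regex over these hostnames).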

Behavioral intent mapping. Track product page depth, spec tab views, comparison tool use, and review section engagement. LLM-referred sessions show higher rates across all of these. They’re confirming a decision, not browsing.
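One way to operationalize this is a per-session intent score over those events. The event names and weights below are illustrative assumptions, not a GA4 standard; the point is that a weighted sum separates a confirm-and-buy LLM session from a browsing session.

```python
# Hypothetical weights for the high-intent signals named above.
INTENT_WEIGHTS = {
    "pdp_view": 1.0,
    "spec_tab_view": 2.0,
    "comparison_tool_use": 3.0,
    "review_section_view": 1.5,
}

def intent_score(events: list[str]) -> float:
    """Sum weighted high-intent events for one session; unknown events score 0."""
    return sum(INTENT_WEIGHTS.get(e, 0.0) for e in events)

llm_session = ["pdp_view", "spec_tab_view", "review_section_view"]
social_session = ["pdp_view"]
print(intent_score(llm_session))    # 4.5
print(intent_score(social_session)) # 1.0
```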

Session stitching. Use first-party cookies with a 30-day window for LLM sources. A buyer who came in from Perplexity and converted 11 days later on a direct visit should credit Perplexity for the introduction. The default 7-day window misses a lot of the journey.
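The stitching rule itself is simple: if the conversion lands inside the window measured from the first LLM touch, credit the introducing source; otherwise fall back to the converting session. A sketch, with hypothetical function and field names:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)  # extended window for LLM sources, per the text above

def credit_source(first_touch_source: str, first_touch_time: datetime,
                  conversion_time: datetime, last_touch_source: str) -> str:
    """Credit the introducing LLM if the conversion falls inside the window;
    otherwise credit the converting session's source."""
    if conversion_time - first_touch_time <= WINDOW:
        return first_touch_source
    return last_touch_source

first = datetime(2026, 4, 1)
conv = datetime(2026, 4, 12)  # 11 days later, via a direct visit
print(credit_source("Perplexity", first, conv, "Direct"))  # Perplexity
```

With GA4's default 7-day window, the same journey would credit "Direct" and the Perplexity introduction would vanish.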


Product data optimization by LLM platform

Each model rewards different signals. The table below reflects the patterns from 2025 e-commerce data.

Signal Type | ChatGPT | Perplexity | Gemini | Grok / Claude
Rich narrative descriptions | Critical | Secondary | Helpful | Helpful
Product schema markup | Important | Critical | Critical | Important
Cited original research | Secondary | Critical | Important | Important
Customer review volume | Critical | Important | Important | Secondary
Press & media mentions | Critical | Important | Important | Critical (Grok)
Specification tables | Secondary | Critical | Important | Critical (Claude)
Comparison matrices | Secondary | Critical | Helpful | Critical (Claude)
Google Shopping feed quality | Not applicable | Not applicable | Critical | Not applicable

Fix the basics first (all models)

Before any model-specific work, fix what hurts performance everywhere:

Structured data. Product, AggregateRating, Offer, and BreadcrumbList schema should be correct on all product and category pages. Errors here hurt citability across every LLM.
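For reference, here is a minimal JSON-LD payload covering the schema.org types named above. The product values are placeholders; the field names follow the schema.org vocabulary (Product, Brand, Offer, AggregateRating).

```python
import json

# Placeholder product data; field names follow schema.org vocabulary.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Wireless Headphones",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "213",
    },
}

# Emit as a <script type="application/ld+json"> body for the product page.
print(json.dumps(product_jsonld, indent=2))
```

Validate the output against Google's Rich Results Test or the schema.org validator before shipping; malformed values here hurt citability across every LLM.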

Brand entity consistency. Your brand name, product names, and category descriptions should match across your site, Google Business Profile, Wikipedia (if applicable), Wikidata, and third-party retailers. Inconsistency confuses how LLMs model your brand.

Answer-ready content. LLMs pull answers to specific questions. For each major product category, write FAQ content answering what buyers actually ask: “What’s the best [product] for [use case]?” “How does [product] compare to [competitor]?” Those pages become the source material.


Frequently Asked Questions

Which LLM drives the most e-commerce revenue?

ChatGPT drives the most volume at 90–97% of LLM referral sessions. Perplexity drives the most revenue per session, with ~$320 AOV versus $204 for ChatGPT. If you need acquisition, ChatGPT is the channel. If you need margin, Perplexity is where the returns are.

How fast is Gemini growing as an e-commerce referral source?

Gemini went from 5.7% to 21.5% of chatbot web traffic in 2025 — the biggest single-year jump in the space. Its e-commerce referral share (~2.6%) lags because Gemini users skew toward productivity tasks, not shopping. Google Shopping integration changes that math though. Expect that 2.6% to move through 2026.

Why does last-click attribution undercount LLM influence?

LLM buyers typically go through multiple sessions across multiple models, then convert days later through a direct URL or email link. Last-click credits that final touch. Multi-brand studies put the miss rate at 60–70% of LLM’s actual influence. The LLM did the work and got no credit.

Should we optimize differently for each LLM?

Yes, once the basics are solid. Structured data, consistent brand entity signals, and FAQ content help across every model. Beyond that: ChatGPT rewards narrative product descriptions and review volume. Perplexity rewards citable research, specs, and comparisons. Gemini rewards Shopping feed quality and schema. Claude rewards long-form specification content. Grok rewards press coverage from high-authority sources.

Is DeepSeek relevant for Western e-commerce?

Not for most Western brands in 2025. The 130M MAU are mostly China, India, and Indonesia. U.S. and European referral traffic is negligible. If you’re building in APAC, add it to your monitoring now — zero to 130M MAU in under a year is a growth rate worth watching.

How do I set up LLM attribution tracking in GA4?

Create GA4 channel groupings for each LLM: chat.openai.com and chatgpt.com, perplexity.ai, gemini.google.com, grok.com, claude.ai. Set a 30-day attribution window — longer than the default — to account for multi-session journeys. Add event tracking for product page depth, spec section engagement, and review views. Session count alone won’t tell you much about intent.

Why doesn’t Meta AI show up in my referral data?

Meta AI is embedded in WhatsApp, Instagram, Facebook, and Messenger. When a buyer uses it to research your product and then visits your site, the referral header comes from the social app, not from Meta AI. GA4 records the session as “social” or “direct.” Meta AI also doesn’t expose response data through the interfaces that AI monitoring tools use, so it doesn’t appear in Profound, Semrush, or similar platforms. Post-purchase surveys are currently the only way to capture its influence — asking buyers directly how they discovered the product will surface Meta AI-assisted recommendations that no referral data ever will.

Why is Copilot hard to track with AI monitoring tools?

Copilot’s referral traffic is trackable in GA4 through Bing domains, though it shares attribution with regular Bing Search traffic. The gap is in AI monitoring: tools like Profound show 0% citation detection and 0% mention detection for Copilot across large sample sets, meaning the monitoring layer fails entirely. For Copilot visibility, use Bing Webmaster Tools and structured data validation as proxies rather than relying on AI monitoring platforms until their Copilot coverage improves.

Sources

  1. Alhena AI. (2026). LLM Traffic Ranks 4th by E-Commerce Conversion: 329 Brands. https://alhena.ai/blog/llm-traffic-ecommerce-conversion-data/
  2. Alhena AI. (2026). LLM Visitors Show 2nd-Highest Engagement Rate: 2026 Benchmarks. https://alhena.ai/blog/llm-engagement-rate-ecommerce-channel-benchmarks/
  3. First Page Sage. (2026). Top Generative AI Chatbots by Market Share — April 2026. https://firstpagesage.com/reports/top-generative-ai-chatbots/
  4. TechCrunch. (2026, February 4). Google’s Gemini app has surpassed 750M monthly active users. https://techcrunch.com
  5. DemandSage. (2026). ChatGPT Statistics 2026: Users, Revenue & Growth. https://www.demandsage.com/chatgpt-statistics/
  6. DemandSage. (2026). Perplexity AI Statistics 2026. https://www.demandsage.com/perplexity-ai-statistics/
  7. DemandSage. (2026). Grok AI Statistics 2026. https://www.demandsage.com/grok-ai-statistics/
  8. DemandSage. (2026). Claude AI Statistics 2026. https://www.demandsage.com/claude-ai-statistics/
  9. DemandSage. (2026). DeepSeek Statistics 2026. https://www.demandsage.com/deepseek-statistics/
  10. Backlinko. (2026). ChatGPT Stats: Users, Revenue & Growth. https://backlinko.com/chatgpt-stats
  11. Search Engine Land. (2025). What 13 months of data reveals about LLM traffic, growth, and conversions. https://searchengineland.com
  12. Kaiser, M. (2025). ChatGPT Referrals to E-Commerce Websites: How Do LLMs Compare Against Traditional Channels? https://www.maximiliankaiser.org
  13. HockeyStack Labs. (2025). LLM Traffic in 2025: Early Performance, Real Intent, Uneven Results. https://www.hockeystack.com
  14. almcorp.com. (2026). Google Gemini vs ChatGPT Market Share 2026. https://almcorp.com
  15. Fatjoe. (2026). Grok AI Stats April 2026. https://fatjoe.com/blog/grok-ai-stats/
  16. Fairing. (2025). Q4 2025 Post-Purchase Survey Report: LLM Attribution Across 1,000+ E-Commerce Brands. Fairing Inc.

See How Your Brand Appears Across Every Major LLM

Most brands don’t know how ChatGPT, Perplexity, Gemini, or Grok describes their products — or whether they’re being recommended at all. SearchTides maps your AI visibility and attribution gaps across every major model.

Get Your LLM Attribution Audit