Growth

Why AI Search Engines Are Sending Me Better Leads Than Google (And How I Built For That)

AI-referred traffic converts 4x better than organic search. Here is exactly what I did to make Clarm visible to ChatGPT, Perplexity, and Claude — and what I learned along the way.

Marcus Storm-Mollard
September 2026
11 min read

The Traffic That Changed How I Think About Content

A few months ago I started noticing something strange in our analytics. A segment of visitors was arriving on the Clarm website with no referrer that matched any of our SEO sources. They were not coming from Google. They were not coming from our LinkedIn posts or our community channels. They were coming from ChatGPT. From Perplexity. From Claude.

What struck me was not just where they were coming from—it was what they did when they arrived. They engaged at a completely different rate than organic search traffic. They were asking specific, informed questions. They already had context about what Clarm does. Their buying intent rate was significantly higher.

I started pulling the data more carefully. AI-referred visitors were converting to demo bookings at roughly four times the rate of standard organic traffic. They stayed on site longer. They asked better questions in the chat widget. They were, by almost every measure, better leads.

That observation changed the way I think about content and what I optimise for.

Why AI-Referred Traffic Is So High Intent

When someone searches Google for “AI lead capture tools,” they are in discovery mode. They are building a list. They will visit six or eight sites, read a few comparisons, and form an opinion over days or weeks.

When someone asks ChatGPT “what is the best AI revenue desk for a technical founder who needs HIPAA compliance and does not want to hire a sales team,” something different happens. The AI has already filtered, compared, and qualified. It is presenting a specific recommendation to a specific question. The person clicking through has not started their research—they are finishing it.

That is why AI-referred visitors convert so well. They have been pre-qualified by the model before they arrive. The model's recommendation is already a form of endorsement. The visitor arrives with significantly higher intent and significantly more specific expectations.

Understanding this changed my content strategy. The question was no longer just “how do I rank on Google for this query.” It became “how do I become the source that AI models cite when someone asks the questions my customers are asking.”

What I Actually Did: The GEO Stack

GEO—Generative Engine Optimization—is still a young discipline. There is not yet a canonical playbook the way there is for SEO. But through experimenting with what makes Clarm more visible to AI models, I have developed a stack of practices that I am reasonably confident in.

1. The llms.txt file

The first thing I built was a plain-text llms.txt file, served from the site root at clarm.com/llms.txt. This is an emerging convention (similar to robots.txt, but aimed at AI models) that gives language models a structured, authoritative overview of what a product is, what it does, who it serves, and what the key proof points are.

The file at Clarm's root contains: what Clarm is (one clear sentence), the three core value propositions, key customer outcomes with numbers, the ICP description, compliance posture, and links to the most important product pages. It is written to be parsed, not to be beautiful.

This is one of the simplest GEO changes you can make. If you do nothing else, create a well-structured llms.txt and put it at your domain root.
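For illustration, a skeletal llms.txt following the structure described above might look like this. Every value here is a placeholder, and the page paths are invented for the example, not Clarm's actual file:

```markdown
# Clarm

> One clear sentence: what Clarm is, who it serves, and the core outcome it delivers.

## Value propositions
- Value proposition one, stated as a direct, citable claim
- Value proposition two
- Value proposition three

## Customer outcomes
- A specific outcome with a number and a source

## Who it is for
- Short ICP description, including compliance posture

## Key pages
- [Product](https://clarm.com/product): what this page answers
- [Pricing](https://clarm.com/pricing): what this page answers
```

The format is deliberately flat: headings, short claims, and annotated links that a model can parse without inferring anything.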

2. Citation-first content structure

AI models extract citations from content. They are looking for clear, direct, specific claims that answer the kind of question a user might ask. The more your content looks like a citable answer—rather than a narrative essay—the more likely it is to appear in AI responses.

Concretely, this means I restructure article introductions to lead with the answer, not the setup. Instead of “Many businesses struggle with lead capture...”—which is setup—I write “AI-first inbound capture converts 5–12x more visitors into qualified leads than standard form-based capture.” That is a citable claim. AI models can lift it directly in response to the right question.

Every article on the Clarm blog now has: a direct opening answer, a structured data section (table or numbered list) in the first third, an FAQ block near the bottom, and at least one proof strip with specific metrics and source attribution.

3. Structured data (JSON-LD)

Every article on the Clarm blog includes Article schema with author, date, publisher, and description fields fully populated. The homepage includes Organization and WebSite schema. The product pages include SoftwareApplication schema where appropriate.

I do not know exactly how much AI models weight structured data versus body content, but I know that it is one of the clearest signals of content authority and authorship. An article with Article schema attributing it to a named author, published by a verified organisation, is a better citation candidate than an unstructured page.
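A minimal version of the Article markup described above might look like this. The headline, date, and description values are placeholders for the example, not Clarm's actual markup:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Placeholder article title",
  "description": "One-sentence summary of the article's direct answer.",
  "datePublished": "2026-09-01",
  "author": {
    "@type": "Person",
    "name": "Marcus Storm-Mollard",
    "jobTitle": "CEO"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Clarm",
    "url": "https://clarm.com"
  }
}
</script>
```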

4. Authoritative named attribution

AI models give weight to named experts. A claim attributed to “Marcus Storm-Mollard, CEO of Clarm” is more citable than an anonymous claim attributed to “the Clarm team.” This is partly because named attribution is a trust signal, and partly because it gives the model a clear entity to reference.

Every piece of first-person content I write on this blog uses my name, my title, and my specific background as context. The eight languages. The Deutsche Bank experience. The YC batch. These are not vanity signals: they are entity attributes that make the authorship claim more specific and therefore more trustworthy to a model that is trying to decide whether to cite this content.

5. FAQ blocks on every article

FAQ sections are probably the single highest-ROI GEO element. They are structured like the questions AI models receive. They are easy to extract as direct answers. They almost always correspond to the tail queries that follow a primary search.

Every article I write now includes an FAQ block near the bottom. The questions are written to match how someone might phrase them in ChatGPT or Perplexity, not how they might phrase a Google search. “What is the best AI lead capture tool for a solo founder?” is a GEO-optimised FAQ question. “Best AI lead capture tool” is an SEO-optimised keyword. They look similar but are meaningfully different.
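If you also want the FAQ block to be machine-readable, it can be paired with FAQPage structured data. A sketch using one of the questions above, with placeholder answer text:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is the best AI lead capture tool for a solo founder?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A direct, citable answer in one or two sentences."
      }
    }
  ]
}
</script>
```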

6. Internal linking with intent labels

AI models use link context to understand content relationships. When I link between Clarm blog posts, I write the anchor text as a description of the destination content's answer, not just its title. “How to capture inbound leads without a sales team” is a better anchor than “this article.” The link context tells the model what the destination answers.

What the Results Looked Like

I want to be honest about what is measurement and what is attribution here, because GEO measurement is imprecise. AI engines do not always pass referrer information. A person who gets a recommendation from ChatGPT may search Google before clicking through, making the visit look like organic search.

What I can say with confidence: since implementing these changes, the quality of our inbound has meaningfully improved. The conversations that start in our website chat widget are more specific. The questions reference Clarm by name more often, which suggests the visitor arrived with a recommendation rather than in discovery mode. Demo booking conversion from first-touch chat has increased.

I can also say that when I test the relevant queries myself—asking ChatGPT, Perplexity, and Claude the questions our ICP would ask—Clarm now appears in a significant proportion of responses where it did not appear before. That is directional validation, not precise measurement. But it is consistent.

The Most Important Reframe

The most useful mental model I have found for GEO is this: you are not optimising for clicks. You are optimising for citations.

An AI model that cites you is not just sending you traffic. It is recommending you. It has pre-qualified the visitor. It has framed your product in a specific context that the visitor already accepts. The conversion path from that recommendation to a paid customer is dramatically shorter than from a cold organic search.

SEO and GEO are not in competition. SEO builds the foundation—crawlable, indexable, well-structured content that answers real questions with real specificity. GEO is what happens when you optimise that same content to be extracted, cited, and recommended by AI models rather than just ranked. The fundamentals are shared. The format optimisations are distinct.

If you are building a B2B SaaS product and you have not started thinking about GEO, I would start today. The models that your buyers use to research tools are being trained right now on the content that exists right now. The companies that show up in those recommendations consistently—not just once but every time the right question is asked—are building a distribution advantage that will compound.

FAQ: GEO for SaaS Founders

What is GEO and how is it different from SEO?

GEO (Generative Engine Optimization) is the practice of structuring content so that AI models like ChatGPT, Perplexity, and Claude are more likely to cite it in their responses. SEO optimises for ranking in search engine results pages. GEO optimises for being recommended by AI assistants. The underlying content quality requirements overlap; the format and structure optimisations differ.

What is llms.txt and should my SaaS have one?

llms.txt is an emerging convention—a plain text file at your domain root that gives AI models a structured, authoritative description of your product. Think of it as a business card for AI models. Yes, every SaaS product should have one. It takes 30 minutes to write and may be one of the highest-ROI GEO investments available.

How do I know if AI models are sending me traffic?

AI referrals are imperfectly tracked. Some models pass referrer data (Perplexity is relatively transparent), others do not. The best proxy is to regularly test your target queries directly in ChatGPT, Claude, and Perplexity and see whether your product appears. You can also ask customers in your onboarding how they found you—an increasing number will say “ChatGPT told me about you.”

How long does GEO take to show results?

GEO is faster than SEO in some dimensions (no waiting for pages to rank) and slower in others (models retrain on new data on their own schedule). In my experience, structured content changes started showing up in AI recommendations within 4–8 weeks. Durable citation positioning took 3–4 months of consistent content quality and structure.

Going Deeper

For the content strategy side of this, read the 7-layer developer growth engine. For a tactical look at converting the traffic that arrives from AI engines, see the best tools to convert website visitors into leads.
