Our content shows up in AI Overviews
...now let’s make sure yours does too.
Here’s the reality of content in 2025: users might not click through to your site right away.
But if your brand keeps showing up in AI chats, they’ll remember it.
And that brand recall? It’s quietly becoming more powerful than rankings.
We never set out to rank in AI Overviews.
We didn’t optimise for ChatGPT, or try to get mentioned in Perplexity answers.
But then, it started happening.
Quietly. Consistently. Organically.

It wasn’t some hack we tried; we just did content differently.
We weren’t chasing what other blogs were doing. We weren’t looking at what ranked.
We asked:
What’s missing?
Is this content actually helping the audience?
If not—why? And how can we make it better?
When someone asks ChatGPT, “What’s the best ABM strategy for an enterprise SaaS brand?”, it’s not pulling from the blog that ranked #1 for “ABM strategy.”
It’s pulling from the blog that answered that exact question with clarity, structure, and context.
Turns out, ranking high doesn’t guarantee visibility in AI chats.
LLMs surface the content that best explains what the user is really trying to figure out.

The Process That Got Us There (Without Gaming the System)
We didn’t stumble into AI visibility overnight. But here’s what helped us consistently show up in LLM responses:
1. We started with better questions, not keywords
Instead of asking “How do we rank?”, we asked “What’s the real problem people are trying to solve here?”
Our goal: Find the gaps no one was filling.
We look for the real pain points people are actually talking about (and the ones they aren’t): the kind that keep them stuck, searching for better answers.
Here’s how you can do it:
To create content that shows up in AI responses, you need to understand not just what people search for, but why they’re searching.
Go to Reddit, Quora, Slack groups, YouTube comments, and Twitter threads.
Look for recurring frustrations, not just questions.
Pay attention to how people phrase things, because that phrasing shapes what AI tools surface.
Pro tip: Keep a swipe file of every great insight, stat, or quote you find.
Organize it by topic. It becomes your research vault and speeds up content creation tenfold.
Why it matters:
AI tools don’t just surface content based on keywords.
They prioritize clarity and intent.
To align with that, you need to understand how people ask questions, why they’re asking them, and what pain points they’re trying to solve.
2. The outline is everything
If you nail the outline, the content almost writes itself.
Our outlines are built around:
What’s not being explained clearly?
What’s outdated or overused?
What questions are still left unanswered?
LLMs don’t pull fluff; they pull structure.
The clearer your outline, the better your content will perform.
Only then do we begin the actual writing process.
Here’s how you can do it:
Before writing, build an outline that solves real user problems, not one designed just to rank for keywords.
Start with the questions you found in research.
Organize your outline around question-style H2s that mirror actual queries:
“What does an ABM strategy look like for a mid-size SaaS company?”
“How do I choose between OKRs and MBOs for sales compensation?”
Why it matters:
AI models prefer content with clean, understandable structure.
When your outline follows a logical flow and your H2s reflect real user questions, it becomes easier for LLMs to extract relevant answers.
3. We don’t just explain, we show
Let’s say we’re writing about ABM tools.
Most blogs stop at pros and cons.
We go further. We bring in real experiences from people who’ve used those tools in actual campaigns.

If we’re explaining how to build an ABM strategy, we don’t list generic steps.
We show how we built one for 3 different offices and how you can do it too.
Our goal is simple: Help someone do the thing. Not just read about it.
LLMs love content that feels human and helpful. Not robotic and repetitive.
Here’s how you can do it:
Break your content into clean, skimmable pieces:
Bullet lists for takeaways
Numbered steps for how-tos
Section-level FAQs to handle edge cases
Cite credible sources: bring in stats from Salesforce, Gartner, or ICONIQ. Add expert quotes. Reference case studies.
These build authority, which AI systems treat as a signal of quality.
Why it matters:
Modular, skimmable content helps LLMs understand and repurpose your answers in snapshots or summaries.
Adding stats, expert quotes, and credible sources builds authority, something AI systems use to assess reliability.
And real-world examples increase engagement and memorability, making your content more valuable to both the model and the reader.
4. We edit like our readers’ time depends on it (because it does)
Once the content is written, we go into editing mode.
Fluff? Gone.
Filler lines? Deleted.
Anything that wastes the reader’s time or doesn’t add value? Cut ruthlessly.
If someone’s reading your blog in an AI-generated summary or overview, you have one shot to be clear, helpful, and memorable.
Here’s how you can do it:
Before you publish your content, ask yourself:
“Is this sentence helping or just filling space?”
“Can I say this with fewer words and more clarity?”
“If AI pulled this paragraph into a snapshot, would it make sense out of context?”
Remove filler. Cut transitions that add no value. Sharpen examples.
Why it matters:
LLMs often extract just a paragraph or two.
If your sentences are bloated, vague, or context-dependent, your content won’t make sense in isolation.
Sharp, well-edited writing improves clarity, increases extractability, and builds trust.
Creating content that shows up in AI chats is about being the most useful, clear, and human voice in the room.
You don’t need to be first on Google.
You just need to be the one that answers the question best.
So next time you’re staring at a blank doc, don’t ask: “How do I rank?”
Ask: “How do I help someone solve this?”
Because when your content genuinely helps, AI will remember it, and so will your readers!
Until next time,
Karthick Raajha.