How to write an answer first H2 that ChatGPT will actually cite
The exact heading and first sentence structure we use on RankSmith sites, and why it gets lifted as a citation more often than a well optimised blog post.
If you have read one blog post about ranking on ChatGPT, you have read that answer first copy wins. Every RankSmith engagement starts with a heading audit on the client site, and the same three mistakes show up almost every time. We have spent the last nine months watching which H2s get cited by which engines and which ones get rewritten into nothing. This is what actually works.
What does answer first mean for ChatGPT citations?
ChatGPT's web search, Perplexity, Claude with web tools, Google AI Overviews, and Bing Copilot all share a passage retrieval step. They break the page into blocks, score each block against the user query, and quote the highest scoring block inside a synthesised answer. The heading defines the block boundary. The first sentence after it is the candidate quote.
If that first sentence does not answer the heading question in full, one of two things happens. The engine either skips the block entirely and picks a competitor's page, or it rewrites your sentence into its own words. A rewrite strips the citation. Your domain disappears from the answer.
The fix is mechanical. Write the H2 as a question a real buyer types into ChatGPT. Write the next sentence as a one line answer, under twenty words, containing at least one number, one place, or one specific product name. That is the entire rule.
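Here is that rule as a quick mechanical check, in plain JavaScript. The function name, thresholds, and the specifics list are ours, not anything an engine publishes; swap in your own products and places.

```js
// Minimal sketch of the rule above: question heading, one-line answer
// under twenty words, at least one specific token. The specifics here
// are illustrative, not a standard.
function passesAnswerFirstRule(heading, firstSentence) {
  const isQuestion = heading.trim().endsWith('?');
  const wordCount = firstSentence.trim().split(/\s+/).length;
  const hasSpecific =
    /\d/.test(firstSentence) ||                         // a number or year
    /south africa|johannesburg/i.test(firstSentence) || // a place
    /ranksmith/i.test(firstSentence);                   // a product name
  return isQuestion && wordCount < 20 && hasSpecific;
}

passesAnswerFirstRule(
  'How much does a RankSmith website cost in South Africa?',
  'A RankSmith website costs between ZAR 48,000 and ZAR 180,000 depending on scope.'
); // true
```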
How does ChatGPT pick a passage to cite?
The passage retrieval step in a modern AI engine is a semantic scoring function. Your H2 plus the first sentence beneath it become a single retrieval unit. If that unit matches the user intent, and the domain carries enough trust signals (schema coverage, named author, recent publish date, inbound links from known hosts), the engine lifts your words into the answer.
Engines differ in weighting but converge on this shape. Perplexity describes its source selection as picking "high-confidence" citations. Google Search Central describes AI Overviews as "grounded in top search results". OpenAI's documentation on ChatGPT's web tool describes the model as quoting short passages that directly answer the request. The mechanics are consistent across all five engines we track: identify candidate passages, rank by how completely they answer the query, cite the top one or two.
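To make that shape concrete, here is a toy scorer that ranks (H2, first sentence) units by query-token overlap. Real engines use learned embeddings and trust signals, so treat this as an illustration of the step, not a reimplementation of it.

```js
// Toy passage retrieval: split a page into (heading, first sentence)
// blocks and score each against the query by token overlap.
function tokens(text) {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function scoreBlock(query, block) {
  const q = tokens(query);
  const b = tokens(`${block.heading} ${block.firstSentence}`);
  let overlap = 0;
  for (const t of q) if (b.has(t)) overlap++;
  return overlap / q.size; // fraction of query tokens the block covers
}

const blocks = [
  { heading: 'Affordable SEO services',
    firstSentence: 'We unlock your ranking potential.' },
  { heading: 'How much does a RankSmith website cost in South Africa?',
    firstSentence: 'A RankSmith website costs between ZAR 48,000 and ZAR 180,000.' },
];

const query = 'how much does a website cost in south africa';
const best = blocks
  .map(b => ({ ...b, score: scoreBlock(query, b) }))
  .sort((a, b) => b.score - a.score)[0];
// best.heading → the pricing block wins; the slogan block scores zero
```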
A handful of drafting choices reliably cost you the selection:
- The first sentence hedges with "we think", "generally", or "often".
- Numbers live three paragraphs down, not in the first sentence after the H2.
- The H2 is a statement, not a question, so the engine has to infer the query from context.
- The passage references a figure in a chart or a table not present in the text.
Anything the model has to reconstruct, it will reconstruct in its own words. In the reconstruction it stops quoting. That is the moment your domain drops out of the answer.
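The hedge patterns in that list are the easiest to catch mechanically. A rough first pass, using the phrases we delete on audits (the list is ours and not exhaustive; the chart-reference problem still needs a human read):

```js
// Flags hedge phrases in a first sentence. Substring matching is crude
// (it will flag "soften" for "often"), which is fine for a first pass
// that a human reviews anyway.
const HEDGES = ['we think', 'generally', 'often', 'most',
                'could', 'can help', 'helps to'];

function findHedges(sentence) {
  const s = sentence.toLowerCase();
  return HEDGES.filter(h => s.includes(h));
}

findHedges('We think our service can help most businesses rank.');
// → ['we think', 'most', 'can help']
```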
What does a real answer first H2 look like on a RankSmith page?
Take the example live on our own pricing page:
How much does a RankSmith website cost in South Africa?
A RankSmith website costs between ZAR 48,000 and ZAR 180,000 depending on scope.
Three things make that block citable. The H2 is a question, not a category label. "RankSmith pricing" matches no real query pattern. "How much does a RankSmith website cost in South Africa" matches the exact phrasing of a query a Johannesburg founder types into Google, Perplexity, and ChatGPT.
The first sentence is under twenty words. It fits comfortably inside an AI Overviews answer box without truncation.
The answer contains two numbers and a currency. Numbers are the strongest signal that a passage is specific enough to cite directly. Engines prefer a passage with a number over a passage of the same length without one, because the number is harder to misquote.
Every answer first H2 we ship has three structural parts. A question trigger word: how much, how long, why, when, or which. A concrete noun phrase: the product or service, not the abstract concept. And a location or year signal: South Africa, Johannesburg, or 2026. The three parts together turn a heading into a query match.
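The trigger word and the location or year signal are mechanical enough to pattern-match; the concrete noun phrase still needs a human eye. A rough sketch using our own trigger and signal lists:

```js
// Checks two of the three parts: a question trigger word at the start
// and a location or year signal anywhere in the heading. The concrete
// noun phrase is the part a regex cannot judge.
const TRIGGER = /^(how much|how long|why|when|which)\b/i;
const SIGNAL = /\b(south africa|johannesburg|20\d{2})\b/i;

const hasStructure = h2 => TRIGGER.test(h2.trim()) && SIGNAL.test(h2);

hasStructure('How much does a RankSmith website cost in South Africa?'); // true
hasStructure('Affordable SEO services for every business');              // false
```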
Which H2 structures get ignored by AI engines?
Three anti-patterns we delete on every audit.
Category labels. "Affordable SEO services for every business." There is no question, no answer, no number. The engine cannot match it to a query and cannot extract a claim. It scrolls past.
Marketing slogans. "Unlock your ranking potential." The banned vocabulary alone (unlock, potential) is an AI slop marker, but the bigger problem is that the heading carries no buyer intent. No one types this into a search bar.
Rhetorical questions. "Want to rank higher?" The answer is yes, but the sentence after it never names a price, a method, or a timeline. Engines pick the shorter and more specific competing passage and skip your block.
Every time we ship the answer first version of a heading on a client site, the passage starts showing up inside AI Overviews within two to three weeks. We use Vercel Speed Insights to verify no regressions on Core Web Vitals, and we manually query ChatGPT, Perplexity, Claude, Gemini, and AI Overviews weekly to track which of our recent changes are getting cited.
How do you audit an existing page for answer first coverage?
The RankSmith internal audit is five steps.
- Extract every H2 from the page. The fastest way is to open DevTools and run document.querySelectorAll('h2') in the console, then copy the text content into a spreadsheet; a copy-ready version of the snippet follows this list.
- Classify each heading as "question" or "statement". Any statement that is not a proper noun is a candidate for rewriting. Brand names are fine; vague labels are not.
- Check the first sentence after each heading. It must answer the question in one line, under twenty words, with at least one specific token. A number, a place, a product, or a year.
- Replace hedge words in the first sentence. Delete "generally", "often", "most", "could", "can help", "helps to". Replace them with a concrete outcome. Do this even if the original was accurate, because hedges drop the citation rate.
- Ship the change. Wait fourteen days. Use the URL Inspection tool in Google Search Console to confirm a reindex has happened. Then run the heading question verbatim through ChatGPT, Perplexity, and Google AI Overviews to check whether the new first sentence now appears inside the answers.
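For step one, the console one-liner expands into the copy-ready version we actually paste. It assumes the answer paragraph is the heading's next sibling, which holds on simple pages, and copy() is a DevTools console utility, so it only works inside the console:

```js
// Run in the DevTools console on the page under audit. Grabs every H2
// and the first sentence of the element that follows it, tab-separated
// so it pastes straight into a spreadsheet.
const rows = [...document.querySelectorAll('h2')].map(h2 => {
  const next = h2.nextElementSibling;
  const first = next ? next.textContent.trim().split(/(?<=[.!?])\s+/)[0] : '';
  return `${h2.textContent.trim()}\t${first}`;
});
copy(rows.join('\n')); // copy() puts the text on the clipboard (console-only)
```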
This is the loop we run on every RankSmith engagement. It is boring, it is fast, and between early 2024 and today it has been the single largest lever we have found for visibility inside AI search. Schema still matters. Backlinks still matter. Core Web Vitals still matter. But none of those will save a page whose H2s tell the engine nothing worth quoting.
Every H2 on ranksmith.co.za follows this rule. Scroll through services, pricing, or work and you will see it. The passages are there, the numbers are there, and they are the sentences the AI engines lift when you ask.
If you want us to run this audit on your site, start with the free audit or book a strategy call. We hand back the full heading spreadsheet and the rewrite list within forty-eight hours.
Frequently asked questions
Does ChatGPT actually read H2 tags?
How is answer first different from featured snippets?
How long should the answer sentence be?
Do I still need schema if my H2s are answer first?
Ready when you are
Want this level of work on your site?
Book a thirty-minute strategy call. We will audit your current rankings in Google and in AI engines, and map the fastest wins we can ship in the next sixty days.