SEO

Performance

Apr 18, 2025

The Six Generative Engines Reshaping How Australians Search

ChatGPT, Claude, Gemini, Perplexity, Grok and DeepSeek now collectively answer a growing share of search questions. Here is a calm tour of who they are, how they differ, and what each means for brand visibility.

Arno Verburg

Founder

Rob Benkovic

Head of Operations

A small but determined share of search behaviour has moved into AI conversations over the last two years. The exact numbers vary by survey, but every reliable measurement points in the same direction. The thing once called "search" has begun to spread itself across more than one product, and most of the new products do not look much like Google.

This article is a tour of the six generative engines that matter most for Australian audiences right now. It is descriptive rather than evaluative. Each engine has habits worth knowing if a brand wants to be cited well, or at all.

A note on what counts as a search

Before introducing the engines, a definition is useful. A generative search is any moment where a user asks a question of an AI system with the intention of finding out about the world, rather than asking the AI to write something for them.

By that definition, a user asking ChatGPT for a list of accountants in Brisbane is doing a search. A user asking ChatGPT to draft an email is not. The same product is doing two different jobs, and only the first one matters for AEO.

The implication is that any single measurement of "AI search market share" is rougher than it sounds. We can measure prompts, responses and citations precisely. The question of which prompts count as search is judgement, not arithmetic.

ChatGPT

ChatGPT remains the most-used AI system globally, and its share of search-style queries is the largest of any generative product. For many users, particularly under the age of forty, it is now a first port of call for the kind of research question that used to begin with a Google search.

Its citation behaviour leans toward established sources. Wikipedia, large publications, well-known reference sites and government and educational domains dominate when sources are shown. For brand citations in commercial queries, ChatGPT tends to be conservative, preferring to name a small number of brands per answer rather than a long list.

For an Australian brand, two things tend to be true. First, ChatGPT often knows the brand, including some surprisingly specific facts, even when the brand has done no AEO work. Second, the version of the brand it knows is sometimes a few years out of date. The corrective is the same corrective that helps elsewhere: improve the off-domain footprint and make sure the home site is parseable.

Claude

Claude has a different temperament. Its answers tend to be more cautious, more measured, and more likely to acknowledge uncertainty. It cites less often than ChatGPT when not specifically asked to, but its citations are typically accurate when they appear.

Claude is interesting for AEO in two ways. The first is that it tends to behave like a sceptical analyst, which means that brands with a clean and consistent description across the open web tend to come through clearly in its answers, while brands with a contradictory off-domain footprint tend to be hedged or omitted.

The second is that Claude is increasingly used for the kind of long, structured research queries that influence enterprise buying decisions. A brand that wants to be considered in a B2B comparison conversation should care about Claude.

Gemini

Gemini is woven into Google's broader product surface. It shows up inside Search as AI Overviews, inside Gmail and Docs as "Help me write" and similar features, and as a standalone chat product. For Australian users it is the most likely AI tool to be encountered without seeking it out.

Citation behaviour in Gemini tracks Google's own surface, which is both an advantage and a constraint for AEO. The good news is that classical SEO work pays off in Gemini citations. The less good news is that sources beyond Google's index, which other engines draw on, carry less weight here.

For a brand that has spent two decades getting good at classical SEO, Gemini is the easiest of the six engines to influence. For a brand whose strength is off-Google, it is the hardest.

Perplexity

Perplexity is the engine where citations are most visible to the end user. Every answer is built as a synthesis of named sources, with the sources listed at the top of the answer and linked inline.

This makes Perplexity disproportionately important for two reasons. The first is that visible citations carry click-through value: users actually open them, and often. The second is diagnostic. A brand that cannot earn citations in Perplexity is almost certainly being missed by the other engines as well, even where their citation behaviour is less visible.

Perplexity rewards content that answers questions directly, in a structure the engine can quote. It is the friendliest of the six engines to good editorial work on a brand's own site.

Grok

Grok sits inside the X platform and is pulled disproportionately toward live, conversational signals. It cites things the other engines miss, particularly around news events, emerging products, and any topic that has recent discussion on X.

For AEO, Grok is volatile. A brand can be invisible in Grok for months, get mentioned in a single high-engagement post by a credible account, and become routinely cited within a fortnight. The reverse is also true. This makes Grok harder to optimise systematically than the others, but it also means that a brand's social-adjacent reputation matters here in a way it does not elsewhere.

For Australian brands, Grok citation behaviour is increasingly relevant for any topic that is also a conversation. Categories where commentary, news or community discussion drives buying decisions, including media, finance, sport and politics-adjacent industries, see Grok appear in research journeys more often than its overall usage share would predict.

DeepSeek

DeepSeek is the newest of the six for most Western audiences and has gathered serious attention quickly. Its answers are noticeably good for the cost, and its citation behaviour is still being mapped by the AEO community.

What is clear so far is that DeepSeek prefers sources that are richly structured and easy to parse. Schema-marked content, clearly written reference pages, and well-organised category structures appear to be over-represented in its citations relative to the open web average.
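To make "richly structured" concrete, here is a minimal JSON-LD sketch of the kind of schema markup a parser can read unambiguously. The organisation name, URL and address are placeholders, not a real brand.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Accounting Co",
  "url": "https://www.example.com.au",
  "description": "An accounting firm serving small businesses in Brisbane.",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Brisbane",
    "addressRegion": "QLD",
    "addressCountry": "AU"
  }
}
```

Markup like this sits invisibly in a page's head and gives an engine a plain statement of who the brand is and where it operates, with no prose to interpret.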

For a brand that has invested in structured data and clean information architecture, DeepSeek tends to be a quiet beneficiary. For a brand whose authority is built on softer signals, it is a harder engine to win in.

What this means for measurement

Two practical conclusions fall out of the engine survey.

The first is that single-engine tracking is misleading. A brand can be doing well in ChatGPT and quietly losing share in Perplexity for the same query, or holding steady in Gemini while drifting in Claude. The aggregate picture only emerges if all the relevant engines are measured together.

The second is that the right mix of engines depends on the category. A consumer brand with a heavy local component will care most about ChatGPT, Gemini and Perplexity. A B2B brand will care more about Claude. A brand whose category is socially driven will care about Grok. A brand competing on rich product detail will care about DeepSeek.

The honest position for most brands is to measure all six and to weight them by where their customers actually are, not by the engines they personally use.
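One way to picture that weighting is as simple arithmetic. The sketch below combines per-engine citation rates into a single score, weighted by audience share; every number in it is hypothetical, and real inputs would come from a brand's own tracking.

```python
# Minimal sketch: weight per-engine citation rates by audience share.
# All figures are hypothetical illustrations, not measured data.

ENGINES = ["chatgpt", "claude", "gemini", "perplexity", "grok", "deepseek"]

def weighted_visibility(citation_rate: dict, audience_weight: dict) -> float:
    """Combine per-engine citation rates (0 to 1) into one score,
    weighted by where the brand's customers actually are."""
    total_weight = sum(audience_weight.values())
    return sum(
        citation_rate.get(engine, 0.0) * audience_weight.get(engine, 0.0)
        for engine in ENGINES
    ) / total_weight

# Hypothetical brand: strong in ChatGPT, weak in Perplexity.
rates = {"chatgpt": 0.6, "claude": 0.3, "gemini": 0.5,
         "perplexity": 0.1, "grok": 0.2, "deepseek": 0.4}

# Hypothetical audience split (percentages of the brand's customers).
weights = {"chatgpt": 40, "claude": 10, "gemini": 30,
           "perplexity": 10, "grok": 5, "deepseek": 5}

print(round(weighted_visibility(rates, weights), 3))
```

The point of the exercise is the weights, not the formula: the same citation rates produce a very different score for a B2B brand whose weights tilt toward Claude than for a consumer brand whose weights tilt toward ChatGPT and Gemini.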

A short, low-pressure invitation

We built Outercite because the engines above are not going to stop multiplying, and because the brands that pay attention earliest tend to find the work much easier later. There is no urgency to this article. It is a tour, not a pitch. If a baseline measurement across the six engines would be useful for your brand, that is a conversation we are always happy to have.

Common questions

Are there other engines I should measure? A few more are worth watching, including You.com and the AI features inside Bing. For most Australian brands the six above cover roughly 95 percent of the addressable AI search audience.

Which engine has the highest usage in Australia? ChatGPT, by a clear margin, with Gemini second largely on the back of being embedded in Google products people already use. The order changes in specific age groups and industries.

Do the engines learn from each other? Not directly. They train on overlapping web data, however, which means a strong off-domain footprint tends to lift all of them at once, even though the engines themselves are not communicating.

Should I optimise for one engine first and broaden later? Generally no. The work that helps one engine usually helps the others, and single-engine optimisation tends to be a false economy. Measure broadly, act broadly.