There's a lot of hype around AIO / GEO, but I'm struggling to understand how ranking in LLMs differs in practice from ranking in search engines, at the "what can I actually do about it" level. Do you have any advice / tips / personal experience?
Not a whole lot.
Firstly, right now LLMs use Google and Bing to search for topics – tokenizing your question into searches – for anything they don't already have an answer to from their training data (e.g. Reddit, X, and other large content communities = their fundamental or base knowledge):
* ChatGPT and Copilot = Bing
* Perplexity and Gemini = Google
To rank in base knowledge, you might need to have content on Reddit or Wikipedia – but that's really difficult with closed subs, locked posts, etc.
To rank for current content, you need to show up in the searches the LLMs run – often grouping similar queries, or even running multiple searches per question.
# Bots on LLMs
There seems to be a lot of confusion, with people equating bots with building a search index. Bots are incredibly basic, specialized tools that simply fetch documents and check whether they've changed substantially since previous visits. They are not an indexing system – neither in Google nor in LLMs, from what we can see.
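To make that concrete, here is a minimal sketch of the fetch-and-compare behaviour described above. It is an illustration, not any real crawler's code: the class name is invented, and it uses an exact content hash where real bots use fuzzier "changed substantially" heuristics. Note that nothing here builds an index – it only records whether a page changed.

```python
import hashlib


class RevisitBot:
    """Toy model of a crawler bot: fetch a document, compare to last visit."""

    def __init__(self):
        self.last_seen: dict[str, str] = {}  # url -> hash of content at last visit

    def fetch_and_compare(self, url: str, content: str) -> bool:
        """Return True if the document changed since the previous visit.

        Real bots would fetch `content` over HTTP and apply a similarity
        threshold; an exact SHA-256 hash keeps the sketch self-contained.
        """
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        changed = self.last_seen.get(url) != digest
        self.last_seen[url] = digest
        return changed


bot = RevisitBot()
print(bot.fetch_and_compare("https://example.com", "version 1"))  # first visit: True
print(bot.fetch_and_compare("https://example.com", "version 1"))  # unchanged: False
print(bot.fetch_and_compare("https://example.com", "version 2"))  # changed: True
```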
If you do a search for, say, "Best CRM for SaaS tools for companies over 1000 employees", that could be one search or five searches.
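The "one search or five searches" behaviour is sometimes called query fan-out. A rough sketch of the idea, with made-up expansion rules (real systems use the model itself to generate the variants):

```python
def fan_out(question: str) -> list[str]:
    """Expand one user question into several engine queries (illustrative only)."""
    base = question.strip().rstrip("?")
    variants = [
        base,                   # the literal question
        f"{base} comparison",   # comparison-style variant
        f"{base} reviews",      # review-style variant
        f"{base} pricing",      # attribute-focused variant
    ]
    # De-duplicate while preserving order, like a real pipeline would.
    seen: set[str] = set()
    queries = []
    for q in variants:
        if q.lower() not in seen:
            seen.add(q.lower())
            queries.append(q)
    return queries


for q in fan_out("Best CRM for SaaS tools for companies over 1000 employees?"):
    print(q)
```

The practical takeaway: your page can surface via any of these variant queries, not just the user's literal question.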
All of the results from Google/Bing are snippets and URLs, not the content. The bots fetch those pages, and the answer is "synthesized".
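The snippet-then-fetch-then-synthesize flow can be sketched like this. Every function here is a stub standing in for a real component (the search API, the fetching bot, the LLM call); the names and data are invented for illustration:

```python
def search_engine(query: str) -> list[dict]:
    """Stand-in for a Google/Bing search: returns snippets and URLs only."""
    return [
        {"url": "https://example.com/a", "snippet": "CRM A overview..."},
        {"url": "https://example.com/b", "snippet": "CRM B pricing..."},
    ]


def fetch_page(url: str) -> str:
    """Stand-in for the bot fetching the full document behind a result URL."""
    return f"Full page text for {url}"


def synthesize(question: str, documents: list[str]) -> str:
    """Stand-in for the LLM rewriting the fetched documents into one answer."""
    return f"Answer to {question!r}, synthesized from {len(documents)} pages."


results = search_engine("best CRM for SaaS")
pages = [fetch_page(r["url"]) for r in results]  # snippets alone aren't enough
print(synthesize("best CRM for SaaS", pages))
```

The key point the sketch makes: the model never "knows" these pages – it answers from whatever the fetch step just handed it.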
# Current “AI” Theories
There are some claims that you have to write in a special way, or that schema is some magical data source that LLMs cannot ignore – yet we've been able to rank a lot of content without ever using either. Unless you don't understand Schema, or how good LLMs are at handling unstructured data, it's hard to see why either would be necessary. Some even insist that Schema = Trust – it's complete nonsense.
# Synthesizing
Synthesizing is where the LLM takes the documents returned and gives feedback, an answer, a list, a comparison – whatever output the user asked for, or the LLM thought it should provide.

Typically the LLM will just recite the given content verbatim.
# The Illusion of Research
Synthesizing – or compiling the most common patterns from the documents (ok, it's a little more complex than that) – makes it look like the LLM has performed lots and lots of research, when, as many SEOs are noticing through testing, it mostly just rewrites what it was given.
# Test bed: Perplexity
We highly recommend testing in Perplexity because it gives you a breakdown of the steps it performed and the search results it pulled from Google. Here's an example: