Over the past few months there has been a lot of discussion about GEO, AISEO, and AEO, often framed as if SEO is becoming irrelevant. From what I've seen, that feels very overblown.
Most LLM tools still rely on retrieval layers that are largely shaped by classic SEO signals. Even if the interface and ranking logic look different on the surface, indexing, authority, and discoverability still seem to matter. The selection layer has changed, but the inputs haven't disappeared.
I'm curious how others here see this playing out. Do you think there's a realistic path where LLM visibility fully decouples from traditional SEO, or are we just looking at a new abstraction on top of the same foundations?

Co-ranking, EMD (exact-match domains), and rank-by-proxy have always been part of SEO and will be even more vital for LLM visibility.
I think they're fairly decoupled now, outside of the companies that don't try to game SEO and actually create honest, authoritative content.
SEO is about gaming and owning; LLMs are about understanding your brand (I won't say visibility), and that understanding comes from fact-checking and finding more than one source to back you up.
So if you run an honest SEO strategy you should be aligned; if you run a game, you're f'd and have to figure that out.
The same things that matter for ranking algos are going to matter to AIs: bot accessibility/indexation (tech SEO), quality of content, and authority of the content/site. In what instance is an AI going to prioritise other things instead of these?
LLMs aren't search engines, so I would say no.
No, I think they'll depend on RAG, which means there will always be some sort of search component.
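To make that concrete, here's roughly what the loop looks like. This is a toy sketch, not any real stack: the corpus, the keyword-overlap scoring, and the prompt shape are all made up, and the actual LLM call is stubbed out. The point is just that a retrieval/ranking step sits in front of generation:

```python
# Toy sketch of why RAG keeps a "search component" in the loop.
# The corpus, scoring, and prompt shape are invented for illustration.

CORPUS = {
    "https://example.com/a": "Classic SEO signals: indexing, authority, links.",
    "https://example.com/b": "LLM visibility depends on retrieval layers.",
    "https://example.com/c": "Recipe for banana bread.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap -- a stand-in for the
    real retrieval stack (index, embeddings, authority signals, etc.)."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(text.lower().split())), url, text)
        for url, text in CORPUS.items()
    ]
    scored.sort(reverse=True)
    return [(url, text) for score, url, text in scored[:k] if score > 0]

def answer(query: str) -> str:
    """Assemble a grounded prompt; the generation step is a placeholder."""
    sources = retrieve(query)
    context = "\n".join(f"[{url}] {text}" for url, text in sources)
    prompt = f"Answer using only these sources:\n{context}\n\nQ: {query}"
    return prompt  # in a real system this prompt goes to the LLM

print(answer("what drives LLM visibility"))
```

Swap the naive overlap score for embeddings plus authority signals and you're basically back to SEO-shaped inputs.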
Fully detached? I doubt it.
Very different? Yeah, definitely.
See, SEO optimizes for ranking inside a system. Accuracy is not a requirement. Anyone who’s been around long enough has seen plenty of top-ranking pages that are flat-out wrong, misleading, or outright scams. That’s not an exception, it’s just how ranking systems work at scale.
On the other hand, LLMs strive for accuracy. Early LLMs were basically locked into their training data, so results were often wrong and hallucinated. Once retrieval layers, embeddings, and external tools came into play, things improved a lot. The easiest way to do that was leaning on the same stuff search engines already had: indexed content, authority signals, widely referenced sources, etc.
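For illustration, one way that "leaning on the same stuff" can look inside a retrieval layer is blending semantic similarity with a classic authority score. Everything below is invented (the docs, the authority numbers, the 0.5 blend weight); it's just to show the shape of the idea:

```python
# Illustrative only: ranking retrieved docs by a blend of query
# similarity and a pre-computed "classic" authority score.

import math

def bow(text: str) -> dict[str, int]:
    """Bag-of-words term counts."""
    counts: dict[str, int] = {}
    for term in text.lower().split():
        counts[term] = counts.get(term, 0) + 1
    return counts

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    """Cosine similarity over term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    # (url, text, authority score from the classic stack, e.g. links)
    ("https://example.com/wiki", "llm visibility and seo signals", 0.9),
    ("https://example.com/blog", "llm visibility tricks and seo hacks", 0.3),
]

def rank(query: str, alpha: float = 0.5):
    """Score each doc as alpha * similarity + (1 - alpha) * authority."""
    q = bow(query)
    return sorted(
        DOCS,
        key=lambda d: alpha * cosine(q, bow(d[1])) + (1 - alpha) * d[2],
        reverse=True,
    )

print([url for url, _, _ in rank("llm visibility seo")])
```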
What’s changing now is the filtering. LLMs don’t just pick the top result and rewrite it. They compare sources, smooth inconsistencies, and try to converge on something that sounds internally coherent and accurate, even if that means ignoring the top-ranked page. That’s why you’re starting to see answers that don’t line up with classic SEO positions at all.
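A crude way to picture that convergence step: a majority vote over claims pulled from the retrieved set, where the top-ranked page can simply get outvoted. Real systems do this implicitly inside the model rather than with a counter, and the data here is made up:

```python
# Toy illustration of "compare sources and converge": the #1 SERP
# result loses when the rest of the retrieved set disagrees with it.

from collections import Counter

# (classic SERP rank, claim the page makes) -- invented example data
retrieved = [
    (1, "X was founded in 2004"),   # top-ranked page, but wrong
    (2, "X was founded in 2006"),
    (3, "X was founded in 2006"),
    (4, "X was founded in 2006"),
]

claims = Counter(claim for _, claim in retrieved)
consensus, support = claims.most_common(1)[0]

print(f"consensus: {consensus!r} ({support}/{len(retrieved)} sources)")
# Output diverges from the classic #1 ranking -- the effect described above.
```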
Right now, LLM visibility still sits on top of SEO-shaped infrastructure. Over time (two or three years, give or take), that dependency will probably weaken as models get better at deciding what to trust and what to ignore. That said, I expect the gap to keep growing, but I don't see a full break unless LLMs end up owning their own discovery and retrieval layer in a much more direct way.