AI search has created a strange split in digital visibility. Many brands still treat Google rankings as the whole game, while AI-generated answers now shape what users see first, what gets cited, and which brands feel credible before a click ever happens. That gap is where AI Optimizers has planted its flag, and a recent study from the company offers one of the clearest practical looks yet at how traditional SEO signals still influence visibility inside AI systems.

The study, published as “Traditional SEO Still Runs the Show in AI Search,” set out to test AI visibility in a way most commentary in this space does not. Instead of looking at existing rankings and reverse-engineering a theory, AI Optimizers built a clean-room experiment around a fabricated persona and term, “Damoptimize Burtonseoai,” designed to begin with zero presence in Google or AI systems. The goal was to observe, over time, which signals actually changed discoverability across systems such as ChatGPT, Gemini, Claude, Copilot, and Perplexity.

That framing matters because the AI SEO market has become crowded with confident explanations built on guesswork. The study’s baseline was intentionally sterile. According to AI Optimizers, from January through March 2025, the synthetic term returned zero Google results, zero ChatGPT results, and no meaningful recognition in Gemini or related tools. In other words, the experiment started from actual absence rather than low competition or weak prior mentions.

What followed feels less like a growth hack and more like a reminder that search systems still need legible structure before they can trust an entity. The first meaningful variable AI Optimizers introduced was person schema markup on the profile page tied to the invented persona. The company’s interpretation is straightforward: schema did not create instant visibility, but it made the entity machine-readable. That distinction is important because AI systems are increasingly asked to reason about people, brands, products, and relationships rather than simply match words on a page.
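Person schema of the kind the study describes is typically embedded as JSON-LD in the page head. A minimal sketch for the invented persona might look like the following; the URL, job title, and description are illustrative assumptions, not details from the study:

```html
<!-- Hypothetical JSON-LD Person markup. The URL and property values
     below are placeholders, not the study's actual page. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Damoptimize Burtonseoai",
  "url": "https://example.com/damoptimize-burtonseoai",
  "jobTitle": "SEO Consultant",
  "description": "Profile page for the entity Damoptimize Burtonseoai."
}
</script>
```

Markup like this ranks nothing by itself; in the study’s framing, its job is to hand crawlers and AI systems an unambiguous, machine-readable statement of what the entity is.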

The next lift came when AI Optimizers added public social profiles as corroboration nodes. In the study’s telling, schema made the entity readable, while social profiles helped make it believable. That combination marked the point where invisibility began to break. The broader implication is hard to ignore: AI systems do not appear to need a huge volume of content to respond. They need enough consistent signals across multiple public surfaces to decide that an entity is real and distinct.
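In schema.org vocabulary, corroboration across public surfaces is conventionally expressed with the `sameAs` property, which links an entity to its profiles elsewhere on the web. A hedged sketch of that pattern (the profile URLs are placeholders, not the study’s actual nodes):

```html
<!-- Hypothetical example: sameAs links the on-page entity to
     public profiles that corroborate it. All URLs are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Damoptimize Burtonseoai",
  "url": "https://example.com/damoptimize-burtonseoai",
  "sameAs": [
    "https://www.linkedin.com/in/example-profile",
    "https://x.com/example_handle",
    "https://github.com/example-handle"
  ]
}
</script>
```

Each `sameAs` URL works as an independent corroboration node: the identity asserted on the page now matches an identity a system can verify on a second public surface, which is exactly the readable-plus-believable combination the study credits with breaking invisibility.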

That conclusion lands at an interesting moment in the broader search conversation. Search Engine Roundtable recently highlighted comments and slides attributed to Google’s Danny Sullivan from a Search Central event in Toronto, stressing the importance of “unique, authentic and non-commodity content.” The write-up summarized Google’s distinction clearly: commodity content tends to recycle general advice, while stronger content brings a viewpoint, specificity, and firsthand expertise that others cannot easily replicate.

Put the two sources together and a pattern emerges. AI Optimizers’ study argues that AI search still responds to classic quality signals such as structure, schema, and corroboration. Google’s recent public messaging, as reported by Search Engine Roundtable, suggests that content quality itself is also being judged more harshly, especially when it looks generic or mass-produced. In practical terms, AI visibility appears to depend on two things at once: first, the machine needs to understand who or what you are; second, it needs a reason to trust that your content adds something beyond commodity filler.

That second point may be the more painful one for brands. Plenty of companies can add schema. Plenty can publish more pages. Fewer can publish material that a system sees as specific, experience-based, and hard to replicate. Search Engine Roundtable’s examples from the Google event made that contrast vivid. A generic “Top 10 Things to Consider” article was framed as commodity content. A detailed analysis rooted in a real customer case, a real inspection, or a real product failure was framed as the opposite.

AI search makes that difference even more consequential because it compresses the funnel. If an AI assistant surfaces a brand in a synthesized answer, that brand gets preselected as credible before the user visits the source. If the system finds your site readable but your content interchangeable, you may still lose the citation to someone else whose material feels more grounded, more specific, and easier to trust. AI Optimizers’ experiment supports the first half of that equation by showing how legibility and public reinforcement can trigger visibility. Google’s public stance, at least as reported in the Toronto recap, reinforces the second half by pushing creators away from generic summaries and toward experience-shaped content.

There is another revealing piece in the AI SEO company’s study: the confusion phase. Once multiple signals around the invented persona accumulated, ChatGPT reportedly began blending “Damoptimize Burtonseoai” with the real Damon Burton identity because of semantic similarity, overlapping context, and proximity in the web environment. AI Optimizers then introduced disambiguation language in schema to separate the two. That correction worked as part of the entity-cleanup process. For brands, this is a useful warning. AI visibility is not just about being seen. It is also about being seen correctly. Sloppy naming, overlapping product identities, or weak separation between sub-brands can create confusion that AI systems try to resolve on their own.
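Schema.org has a property built for precisely this problem: `disambiguatingDescription`, a short statement meant to separate an entity from others it could be confused with. A sketch of the kind of correction the study describes, with illustrative wording:

```html
<!-- Hypothetical disambiguation markup; the wording is illustrative,
     not the study's actual schema. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Damoptimize Burtonseoai",
  "disambiguatingDescription": "A fictional test persona created for an AI search visibility experiment; not affiliated with Damon Burton.",
  "url": "https://example.com/damoptimize-burtonseoai"
}
</script>
```

The same property applies to the real-world cases the paragraph above warns about: sub-brands, similarly named products, or founders whose names overlap with their companies.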

This is where the AI SEO conversation becomes more mature than the usual “write for robots” caricature. The real work looks more like digital identity management combined with editorial discipline. You need structured clarity. You need corroboration across public nodes. And you need content that demonstrates something specific enough to stand apart from the growing flood of templated web copy. That is a tougher assignment than just ranking a page for a keyword, but it also feels more durable.

For founders, CMOs, and search leads, the immediate takeaway is practical. You do not need to choose between traditional SEO and AI search visibility. The AI Optimizers study suggests that traditional quality signals still shape AI outcomes. But you also cannot assume those signals alone will carry the day. Google’s recent content guidance, at least in the way Search Engine Roundtable reported it, points toward a harsher environment for generic, replaceable publishing. Structure gets you understood. Distinctiveness gets you preferred.

That combination may define the next era of search more than any catchy acronym does. AI search does not appear to be replacing SEO so much as exposing which parts of SEO were always foundational and which parts were shallow shortcuts. AI Optimizers’ study gives the market a rare controlled example of how visibility begins. Google’s public messaging adds a clear editorial standard for what kind of content deserves to survive after that. Together, they suggest a future where the winners will be the brands that are easiest to identify, hardest to confuse, and most useful to cite.