Everyone’s a Blackbelt on YouTube

By Ben Major · February 13, 2026 · 14 min read

I’ve been practicing Brazilian Jiu-Jitsu for about ten years now. Coincidentally, that’s roughly how long I’ve been working in SEO.

The overlap isn’t lost on me.

Spend enough time in a BJJ gym and you notice how much information newer practitioners bring with them today. They’ve watched instructionals. They can explain how a technique is meant to work before they’ve ever applied it successfully.

When sparring starts, that knowledge often fails to translate. Techniques get tried at the wrong moment. Sequences fall apart as soon as pressure increases.

The information isn’t wrong. It’s just been learned without context.

They know individual moves, but they don’t yet understand how decisions compound or how situations change once resistance is introduced.

The gap between knowing and understanding only becomes visible under that pressure.

SEO and GEO are moving into a similar phase, and I’m seeing it play out in real time with clients and prospects alike.

The Tactical Layer Is No Longer Defensible

AI has removed much of the friction from executing the basic mechanics of SEO.

I’m actively watching clients experiment with new tools that help them publish faster than ever before or fill obvious gaps in their content libraries.

In a few cases, the early results look encouraging. Pages start indexing. Rankings move.

What’s consistent is that the work itself no longer creates separation.

Most of these tools rely on the same inputs and produce outputs that look increasingly similar. They analyse what already ranks and reconstruct it into content that meets the same expectations.

That approach doesn’t suddenly stop working. It just stops creating advantages.

This is the paradox at the centre of the current moment. Teams have to adopt these tools to remain competitive, but adopting them won’t produce a lasting advantage.

When everyone gains access to the same efficiency, the efficiency itself stops being useful as a differentiator. You’re running faster just to stay in place.

The economics are straightforward: AI collapses the cost of content production, supply surges while attention stays fixed, and each piece is worth less as the barrier to entry lowers for everyone at the same time.

The more interesting question is where advantage actually lives once the tactical layer flattens.

As more teams rely on the same workflows, outcomes depend less on execution quality and more on factors outside the content itself.

Brand strength and existing distribution start to matter more.

The competitive math has changed too.

A few years ago, you were realistically competing with ten or twenty pages for a given position. When AI drops the cost of content creation, you’re suddenly in a field of hundreds. Search results and AI citations still have limited inventory. Content that would have ranked comfortably as a seven out of ten now disappears under the weight of increased supply.
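
To make that math concrete, here’s a toy simulation (every number in it is invented for illustration): an answer or results page has ten slots, your page is a steady seven out of ten, and the only thing that changes is the size of the field.

```python
import random

SLOTS = 10          # positions available in a results page or AI answer
YOUR_QUALITY = 7.0  # your page: a steady seven out of ten
TRIALS = 10_000

def chance_of_appearing(num_competitors: int) -> float:
    """Estimate how often a 7/10 page makes the cut against a random field."""
    wins = 0
    for _ in range(TRIALS):
        field = [random.uniform(0, 10) for _ in range(num_competitors)]
        # You appear only if fewer than SLOTS competitors outscore you.
        if sum(q > YOUR_QUALITY for q in field) < SLOTS:
            wins += 1
    return wins / TRIALS

for n in (20, 100, 300):
    print(f"{n:>3} competitors -> ~{chance_of_appearing(n):.0%} chance of appearing")
```

Against twenty competitors, the seven-out-of-ten page shows up almost every time. Against a few hundred, it effectively never does, even though the page itself hasn’t changed.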

At that point, SEO can feel unreliable even when teams are following best practices. The practices haven’t stopped working. They’ve just stopped being rare.

Why Attribution Misses Most of What Matters

The discussion around AI-driven discovery tends to focus on how little traffic these systems appear to send.

That framing assumes clicks are the primary way value shows up. In practice, that hasn’t matched what I’m seeing.

When a brand appears in an AI-generated answer, the impact often shows up later and somewhere else. Buyers use these tools early to get oriented and explore categories they don’t yet have language for. Many stop using those tools before they ever evaluate vendors directly.

Others move between AI, search, peer conversations, and internal research, bouncing across channels multiple times before they ever enter a pipeline. The path from prompt to purchase is rarely linear, and the touch points that shape a decision are scattered across channels that don’t talk to each other.

By the time someone searches a brand name or talks to sales, much of the framing work has already been done. Attribution models tend to credit the final interaction and miss the groundwork that made it possible.
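
A minimal sketch of why that happens, using a hypothetical journey (the channel names and crediting rules here are placeholders, not anyone’s production model):

```python
# A hypothetical four-touch journey before the deal closes.
journey = ["AI answer", "peer conversation", "newsletter", "branded search"]

def last_touch(touches):
    # What most dashboards default to: all credit to the final interaction.
    return {touches[-1]: 1.0}

def even_split(touches):
    # A crude alternative that at least sees the groundwork.
    return {t: 1.0 / len(touches) for t in touches}

print(last_touch(journey))  # {'branded search': 1.0}
print(even_split(journey))  # 0.25 each, the AI answer included
```

Neither rule is the truth, but the first assigns exactly zero to the AI answer that did the framing work.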

This is why debates about whether AI search represents a small or large share of traffic miss the point. Much of the influence never registers as traffic at all.

But this isn’t actually a new problem. Attribution systems have always been rigid and have always misrepresented how real buying journeys work.

Marketers have a tendency to shape their worldview around what they can measure, rather than looking at how decisions actually get made and finding ways to approximate that reality. AI didn’t create this gap. It made it impossible to ignore.

When we surveyed marketing leaders on this exact tension, the pattern was clear: teams know their attribution is incomplete, but most still default to the metrics their tools surface rather than rethinking what they should be tracking in the first place.

The newest version of this behaviour is AI visibility monitoring.

A growing industry has emerged around tracking how often and where brands appear in AI-generated answers. There’s value in understanding that picture, and we report on citations and mentions ourselves.

Where teams get stuck is mistaking visibility for progress. They refresh outputs and catalogue changes, producing reports that feel active without changing what the model encounters the next time it pulls sources.

These systems are probabilistic. Outputs shift between runs and updates. When those fluctuations are treated as a signal in isolation, noise starts to look like insight.

Monitoring tells you where you stand. The leverage still comes from improving the inputs.
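
To see how easily noise reads as insight, here’s a toy simulation in which the model’s true tendency to cite a brand never changes, yet small weekly samples still show “movement” (all numbers are invented):

```python
import random

TRUE_CITATION_RATE = 0.4  # assumption: the underlying behaviour is flat
RUNS_PER_REPORT = 20      # prompts re-run for each weekly snapshot

random.seed(7)  # fixed seed so the illustration is reproducible
for week in range(1, 7):
    cited = sum(random.random() < TRUE_CITATION_RATE for _ in range(RUNS_PER_REPORT))
    print(f"Week {week}: cited in {cited}/{RUNS_PER_REPORT} runs ({cited / RUNS_PER_REPORT:.0%})")
```

Every week-over-week “trend” in that output is sampling noise. A report built on samples this small will always have something to say, whether or not anything changed.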

We can observe parts of the picture. We can see where brands appear and how they’re described. What we can’t do cleanly is tie those moments directly to revenue in a way that satisfies traditional attribution models.

Being Cited Is Not the Same as Being Chosen

One thing I’ve seen repeatedly is that producing content can increase the likelihood of being cited by AI without increasing the likelihood of being recommended.

I’ve watched brands publish AI-spun content that gets picked up quickly as a source in AI-generated answers. The brand name appears. The link is there. But when the model is asked who to use or what to choose, that same brand isn’t suggested.

Being cited and being endorsed are different things.

AI systems will happily reference content that’s correct and topically relevant. Recommending a brand seems to require stronger signals: clarity about who the product is for, evidence of real-world use, and language that reflects how buyers actually evaluate options.

This matters because it reveals a structural problem with relying on content alone.

In traditional SEO, you could generally solve a visibility problem by creating the right page for the right keyword. In AI environments, creating more content often doesn’t solve the problem you’re actually trying to solve, which is shaping perception within answers.

AI engines pull from dozens of sources, most of which sit beyond a brand’s owned channels. If the broader ecosystem (reviews, industry coverage, community discussions) tells a different story than your website does, the AI will reflect that broader narrative. No amount of owned content will override it.

G2’s recent acquisition of Capterra, Software Advice, and GetApp from Gartner is a concrete example of how this plays out.

That deal gives G2 roughly six million verified reviews and two hundred million annual buyers, consolidating the infrastructure that AI systems rely on to form opinions about software. They saw this shift coming and moved to own the infrastructure before most vendors realised it mattered. That makes them kingmakers, and vendors should expect pricing to reflect that kind of position. 

The gap between being referenced and being chosen gets closed by work that doesn’t fit neatly into a monthly report: sharper positioning, stronger editorial standards, credibility signals that exist beyond your own domain. Without those, brands risk showing up in answers without ever becoming the answer.

Off-page work like PR, earned coverage, and third-party validation is no longer a nice-to-have. It’s a requirement of how these systems form opinions.

What I’m Confident About and What We’re Testing

I want to be clear about what’s proven and what isn’t.

I don’t have conclusive evidence that content built from original inputs will always outperform AI-generated content in revenue terms. What I do have are early signals, repeated patterns from past shifts in SEO, and a strong rationale for why this matters as the tactical layer flattens.

As execution becomes easier to copy, differentiation shifts toward how problems are explained and how categories are understood. AI tools are good at summarising consensus. They struggle when the source material is thin or unresolved.

That’s where original research and firsthand experience matter.

Proprietary data compounds in ways that generic content can’t: it earns citations, fuels distribution, and gives AI systems something genuinely new to reference.

We’ve spent years refining how we extract useful insights from subject matter experts. Most interviews produce generic observations. The useful ones surface language buyers actually use, problems they didn’t realise they had, distinctions that change how a category is perceived. That source material doesn’t exist anywhere else, which is why AI tools can’t easily reproduce it.

Another piece that’s difficult to replicate is editorial judgment.

We have editorial leads on the team who approach content with the same standards you’d expect from experienced journalists or authors. Some have worked in those roles before. Others are building toward them. What matters is how they think about the work.

They’re not there to write faster or hit word counts. They take something from functional to excellent.

That gap between a seven and a ten is where most content fails. It’s also where user response gets determined. Content that reads well holds attention. Content that flows naturally gets referenced and linked to. Those signals matter as much as anything technical, and often more. These are hard-to-fake signals, the things that can’t be shortcut, which is why they compound over time.

AI tools don’t see that gap. They can’t tell when an introduction drags or when a section should be cut because it breaks momentum.

We test this approach with our own content. When prospects search for terms like “best B2B SEO agencies” in ChatGPT or Perplexity, we show up consistently. Not because we gamed anything, but because we’ve invested in original research, long-form thinking, and ongoing distribution through podcasts, newsletters, and our network of clients and partners. Most prospects who reach out mention encountering us through one of these channels.

That’s intentional. It’s the same strategy we run for clients.

This work doesn’t maximise short-term traffic. It does tend to compound and resist imitation, and it aligns more closely with how real buying decisions get made.

Speed Still Matters, Just Not the Kind You Think

Clients want to see movement quickly, and that expectation is reasonable.

AI is a big part of that. It’s useful for baseline coverage and for handling labour-intensive work like internal linking. Where it falls short is anything that requires judgment about framing or positioning against alternatives.
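
As a sketch of the labour-intensive end of that spectrum, here’s a deliberately naive internal-linking pass; the URLs and copy are invented, and a real pipeline would match on more than exact title strings:

```python
# Suggest internal links wherever one page's title appears in another page's body.
pages = {
    "/topic-clusters": "Topic clusters group related pages around one theme.",
    "/internal-linking": "Internal linking passes authority between your own pages.",
    "/content-audits": "A content audit starts with topic clusters and internal linking.",
}

# Derive a matchable title from each URL slug, e.g. "/topic-clusters" -> "topic clusters".
titles = {url: url.strip("/").replace("-", " ") for url in pages}

for url, body in pages.items():
    for other, title in titles.items():
        if other != url and title in body.lower():
            print(f"{url}: consider linking '{title}' -> {other}")
```

The mechanical pass is trivial to automate; deciding which of those suggestions actually serve the reader is the judgment call AI can’t make on its own.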

Used deliberately, it accelerates learning. It helps test interest and identify where deeper work is justified.

Problems arise when that phase becomes permanent.

I’ve seen this cycle repeat often enough to recognise it. Each wave of SEO tactics works until it saturates. The ultimate guide phase. The checklist era. The SEO heist. Each time, the cleanup is expensive if you’ve chosen to go all in. 

Any given tactical advantage now degrades faster than it used to.

When efficiency gains are available to everyone at the same time, the window between “this works” and “this is table stakes” compresses. Add to that the fact that every model or algorithm update could reset the playing field. The citations you earned last quarter may not carry forward once the training data refreshes and the weighting shifts. Fixating on yesterday’s snapshot of a system that’s already moved on just doesn’t make sense.

What matters more is the speed of iteration: how quickly a team can recognise when a tactic has saturated, extract what it learned, and shift to the next approach. But agility is not the same as thrashing. The teams that endure aren’t the ones chasing every new spike in a chart. They’re the ones that can absorb volatility without being consumed by it.

When Content Becomes Table Stakes

As content production becomes easier, advantage shifts to the layers that remain hard to replicate.

Distribution beyond owned channels. Earned coverage that reinforces what your content claims. The signals that AI systems weigh when deciding who to recommend, not just who to cite.

Content still matters. But when many teams can publish similar material, advantage comes from how widely and credibly that material is reinforced elsewhere.

This requires broader coordination than traditional SEO. It touches brand, communications, and partnerships, and none of it fits neatly into a single dashboard. That complexity isn’t comfortable, but it’s where the real work sits now.

Back to the Mat

In BJJ, the moment you realise YouTube techniques don’t work against real pressure is when learning actually begins.

You start paying attention to things you couldn’t see before. Weight distribution. Grip fighting. How to create problems several moves before you attempt a submission. The fundamentals stop feeling basic and start revealing themselves as the entire game.

SEO and GEO are reaching that point. The YouTube phase worked well enough to be convincing. Now teams are encountering real pressure, and the limitations are becoming harder to ignore.

The teams that adapt won’t be the ones with better tools or more budget. They’ll be the ones that recognised the fundamentals had shifted and adjusted before the market forced the change.


Ben Major

Ben is the Director of Organic Growth Strategy at Omniscient Digital. He has spent the last 10+ years of his career agency-side, previously leading SEO and Operations at Skale, where he was employee #1. When he’s not running the muddy trails of the Cotswolds with his golden retriever or talking in the third person, Ben is an avid BJJ practitioner/junior coach, an amateur guitarist, and an extremely amateur golfer.