Field Notes #142: authorship and skin in the game

December 5, 2025

Ryan Law, as he is wont to do, wrote a thought-provoking LinkedIn thread:

“It sucks, but we’re gonna keep seeing a deluge of sloppified, self-promotional spam content until AI assistants change some fundamental aspect of how they source and cite information.

In their current form, AI assistants inherently incentivise the worst kind of spam. As soon as you detach information from the source that created it, you remove accountability and encourage publishers to pump out huge volumes of self-serving material.

The framing of content creation changes entirely. It becomes grist for the AI mill; the onus of providing a good experience shifts AWAY from you and your website, and over to the chatbots.

You stop having to worry about pesky things like user experience, interesting and persuasive content, even accuracy and truthfulness… and you start worrying about cramming your brand and its products into as many desirable contexts as possible.

You stop being the restaurateur or bartender, the smiling, accountable face responsible for delivering a great EXPERIENCE alongside the sustenance you offer, and you become the industrial wholesaler, some information middleman whose sole priority is faceless, large-scale output.

The world has a growing suite of tools for tracking visibility in AI outputs (hooray!), but we do not have the equivalent visibility into all the terrible user experiences brands risk creating in the pursuit of AI visibility. The impact of those will be felt on a longer timeline, and I suspect the results will be extremely painful.

I hope that AI assistants will become more selective in their sourcing, “freshness” alone won’t be so over-weighted, we’ll see some conception of EEAT imported more directly into their recommendation process.

In the meantime, I will try and remind people that maximal visibility in AI citations is a terrible, useless goal if you devalue your brand and undermine visitor trust to acquire it.”

This put into words an idea I’ve been stewing on for a while (the idea of externalities in content creation far precedes generative AI).

The economics of AI are such that supply is massively increasing due to lower barriers to creation and publication, yet demand remains more or less stable. Even as friction for users drops, the bar to stand out and garner attention keeps rising, given the potentially infinite supply of “good enough” answers.

As someone who has, for years, spoken about the value of the WHO behind the content, I still believe source credibility will be a major filter, from both a user standpoint and a platform delivery standpoint. Thus, it’s likely in your long-term interest to avoid Ryan’s “deluge of sloppified, self-promotional spam content.”

What Is AI Slop, Anyway?

AI slop is a great phrase, though it’s used very broadly. 

I don’t mean to denigrate AI at all; I’m a daily, habitual, probably addicted user, and AI workflows hit just about every corner of our agency.

AI slop, in a sense, is an implicit message that the creator does not care about the consumer. It is, so to speak, an asymmetric transfer from one party, who expends little work or risk, to another party, who presumably must work as hard or harder to discern the message.

Ignoring AI entirely, it’s simply a lazy act of delivering a clearly bad product, message, or experience. You don’t need AI to do this; it just makes it much easier! 

It’s a kind of externalized cost, in some ways similar to the ethics of leaving your shopping cart in the parking lot. It’s not against the law, and surely someone will return your shopping cart to its rightful place. And certainly it’s easier (for you) to leave your shopping cart wherever you’ve parked. But someone has to go pick it up and return it, which is an externalized cost. At scale (if everyone did this), shopping would be miserable, or stores would have to hire new employees to clean up rogue shopping carts (which increases operating expenses, and probably the price of your groceries).

This dynamic shows up all over the modern digital economy, but perhaps nowhere more clearly than in the generative content arms race. By and large, platforms have incentives to clean up willful spamming (or abandoned shopping carts), lest they create such a harmful user experience that they risk their revenue and monetization. As we’ll see, this is largely done through user engagement signals and reinforcement learning (users are the distal filters for algorithmic attention).

You’ve likely seen this in the office, too. 

As the Harvard Business Review recently argued in their provocatively titled article, “AI-Generated Workslop Is Destroying Productivity”, we are not witnessing a renaissance of efficiency but a proliferation of content for content’s sake—a glut of internal decks, Slack threads, and poorly thought-out reports whose cost is not measured in time-to-publish, but in downstream confusion, misalignment, and cognitive load.

Slop, whether in the form of thoughtless memos or SEO listicles, always leaves someone else holding the bag.

Now, at your office, you certainly have skin in the game. If you were to deliver AI workslop to a client or a colleague, you would look like a fool. But in answer engine optimization land, it’s not clear that a publisher will suffer shame or consequence for publishing 1,000 slop listicles targeting every long tail query possible like “best AI CRM for dentists in San Diego.” 

This isn’t theoretical; I sit through sales calls every day, and some percentage of marketing leaders tell me about tactics like this that they are running. For what it’s worth, we were recently approached about trading placements on each other’s “best agency” lists.

Upon looking at the potential partner’s hundreds of already published listicles – terrible quality, clearly AI-written, tons of inaccurate information – we politely bowed out.

Why Lily Ray is Probably Right

I sat at Profound’s Zero Click conference listening to a panel about the changing nature of search and AEO. While amicable, the discussion split when it came to tactics that are stupidly effective in AI search but that would be deemed “spam” in the classic SEO context. 

Again, things like mass publishing “best product” listicles to hit every single variant of a long tail query. Or Reddit astroturfing. Or (…on and on…) 

Someone brought up that OpenAI publishes no quality guidelines the way Google does. So it’s open season for marketers trying to optimize for AI.

Lily Ray, however, strongly cautioned against shady tactics, noting that a) there’s a strong intertwining of Google (and Bing) search and AI engines, so if you get zapped from a search index, you’ll likely perform worse in AI engines, and b) over the long arc, these loopholes tend to close, and often in destructive ways for the businesses and websites that have exploited them.

Nassim Taleb talks a lot about avoiding tail risk or complete ruin. To win (and to be exposed to convex optionality), one must first survive. In his words:

“The rules are: no smoking, no sugar (particularly fructose), no motorcycles, no bicycles in town or more generally outside a traffic-free area such as the Sahara desert, no mixing with the Eastern European mafias, and no getting on a plane not flown by a professional pilot (unless there is a co-pilot). Outside of these I can take all manner of professional and personal risks, particularly those in which there is no risk of terminal injury.”

In the delirious search for a silver bullet, we often forget about the risk of ruin in organic growth, but it has happened all too frequently over the years.

Lily talks about this (on X) from first-hand experience:

“Those of you who know my SEO story also know that Penguin destroyed the site I was working on in 2012. It was the first (but unfortunately not the last) time I learned a hard lesson about messing with Google’s spam policies / algorithm updates.

There is a reason I speak so much about this topic – because for every hot shiny SEO/GEO case study, there are just as many sites crashing and burning for using those same tactics because they violate Google’s (and even Bing’s) spam policies.

I will be exactly 0% surprised when we start to see a new wave of crackdowns against GEO spam in 2026.” 

One has to imagine that if a tactic is easy enough to replicate, it is easy enough to detect. And if it is easy enough to detect, and it harms the user experience, then it will get filtered out by the platforms.

Platform Incentives and the Tragedy of the Commons

Which brings us to the platforms.

It is tempting to believe that the current wave of low-quality content will persist indefinitely. That AI will usher in a Cambrian explosion of shallow material, and that platforms (Google, OpenAI, Bing) will simply index, surface, and reward whatever gets published.

But that’s a misunderstanding of platform economics.

Google doesn’t make money by indexing content. It makes money (in an overly simplistic sense) by monetizing attention. And if users stop trusting the results (if the well is too polluted) they leave.

Yes, Google could turn every SERP into a paid ad playground. And yes, they would make more money. For a time. But only if those ads still led to utility. When they don’t, the model breaks. In experimentation, this is commonly handled with guardrail metrics: thresholds on user experience and longer-term engagement that prevent short-term wins from ruining the long-term business model.

So while the tragedy of the commons can manifest, it is self-limiting. There are constraints. There are backstops. Eventually, platforms respond.

Or people do – and I’ve already, at least anecdotally, heard rumblings of a Luddite revolution of sorts. A return to IRL events, a disillusionment with social media broadly, a general exodus from the metaverse matrix (perhaps I am in my own bubble, but I tend to trust high-signal patterns, even when anecdotal).

Bad Bone Broth and Why the Storefront Still Matters

Imagine you’re selling bone broth. 

Let’s say you principally do so through the grocery store, and let’s imagine that grocery store to have a reputation for quality (say Whole Foods). 

It presumably costs money to produce bone broth, as well as to package it and place it in a prominent position on store shelves. If this bone broth company found a way to lower production costs at the expense of product quality, and simultaneously flooded the shelves with more product (plus shinier boxes and possibly deep discounts), it would likely drive more sales in the short term.

Eventually, though, user behavior (always index on the user and their behavior) would change, the distribution platform (the grocery store) would recalibrate, and Whole Foods would kick you off the premium shelf space to place a higher-value product where you once stood. Perhaps, if they are magnanimous, they would still keep your product in a back corner, so the people who love your (degraded) product can still purchase it.

Why? Because your incentives diverged from the platform’s.

The grocery store doesn’t care about your margin. It optimizes for what moves, what satisfies, what brings customers back.

Google is the same. Bing is the same. ChatGPT is becoming the same.

The more polluted their results, the more risk they bear. So unless monetization models radically change, there will be correction mechanisms.

Not because of moral virtue (certainly), but because of economic necessity.

And so your job as a marketer is not just to produce content, but to align with the incentives of your distribution channels, and most importantly, those of your audience. 

You just know the tactics will change. C’est la vie. We like principles for that reason. Because loopholes close. 

Right now, AI answers are heavily grounded in search indices – that may change. Few are talking about personalization in great detail, though I’m seeing more discussion pop up around memory, personalization, and how directional data from AI visibility tools may not capture the precision needed for your ICP. There will be new ad monetization models. 

So what’s a marketer to do?

The ultimate objective, hand wavy though it may sound, is to become the obvious recommendation. 

The default. Ubiquitous. Omnipresent. You’ve got your site, training data across the broad web, citation sources (which are, effectively, other websites answering a similar question or prompt). You’ve got to appear on as many of these as possible, cogently, accurately, and comprehensively. 

Again, in some hand-wavy sense, this means building an excellent product that solves a real pain point, having great customer support and experience, and doing things that are worth talking about (brand marketing). We cannot help a brand optimize their AI presence through small scale Reddit campaigns if their product experience is awful and generates thousands of public complaints per month, ya know?

On another angle, it means dialing in your product marketing so it is very clear (and salient) who you serve, what you do, and what features, proof points, and technical specifications matter. I know these sound like very basic practices, but they are effective even if they are boring (and as Taleb says, “Skin in the game can make boring things less boring”).

For now, here’s the tactical model we use:

  1. Be the Source
  2. Be Included in the Source
  3. Replace the Source

In creating content (being the source), here’s a heuristic: assume people will click the citations, even if they won’t. It keeps you doing things with dignity as well as integrity.

In many cases, users will read these pages. In many cases, they won’t. But by assuming they will, you introduce a level of quality that is worthy of your pride, making the content robust in the event that platform filtering gets increasingly sophisticated (and it will).

Alex Birkett

Alex is a co-founder of Omniscient Digital. He loves experimentation, building things, and adventurous sports (scuba diving, skiing, and jiu jitsu primarily). He lives in Austin, Texas with his dog Biscuit.