Field Notes #040: What People Say vs What They Do


For many years of my career, I worked on experimentation and conversion rate optimization.

If you’re unfamiliar, the process goes like this:

  • Conduct research and analysis to identify improvement areas in a digital experience (website, product, etc.)
  • Conduct research to ideate potential solutions or hypotheses
  • Run controlled experiments on your hypotheses
  • If an experiment wins, ship it live. If it doesn’t, keep trying. (A sketch of how that win/loss call gets made follows below.)
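
Here’s a minimal sketch of the stats behind that win/loss call, assuming a classic two-proportion z-test on conversion rates (all numbers are made up for illustration; real programs layer on pre-registration, guardrail metrics, and more):

```python
# Hedged sketch: calling a checkout experiment, assuming a simple
# two-proportion z-test on conversion rates. All numbers are invented.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for control vs. variant conversions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal tail, two-sided
    return z, p_value

# Hypothetical 50/50 split: 10,000 visitors per arm
z, p = two_proportion_z_test(conv_a=420, n_a=10_000, conv_b=485, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # ship if p < 0.05 and the lift is positive
```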

Research is incredibly valuable, but here’s what they don’t tell you: what people say and what people do are often very different. 

For example, I ran surveys on an e-commerce checkout flow for a client back in the day, and a majority of the answers mentioned the coupon code field. 

We worked on an experiment to raise the prominence of this field and improve the form field user experience. Conversions dropped. My hunch is that people saw the field and then left the site to go search for a coupon code.

So then we tried removing the field and only keeping a very subtle text link (“coupon code?”), and conversions increased. 

Even crazier: the most lucrative experiment I ran wasn’t backed by research at all. I saw a cool Instagram widget on a different e-commerce site, decided to try it out on my client’s site, and it crushed. 

What people say they want doesn’t always map to what they actually want. 

People don’t know what they’re willing to pay for a product until you ask them to take out their wallet. You don’t truly know who someone will vote for until they pull the lever in the voting booth. People will overstate their interest in highbrow entertainment, but Tiger King was one of Netflix’s most-watched shows ever. 

It’s not that research is useless; it’s just that we need to take it with a grain of salt and build systems that help us learn what people’s revealed preferences are. 

Here are my two heuristics on how to do that:

  1. Prioritize behavioral research
  2. Increase your alpha learning rate

Prioritize behavioral research

“The way to deduce what people want to buy is to simply observe what they DO buy!”

― Gary Halbert

Ronny Kohavi, who ran experimentation teams at Airbnb, Microsoft, and Amazon, shared his hierarchy of evidence:

[Image: Ronny Kohavi’s hierarchy of evidence]

At the bottom, you have anecdotal evidence, and at the top, you have meta-analyses of controlled experiments. 

When I’m doing SEO and content research, I use interviews, surveys, and stated preferences to add color to more objective, behavioral indicators of interest. 

Everyone hates on keyword research nowadays, but it’s truly a revealed preference. Your target audience may say they want to read about something highfalutin, but what they’re searching for is “what does ABM actually mean?” 

Another insightful research pool is forums and communities.

People are hanging out, asking questions naturally, unaware that you’re watching. So they’re honest, and their questions reflect genuine interest and curiosity.

They don’t suffer from the Hawthorne Effect, in which people change their behavior (and their answers) simply because they know a researcher is watching. 

Increase your alpha learning rate

Because everyone is an AI expert now, we should all know what a “learning rate” is in machine learning, right? 

“The learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model ‘learns.’”
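
In symbols, it’s the α in the update θ ← θ − α · ∇L(θ). Here’s a toy illustration of that update (my own sketch; the quadratic loss is made up):

```python
# Toy gradient descent: alpha is the learning rate, i.e. the step size
# taken toward the minimum on every iteration.
def gradient_descent(grad, theta, alpha=0.1, steps=100):
    """Minimize a loss given its gradient; alpha scales each update."""
    for _ in range(steps):
        theta = theta - alpha * grad(theta)  # theta <- theta - alpha * dL/dtheta
    return theta

# Loss L(theta) = (theta - 3)^2 has gradient 2 * (theta - 3); minimum at 3.
print(gradient_descent(lambda t: 2 * (t - 3), theta=0.0))  # ~3.0
```

Crank α too high and the updates overshoot the minimum; set it too low and you barely move.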

This, to me, means we increase our shipping rate. Because the more you ship, the more you learn. 

In experimentation, this means running more experiments: lowering the cost of running each one, setting guardrails for which experiments are valuable, and building a corpus of knowledge as you go. 
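
One such guardrail is a quick pre-test power calculation: if a page doesn’t have the traffic to detect the lift you’re hoping for, the experiment isn’t worth the slot. A rough sketch, using the standard two-proportion power formula (the 4% baseline and 10% lift are hypothetical):

```python
# Hedged guardrail sketch: per-arm sample size needed before a test is worth
# running. Uses the standard two-proportion formula; 1.96 and 0.84 are the
# z-values for a two-sided alpha of 0.05 and 80% power.
def visitors_needed(baseline_rate, relative_lift, z_alpha=1.96, z_power=0.84):
    """Rough per-arm sample size to detect a relative lift in conversion rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    pooled = (p1 + p2) / 2
    n = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
         + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p2 - p1) ** 2
    return int(n) + 1

# e.g. a 4% baseline conversion rate and a hoped-for 10% relative lift
print(visitors_needed(0.04, 0.10))  # roughly 39,000+ visitors per arm
```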

In content marketing, this means increased publication velocity and investing in platforms that are good for “message testing.”

I, for instance, test a lot of my new ideas (including ideas for this newsletter) on LinkedIn. It costs very little to post something there, and you get rapid feedback on what resonates and with whom. 

1. You’re Irrational: How to Avoid Cognitive Blind Spots in Qualitative Analysis – Dr. Rob Balon wrote an excellent piece on all the cognitive biases that prevent us from learning from qualitative data. 

2. Mo Data Mo Problems? When More Information Makes You More Wrong – An old essay I wrote on the perils of ‘swimming in the data’, and why more data doesn’t necessarily lead to better decisions. 

3. Kitchen Side: Learnings from Publishing 100 Podcast Episodes and How We Landed in the Top 10% of Spotify Podcasts – Totally unrelated to the above content, but we did a recap on hitting 100 episodes and it’s filled with fun learnings and wishes for the future of our podcast. 

Alex Birkett

Alex is a co-founder of Omniscient Digital. He loves experimentation, building things, and adventurous sports (scuba diving, skiing, and jiu jitsu primarily). He lives in Austin, Texas with his dog Biscuit.