Stuck in the Building

by Annie Lai

Posted on 13 June 2016

Originally published October 16, 2012 by startupengineering.

This is a long piece, examining biases and heuristics that undermine customer discovery. More like an essay than a blog post, but, I hope, useful for people who are trying to get the most out of talking with potential customers.

“There are no facts inside your building, so get the heck outside”

Steve Blank, in The Startup Owner’s Manual

“Gentlemen, our resources are limited….at present these resources are undertaking to speak to local people, and to act on what they find. This has led us to confusion.”

“Confusion?” says Marland.

“Confusion, sir. By relying on the testimony of individuals, we have exposed ourselves to the prejudices of those individuals, and we have thus been chasing our tails.”

Lloyd Shepherd, in The English Monster

Talking to customers early and often is the lean startup movement’s prescription to cure the disease of founder fantasy. The disease is real enough. Creative people tend to believe their ideas; insightful people have faith in their insights. Entrepreneurs are generally insightful and creative, so it’s normal for these people to believe they are right and that their product will be the next big thing. Too often, this leads to startups based on plausible but mistaken ideas about who customers are and what they want – a fatal flaw if not rapidly corrected.

How does talking to lots of customers address this problem? Three main ways:

  1. If enough customers tell you you’re wrong, you might start to believe them.
  2. With enough data, anyone can do a better job at pattern matching.
  3. If you have to talk with scores of people, you’ll be forced to move beyond your circle of friends and their friends, exposing you to more objective strangers and to the real level of initial interest potential customers have in your ideas.

All of that is helpful, just not terribly helpful. It fails to address some of the key reasons behind “false positives” – validations of hypotheses that are actually invalid. Behavioral economists have done a fairly thorough, science-based examination of how people make decisions, and have concluded that many of the tools we use for decision making simply and demonstrably lead us astray. In the customer discovery context, customer interviews will frequently seem to validate wrong hypotheses. Customers’ predictions about their future buying behavior will be wrong; customers’ insights into their own decision processes will be flawed; customers’ inferences about how other people may act will be fictional. Worse, this is not a peripheral phenomenon. You can interview fifty people, and forty of them will tell you they will buy when it’s ready; but only one will, or none. Worse still, you, the entrepreneur, are subject to the same errors and decision-making weaknesses, which compound the errors made by your potential customers.

In behavioral economics, the rules of thumb that we habitually use in making decisions are known as “heuristics.” Some heuristics lead to good decisions in some situations and bad decisions in others; some nearly always lead to bad decisions, and a few usually improve decision making. We’re commonly aware of many of these heuristics and the biases they produce, such as overconfidence, over-valuing sunk costs, being paralyzed by too many choices, hindsight bias, and ignoring the importance of base rates. The science of behavioral economics has focused on identifying and defining these heuristics, unveiling where they lead to mistakes, measuring the biases they inject into decisions, and pointing decision-makers toward strategies that minimize errors. This article focuses on five heuristics that are less well known but that play a big role in undermining the value of customer discovery:

  • The Availability Heuristic
  • The Curse of Knowledge
  • Confirmation Bias
  • The Halo Effect
  • Representativeness and its subset, the Conjunction Fallacy

Availability Heuristic

It’s not news that many mistakes get made because people are mentally lazy.[1] Amos Tversky and Daniel Kahneman were pioneers in elaborating, refining, and measuring this phenomenon, which leads us toward certain conclusions, not because they are based on the preponderance of the evidence, but instead because they are based on the evidence that comes most easily to mind. For example, we think robberies are much more frequent if we read about one yesterday, or if for some reason we have a vivid memory of one. We overestimate the frequency of terrorist attacks and celebrity divorces because examples come so easily to mind.

This heuristic affects, and can even be used to manipulate, how people see themselves. In a classic study, researchers examined their subjects’ views about how assertive they were. Think for a moment how you would answer the question: “Are you an assertive person?” To answer in an unbiased way, you would have to review a long series of interactions you’ve had with people, over many years, assess them for assertiveness, and come up with some kind of tally and scoring mechanism. Instead of doing all that, most people, when faced with this question, substitute an easier one: “What examples come to mind of me being assertive or not, and what can I conclude from them?” If you happened to have stayed quiet at two business meetings this week, and waited patiently for a table at a restaurant, and backed down in an argument with your spouse, then you would be likely to conclude that you are, in fact, a meek person – even if you were Donald Trump.

Conversely, if you have a harder time recalling something, you will underestimate it. In the study, two groups of people were asked to recall examples of being assertive, and then to assess whether they were assertive people. The first group was asked for four examples, the second group for twelve examples. The first group found it easy to remember four good examples, and they generally concluded that they were assertive. People in the second group found it difficult to come up with twelve examples, and generally concluded that they were meek.[2]

In customer discovery, both the entrepreneur and potential customers will judge value propositions, features, and the likelihood of success based on examples that spring to mind. One entrepreneur we know was trying to support a manager who was moving to a different role. The manager said he wanted a way to pass on operational information. The entrepreneur asked the manager to recall circumstances where he had passed on this kind of information. Through the discussion they realized that the ability to make and capture checklists would be very useful – so useful that the manager said he would buy a product like that. Based on this and similar interviews, the entrepreneur went ahead and developed the software. But when he went back to the potential customers, they weren’t interested in buying. The fact that potential customers could recall instances where the product would have been useful biased them toward believing that they would use it frequently. Unfortunately, those instances were too rare to justify buying and learning to use the software.

The Curse of Knowledge

While it’s easy to make mistakes because you can only remember a few examples of something, the opposite is also true – knowing a lot about their area of expertise can lead entrepreneurs into error. Psychologist Elizabeth Newton’s doctoral thesis, “Overconfidence in the Communication of Intent: Heard and Unheard Melodies”[3] has spurred two decades of research, shedding light on everything from education techniques to advertising to relationship counseling. Her work was based on a simple, easily repeatable experiment: Two people are given a list of popular songs. One person chooses a song and attempts to communicate it by tapping on the table between them, and the second person attempts to guess the song. Consistently, the tapper predicts that the listener will identify the song about half the time – a 50% accuracy rate. In fact, listeners correctly identify the song at a rate of 2.5% – twenty times less often than predicted. The disparity arises because the tapper hears the song – without the song in his head, he couldn’t tap it out accurately. But once it’s there, he cannot unlearn it. He loses the ability to listen to the taps as the listener hears them, as simply taps, so he loses the ability to predict how accurate the listener will be in identifying the song, and wildly overestimates how predictable the song is from the taps. This has broad implications.

When an entrepreneur talks about a prospective product she has it firmly in mind. She can’t unlearn what she “knows” about the product, and will overestimate, by a factor of 20, the hypothetical customer’s likelihood of understanding what it is. This problem may be even worse, because while the listeners in Dr. Newton’s experiments were trying to match the tapping to a song they were already familiar with, the customer is trying to match an unknown product to an unmet need that is itself difficult to pin down. In customer discovery interviews, the curse of knowledge leads entrepreneurs to extreme overconfidence about how well they have communicated their plans. It is very difficult for entrepreneurs to discount a customer’s comments for the likelihood that those comments are based on misunderstandings.

Confirmation Bias

In a seminal 1991 article, the psychologist Daniel Gilbert founded modern study into the mechanics of belief.[4] Gilbert brought current psychological research to bear on an argument between the philosophers Descartes and Spinoza. Descartes believed that people first perceive and/or comprehend something (an object or an idea) and then assess whether or not they believe it. Spinoza argued that the act of perceiving/comprehending involves believing, and that rejecting an assertion or questioning a perception only comes later.

The research strongly supports Spinoza. When you hear something, you automatically, unconsciously, see it as true. Critical systems, which judge the truth of statements, operate more consciously and only go into action after the words are understood and believed – comprehension and belief arrive first and together; judgment and disbelief follow. One consequence is that if people are told something and then distracted, the distraction interferes with critical thinking and they are more likely to believe what they were told. Considerable evidence shows, for example, that people are more likely to believe the claims in commercials if they are tired or distracted.

In customer discovery interviews, customers are working to understand what the entrepreneur is describing. That work involves believing that what the entrepreneur says is true. If the use scenario has multiple steps, or the product has multiple features, then the customer is likely to unconsciously stipulate the truth of the first step or feature, just so that she can comprehend it well enough to make sense of the next step or feature.

The Halo Effect

The Halo Effect was first described and tested in detail by the psychologist Solomon Asch in 1946.[5] Kahneman describes the effect as “exaggerated emotional coherence.” Asch’s experiments showed that we use first impressions, or primary impressions, as guiding principles to interpret additional information, striving to create a coherent “story” about whoever or whatever we are trying to comprehend. If you appreciate Lance Armstrong for his work with cancer patients, you will tend to interpret doping allegations as sour grapes from his lesser rivals. Conversely, if you think of him as egotistical or coldly mechanical, you’ll be more likely to think he cheated. This effect manifests itself almost instantly. In one of Asch’s key experiments, he gave subjects descriptions of two people, and asked them to write short descriptions of each one. Here are the people:

Alan: intelligent – industrious – impulsive – critical – stubborn – envious

Ben: envious – stubborn – critical – impulsive – industrious – intelligent

The descriptions are identical, but the order of presentation makes all the difference. Subjects had much more positive impressions of Alan than Ben just because they were exposed to the good traits first. Interestingly, Alan’s negative traits were interpreted as less negative in the light of the positive ones: “Well, of course Alan is critical; he’s smarter and works harder than other people, so it’s natural.” The same effect operates in reverse for Ben: “Yes, he’s industrious; he has to work hard to be better than everyone else.”

In customer discovery, the Halo Effect distorts reactions both overall and in respect to features or aspects of the plan. Potential customers can think “This entrepreneur seems really smart, her idea must be good.” They can also let a positive impression of one part of the business model color the others: “Customers will love this, so partners should be found easily.”

The Conjunction Fallacy

Tversky and Kahneman’s most famous experiment was called the “Linda Experiment.” In one version, large groups of people are presented with a description of Linda and a question. Here’s the description: “Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.”

Here’s the question:

Which alternative is more probable: 1) Linda is a bank teller. 2) Linda is a bank teller and is active in the feminist movement.

Respondents overwhelmingly pick #2 – it fits the description so much better. Of course, this is logically impossible. #2 is a subset of #1, so there is no way #2 is more probable.
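The logic can be checked with a toy simulation (the trait names and probabilities below are invented for illustration, not data from the experiment): however the numbers are chosen, the count of people who are both bank tellers and feminists can never exceed the count of bank tellers alone.

```python
import random

random.seed(0)

# Toy population: each person is independently assigned two traits.
# These probabilities are arbitrary illustrations, not data from the study.
P_TELLER = 0.05    # probability of being a bank teller
P_FEMINIST = 0.30  # probability of being active in the feminist movement

population = [
    (random.random() < P_TELLER, random.random() < P_FEMINIST)
    for _ in range(100_000)
]

tellers = sum(1 for is_teller, _ in population if is_teller)
feminist_tellers = sum(
    1 for is_teller, is_feminist in population if is_teller and is_feminist
)

# The conjunction can never be more frequent than either part alone:
# every "feminist bank teller" is, by definition, also a "bank teller".
assert feminist_tellers <= tellers
print(f"tellers: {tellers}, feminist tellers: {feminist_tellers}")
```

No matter how vividly the description fits “feminist,” adding that detail can only shrink the set of matching people – which is exactly why the added detail makes the story feel more probable while making it less so.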

Twenty-five years of experiments following up on the Linda experiment have yielded some important conclusions. From a customer discovery perspective, maybe the most important one is that adding details to a story has a contradictory effect – on one hand, it makes the story seem more likely to be true; on the other hand, it makes the story LESS likely to be true. Describing a fantasy product in detail – “It’ll do this, this and this, people will buy it here, and take it out to use when this happens…” – conjures up a coherent, reasonable scenario. The better an entrepreneur elaborates this kind of story, the more likely potential customers will buy into it. Not because it is true, but because people naturally jettison the hard question “Is each one of these assertions true?” and substitute an easier question: “Does this story as a whole make sense?” Avoiding the conjunction fallacy requires keeping things as simple as possible.


Steve Blank’s definition of a startup is a “temporary organization designed to search for a valid business model under conditions of extreme uncertainty.” It’s the “extreme uncertainty” part that amplifies the effect of every one of these fallacies and heuristics. Uncertainty is why entrepreneurs need to do customer discovery in the first place. But customers are uncertain too. They may have very powerful needs that will ultimately catapult the business to greatness, but those needs are probably poorly defined, because if they were well-mapped, then established companies would already be filling them.

Customer discovery is designed to map new territory, and new territory is exactly where people are most likely to misunderstand themselves, substitute easier questions for harder ones, buy into false but coherent stories, and leap to conclusions based on inadequate evidence. Talking to lots of customers is a start, but only a start, down the path toward a valid business model.

[1] Of course, there are good reasons for mental laziness. Perhaps the most important one is that using less effort means that people can make decisions more quickly. In numerous situations, coming to a quick decision, even if it may not be right, is preferable to coming to no decision at all.

“Availability: A Heuristic for Judging Frequency and Probability,” Amos Tversky and Daniel Kahneman, 1973.

[2] Norbert Schwarz et al., “Ease of Retrieval as Information.” http://sitemaker.umich.edu/norbert.schwarz/files/91jpspschwarzetal_ease.pdf

[3] http://www.citeulike.org/group/8357/article/9267546

[4] “How Mental Systems Believe,” Daniel T. Gilbert, American Psychologist 46(2), Feb 1991, 107–119. doi:10.1037/0003-066X.46.2.107

[5] “Forming Impressions of Personality,” Solomon E. Asch, Journal of Abnormal and Social Psychology 41 (1946). http://www.all-about-psychology.com/solomon-asch.html