What is Validation?

What constitutes validation is one of the first, essential questions we ask in our work with entrepreneurs using the Business Model Canvas (BMC). We also discuss the value proposition, customer targeting, and product/market fit. Yet validation of the components in the Canvas is rarely, if ever, defined.

Validation is the ultimate goal of the BMC process. Through validation we seek to determine whether there are enough paying customers to constitute a market large enough to support a business opportunity.

Entrepreneurs do not have the luxury of knowing how many paying customers they will have when they begin to pursue an opportunity. Many teachers of entrepreneurship, including Steve Blank and those in the Innovation Corps (I-Corps) program at the National Science Foundation, claim that an entrepreneur needs to talk to at least 100 customers in order to reduce the uncertainty surrounding a startup. The more confirmation achieved, the less the uncertainty. The specific number of 100 likely derives from social-science survey research, where a sample of 100 respondents is often treated as large enough to yield valid results. In reality, most entrepreneurs find that after talking to about ten customers, the answers start to repeat. So, what constitutes a sufficient number of interviews?

In the past, I have said that entrepreneurs should interview as many people as necessary to confirm the market and reduce its uncertainty to a comfortable level. Of course, that advice ignores confirmation bias. Entrepreneurs need to be mindful to avoid thinking: “My invention is great, so I’ll do anything to make the numbers believable.” Do not fall prey to your own lies, damned lies, and statistics.

In research methodology, validity is the soundness of the design of each test and of the methodology used. Validity shows that the findings truly represent the phenomenon they claim to measure. We cannot talk about validity without discussing reliability: can the test be repeated or replicated with another population and yield similar results? Is the test inherently repeatable?

Entrepreneurial validity means using sound methods to test hypotheses, gathering data grounded in observable facts that can be measured and are relevant. In addition, each test must end in a binary outcome: it either passes or fails. There is no “close enough.” As Yoda says, “Do or do not. There is no try.”

During hypothesis testing, an entrepreneur must draw a line in the sand. Ask whether your metric shows a level of success that justifies doubling down and taking the next action step. Can you tell the difference between complete failure and overwhelming success? Where does your opportunity fall?
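To make the line concrete, here is a minimal sketch in Python. All numbers are hypothetical; the point is that the threshold is fixed before the test and the outcome is strictly binary.

```python
# A minimal "line in the sand" check. The threshold and counts below are
# hypothetical; substitute the metric your own business model requires.

def passes_line_in_the_sand(positives: int, total: int, threshold: float) -> bool:
    """Return True only if the observed hit rate clears the pre-set threshold.

    The threshold must be fixed BEFORE the test is run; moving it afterward
    is exactly the self-deception this post warns against.
    """
    return positives / total >= threshold

# Example: we committed in advance to a 50% hit rate over 40 interviews.
print(passes_line_in_the_sand(19, 40, threshold=0.50))  # False: 47.5% fails; there is no "close enough"
```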

What happens if you land close to the line in the sand but do not cross it? There are two possible responses. The first comes from the norms of your industry: understanding how your competitors view this metric provides the knowledge needed to take action. The second comes from the business model itself: how many positive responses are required for the business to succeed?

Now that you understand what validity is and why it matters, make sure you understand exactly how to test for it. A test for validity must measure the extent to which a concept, conclusion, or measurement is well-founded and corresponds accurately to the real world.


Hypothesis Testing For Entrepreneurs

Hypothesis testing appears to be a simple task: write down a question, devise a methodology to test it, elicit a response, and analyze the results. Some entrepreneurial experts insist that these tests must be pass or fail; either the hypothesis is true or it is not. In my experience, pass/fail questions created without consideration of other factors are not effective.

For example, Team A reports: “Well, we thought we would get a 50% hit rate, but only got as high as 38%. That is good enough. We pass the test.” Did Team A pass the test?

The first two rules of entrepreneurship are (1) be honest with yourself and (2) learn from your mistakes. Team A just violated both. First, they rationalized their projected hit rate and were not honest with themselves about what the shortfall really meant for their company. Second, they didn’t learn from the exercise. They never found out WHY they hit only 38%, rather than their predicted 50%. That is a terrible missed opportunity. Why did they originally believe they could reach 50%, and why didn’t it happen? What needs to change? Can it be changed? Is it the test or the product? Too many important questions in this scenario will never be answered.
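One honest way to interrogate that gap is a simple statistical test. The sketch below uses Python’s SciPy; the sample size of 100 prospects is my assumption, since the example above gives only the rates.

```python
# Was Team A's 38% plausibly just bad luck around a true 50% rate?
# Rates come from the example above; n = 100 is a hypothetical sample size.
from scipy.stats import binomtest

n = 100           # hypothetical number of prospects contacted
k = 38            # positive responses observed (a 38% hit rate)
predicted = 0.50  # the rate Team A committed to in advance

result = binomtest(k, n, p=predicted, alternative="less")
print(f"p-value: {result.pvalue:.4f}")
# A small p-value (here about 0.01) says the shortfall is very unlikely to
# be sampling noise: the honest reading is "we failed; now find out WHY."
```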

One interesting model for creating more quantifiable hypothesis tests is the HOPE model, which looks at four factors (a code sketch follows the list):

Hypothesis: What is your theory? Is it both “falsifiable” and quantifiable?

Objective: Are your tests objective rather than subjective?

Prediction: What do you think you will find?

Execution: How are you going to test?
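As promised above, here is one way the four factors might be written down as a record, so the prediction is committed to before the test runs. The field names and example values are my own illustration, not a standard API.

```python
# A sketch of a HOPE-style test record. Field names mirror the four
# factors above; the example values are entirely hypothetical.
from dataclasses import dataclass

@dataclass
class HopeTest:
    hypothesis: str    # a falsifiable, quantifiable statement
    objective: str     # the objective measure, not an opinion
    prediction: float  # the number you commit to before running the test
    execution: str     # how the test will actually be carried out

demo_test = HopeTest(
    hypothesis="At least half of cold-called dentists will book a demo.",
    objective="demos booked / calls made",
    prediction=0.50,
    execution="Cold-call 40 dentists from a purchased list over two weeks.",
)
```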

The most important property of a hypothesis is that it must be “falsifiable”: your guess must be capable of being proven wrong by an experiment. If your plan is simply to see what happens, then your hypothesis will always be true.

Second, all hypotheses should be quantifiable. In other words, you must be able to predict, measure, and analyze your results. A good hypothesis includes both a question and a sound methodology for uncovering the results. After determining the question and developing your methodology, run the test and analyze the data you obtain.

Additionally, your tests must draw on a good source of data and represent your target population as accurately as possible. Your results should be objective rather than subjective.

Conducting good tests is a subject unto itself, and requires a lengthier discussion than this blog entry allows. I will save that for another day.

In my work with both scientists and entrepreneurs, the predictive element is the one most often missing from hypothesis testing. This is true even of scientists and economists who use hypothesis testing regularly. A good hypothesis test must include a predictive indicator of the results: for example, how fast an event should occur, whether there are stress points in the experiment, and where that stress should appear. Failing to quantify your results can mean the hypothesis is not completely tested and the result is incomplete. If you place a value or a number in the hypothesis, however, you can learn how close you came to hitting the mark.
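For instance, a confidence interval turns “how close did we come?” into a number. The sketch below computes a Wilson score interval using only the standard library; the counts reuse the hypothetical Team A figures from earlier.

```python
# How close was 38% to the predicted 50%? A 95% Wilson score interval
# around the observed rate answers that; the counts are hypothetical.
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for the proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

low, high = wilson_interval(38, 100)
print(f"Observed 38%; plausible true rate: {low:.0%} to {high:.0%}")
# If the predicted 50% falls outside this range, the miss is real, not noise.
```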

Without quantified hypotheses there is a tendency to bend the data to fit the desired result. In analyzing results, teams also need to distinguish causation from correlation. For example: more ice cream is sold in the summer, and more people drown in the summer, so the two must be related. Of course, one does not cause the other; summer heat drives both.
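The trap is easy to reproduce in a few lines. The monthly figures below are invented, but both series simply track temperature, which is the real driver.

```python
# A toy demonstration of correlation without causation. All monthly
# figures are invented; both series rise and fall with summer heat.
from statistics import correlation  # Python 3.10+

ice_cream = [20, 25, 40, 60, 90, 130, 150, 145, 100, 60, 30, 22]  # monthly sales
drownings = [1, 1, 2, 3, 5, 8, 10, 9, 6, 3, 2, 1]                 # monthly incidents

print(f"ice cream vs drownings: r = {correlation(ice_cream, drownings):.2f}")
# r comes out near 1, yet neither causes the other: a third variable,
# temperature, drives both. Correlation alone proves nothing causal.
```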

Scientists and statisticians also speak of the null hypothesis, one that is assumed true (e.g., in a courtroom, the defendant is presumed innocent until proven guilty), as opposed to the alternative hypothesis, a statement that contradicts the null (e.g., the defendant is guilty). The courts would rather let the guilty go free than send the innocent to jail. What I am advocating, in statistical terms, is a criterion of judgment based on probability and expressed in quantifiable statements. In the courtroom, for example, jurors are asked to determine guilt “beyond a reasonable doubt.”
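In code, that criterion is just a threshold chosen before the trial. The alpha below is my assumption; the p-value fed in is roughly the one from the Team A sketch earlier.

```python
# The courtroom analogy as a pre-registered decision rule. ALPHA is an
# assumed "reasonable doubt" threshold; pick yours before the experiment.

ALPHA = 0.05  # tolerated probability of rejecting a true null (a false conviction)

def verdict(p_value: float) -> str:
    """Presume the null ("innocent") and reject it only on strong evidence."""
    if p_value < ALPHA:
        return "reject the null: the evidence is beyond a reasonable doubt"
    return "do not reject the null: the evidence does not clear the bar"

print(verdict(0.0105))  # roughly the p-value from the Team A sketch above
```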

So, in your hypothesis testing, will your test confirm beyond a reasonable doubt that your hypothesis is true? If you tested correctly, then you know the honest answer and have just reduced the uncertainty of moving forward with your enterprise.