What is Validation?

What constitutes validation is one of the first, essential questions we ask in our work with entrepreneurs using the Business Model Canvas (BMC). We also discuss the value proposition, targeting customers, and product/market fit. However, validation of the Canvas's components is never really defined.

Validation is the ultimate goal of the process built around the BMC. Through validation we are looking to determine whether there are enough paying customers to create a market large enough to support a business opportunity.

Entrepreneurs do not have the luxury of knowing how many paying customers they will have when they begin to pursue an opportunity. Many teachers of entrepreneurship, including Steve Blank and those in the Innovation Corps (I-Corps) program at the National Science Foundation, claim that an entrepreneur needs to talk to at least 100 customers in order to reduce the uncertainty surrounding a startup. In fact, the more confirmation achieved, the less the uncertainty. The specific number of 100 is likely derived from qualitative studies by social scientists who claim that a population of 100 makes for a valid survey. In reality, most entrepreneurs find that after talking to about ten customers, the answers start to repeat. So, what constitutes a sufficient number of interviews?
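One way to put numbers on that trade-off is a minimal sketch, assuming interview answers can be treated as a simple yes/no proportion. The normal approximation and the specific interview counts here are my own illustration, not part of the original advice; the point is that the uncertainty band shrinks only with the square root of the number of interviews.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    proportion observed across n interviews (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# With 10 interviews the uncertainty band is wide; with 100 it shrinks
# by a factor of sqrt(10) -- one quantitative reading of the
# "talk to 100 customers" advice.
for n in (10, 30, 100):
    print(f"n={n:>3}: +/- {margin_of_error(n):.1%}")
```

Running this shows roughly a 31-point swing at ten interviews versus about 10 points at one hundred, which is one plausible reason the advice converges on that larger number even though the answers start sounding the same much earlier.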

In the past, I have stated that entrepreneurs should interview as many people as necessary to reduce the uncertainty of the market to a comfortable level. Of course, that advice ignores confirmation bias. Entrepreneurs need to be mindful to avoid thinking: “My invention is great, so I’ll do anything to make the numbers believable.” Do not fall prey to your own lies, damned lies, and statistics.

In research methodology, validity is the soundness of the design of each test and of the methodology used. Validity shows that the findings truly represent the phenomenon they claim to measure. We cannot talk about validity without discussing reliability: can the test be repeated or replicated with another population and obtain similar results? Is the test inherently repeatable?

Entrepreneurial validity means using good methods to test hypotheses, obtaining data grounded in observable facts that can be measured and are relevant. In addition, each test must end in a binary result: it either passes or fails. There is no “close enough” response. As Yoda says, “Do or do not. There is no try.”

During hypothesis testing, an entrepreneur must “draw the line in the sand.” Ask whether your metric shows a level of success that justifies doubling down and taking the next action step. Can you tell the difference between complete failure and overwhelming success? Where does your opportunity fall?

What happens if you land close to the line in the sand but do not cross it? There are two possible responses. The first comes from the norms of your industry: understanding how your competitors view this metric provides the knowledge needed to take action. The second comes from the business model itself: how many positive responses are required in order to be successful?

Now that you understand what validity is and why it’s important, make sure that you understand exactly how to test for it. A test for validity must measure the extent to which a concept, conclusion, or measurement is well-founded and corresponds accurately to the real world.


Corporate Innovation – Back to The City

I recently attended a Chief Innovation Officer conference in New York. My goal was to learn more about how corporate ventures manage their innovation processes and what tools they currently use to develop innovative processes and new products. I was glad I attended because I learned a few useful things. However, some concerns I have about managing innovation were not covered at the conference.

Let me start with what was addressed:

Culture is key to creating an environment for innovation success. Many of the speakers talked about the difficulty of turning that big ship called bureaucracy toward a focus on innovation.

Many of the speakers felt that not everyone in the company needs to focus on being innovative. I personally disagree. Everyone in any company can add value to his/her job, department, and processes. Just make it easy to suggest change, and allow employees to play with new ideas and concepts. This should be rewarded, not punished, in both success and failure.

Open innovation is still daunting to many companies, but it is beginning to gain acceptance. The basic tenet of open innovation is the use of external ideas to advance a company's technology. A number of issues inhibit it: intellectual property, ownership, field of use, and confidentiality all play a restraining role. However, I was intrigued by the use of crowdsourcing to help with project work. One of the speakers found success through competitions external to the organization that use gaming techniques to entice experts to compete and help the organization find the best solutions. This provides “A”-level talent, including workers who prefer not to work steady hours, to help companies solve problems faster and cheaper than a hiring process might.

Failure as a long-term learning strategy was neither celebrated nor discussed much, because there is still a strong focus on short-term achievements. In the corporate world, companies are seeking 3-5 year payouts from innovation. This means that incentives and reward structures are geared toward execution on known outcomes rather than toward disruptive, or even iterative, innovation.

Corporate opportunity recognition is still a struggle. How far innovation can successfully deviate from current strategy into adjacent markets is a difficult decision for many large companies.

One of the more interesting points was the concept of focusing on a “quest.” Quests are driving forces for a firm's strategy that allow for innovative ideas and adjacent marketplaces; a quest is the aspirational mission of the firm. It can even allow for an oddball product line. One example provided was Red Bull, which is clearly in the refreshment market but is also known for arranging airplane racing competitions and other high-powered sporting events (such as Formula One racing and other sports ownerships), partnering with game companies (such as “Call of Duty” and “Destiny”), the HALO jump, and web marketing. The quest, “Red Bull helps more of us live our lives to the extreme,” uses storytelling combined with action, and it brings Red Bull's entire product and brand lines together.

There was also a focus on data-driven innovation, which attempts to make use of big data so that intrapreneurs in an organization can become experimental. This was quite the opposite of what I might expect: strategic innovation doesn't necessarily have data, but rather ambiguity and uncertainty.

One of the better concepts reminded me of a Kodak moment. Kodak invented the digital camera, and its failure to adapt and take on this innovation led to its downfall. The lesson: innovate or die, and don't have your own Kodak moment.

The Kodak moment is also about understanding opportunities. That is where Alexander Osterwalder's model from the lean startup movement is key. Even large companies need to learn how to evaluate when an innovation is central to their strategy.

Here is what I would like to have seen on the agenda.

For the most part, I was disappointed by how little the large companies appeared to understand about creating an innovative culture. No one talked about learning from failures, and no one really discussed that innovation is a process that must be ingrained into the culture, with a reward system for trying.

The goals of many of the Chief Innovation Officers were mostly short-term and revenue-driven. I heard “ROI on innovation” too often. This translates to small incremental wins, no home runs or disruptive innovation, and, most importantly, an unwillingness to take risks. I personally don't like the term risk when talking about innovation; I prefer “reducing uncertainty.” Risk can be measured and may fit the mindset of a large company, but the real goal is to reduce the uncertainty that cannot be exactly measured. However, uncertainty can be methodically calculated within a statistical range of probability. I concede that there are perils in trying to predict the future. We can't forecast the future, but we can work on the means and ways to get to a better one.

Jeff Bezos said, “Advertising is a tax you pay for lack of innovation.”

Hypothesis Testing For Entrepreneurs

Hypothesis testing appears to be a simple task: write down a question, devise a methodology to test it, elicit a response, and analyze the results. Some entrepreneurial experts suggest that these tests must be pass or fail; in other words, either the hypothesis is true or it is not. In my experience, pass/fail questions created without consideration of other factors are not effective.

For example, Team A reports: “Well, we thought we would get a 50% hit rate, but only got as high as 38%. That is good enough. We pass the test.” Did Team A pass the test?

The first two rules of entrepreneurship are (1) be honest with yourself and (2) learn from your mistakes. Team A just violated both rules. First, they rationalized their projected hit rate and were not honest with themselves about what it really meant for their company. Second, they didn't learn from the exercise: they never found out WHY they got only a 38% hit rate rather than their predicted 50%. That is a terrible missed opportunity. Why did they originally believe they could get 50%, and why didn't that occur? What needs to be changed? Can it be changed? Is it the test or the product? There are too many important questions in this scenario that will never be answered.
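To make Team A's question answerable at all, here is one illustrative approach of my own (the interview count of 100 is a hypothetical number, not from the scenario): an exact binomial tail probability asking how likely a 38% hit rate would be if the true rate really were the predicted 50%.

```python
from math import comb

def binom_tail_leq(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing a hit
    rate this low or lower if the true rate really were p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical numbers: 100 interviews, 38 hits, predicted rate 50%.
n, k, predicted = 100, 38, 0.50
p_value = binom_tail_leq(k, n, predicted)
print(f"P(38 or fewer hits out of 100 | true rate 50%) = {p_value:.4f}")
# About 0.01: if the prediction were right, a result this poor would
# happen roughly once in a hundred tries. The test failed, and the
# team should ask why, not declare victory.
```

Under these assumed numbers, “that is good enough” is simply false; the data contradict the prediction, which is exactly the learning opportunity Team A threw away.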

One interesting model for creating more quantifiable hypothesis tests is the HOPE model. This model looks at four factors (a short code sketch follows the list):

Hypothesis: What is your theory? Is it both “falsifiable” and quantifiable?

Objective: Are your tests objective rather than subjective?

Prediction: What do you think you will find?

Execution: How are you going to test?
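As a concrete way to hold yourself to all four factors, here is a minimal sketch. The structure and the retail pre-order example are my own illustration, not part of the HOPE model itself; the point is that writing the four elements down forces a quantified prediction and a binary outcome.

```python
from dataclasses import dataclass

@dataclass
class HopeTest:
    hypothesis: str    # the falsifiable, quantifiable theory
    objective: str     # how the test stays objective, not subjective
    prediction: float  # the number you expect to see
    execution: str     # how the test will actually be run

    def evaluate(self, observed: float, pass_threshold: float) -> bool:
        """Binary outcome: the test passes or it fails -- no 'close enough'."""
        return observed >= pass_threshold

# Hypothetical example:
test = HopeTest(
    hypothesis="At least half of interviewed retailers will pre-order",
    objective="Count signed pre-orders, not verbal enthusiasm",
    prediction=0.50,
    execution="Offer a pre-order form at the end of 100 interviews",
)
print("PASS" if test.evaluate(observed=0.38, pass_threshold=0.50) else "FAIL")
```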

The most important element of creating a hypothesis is that it must be “falsifiable.” That means your guess can be rejected by the experiment. If your plan is merely to see what happens, then your hypothesis will always be true.

Second, all hypotheses should be quantifiable. In other words, you must be able to predict, measure, and analyze your results. A good hypothesis includes both a question and a good methodology for uncovering the results. After determining the question and developing your methodology, run the test and analyze the information obtained.

Additionally, your tests must have a good source of data, as well as represent your demographic population as accurately as possible. Your results should be objective rather than subjective.

Conducting good tests is a subject unto itself and requires a lengthier discussion than this blog entry allows. I will save that for another day.

In my work with both scientists and entrepreneurs, the predictive element is often missing from hypothesis testing. This is true even of scientists and economists who use hypothesis testing on a regular basis. A good hypothesis test must include a predictive indicator of the results: for example, how fast an event might occur, and where any stress points in the experiment might be located. Failure to quantify your results may mean that the hypothesis is not completely tested and the result is incomplete. However, if you place a value or a number in the hypothesis, you can learn how close you came to hitting the mark.

Without quantified hypotheses there is a tendency to justify the data to fit the desired results. In analyzing results, teams also need to be careful to differentiate between causation and correlation. For example: more ice cream is sold in the summer; more people drown in the summer; therefore, the two must be related. Of course, one does not cause the other.
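To see how easily correlation masquerades as causation, here is a small simulation of my own with made-up monthly numbers: both series are driven by a hidden common factor (temperature), so they correlate strongly even though neither causes the other.

```python
import random

random.seed(1)

# Made-up monthly data: temperature pushes both series up in summer.
temps = [5, 7, 12, 17, 22, 27, 30, 29, 24, 17, 10, 6]      # degrees C
ice_cream = [t * 10 + random.gauss(0, 8) for t in temps]    # units sold
drownings = [t * 0.3 + random.gauss(0, 1.5) for t in temps] # incidents

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# High correlation, zero causation: temperature is the confounder.
print(f"correlation(ice cream, drownings) = {pearson(ice_cream, drownings):.2f}")
```

The correlation comes out near 1.0 by construction, yet banning ice cream would prevent no drownings. When a test shows two metrics moving together, always ask what third factor might be driving both.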

Scientists and statisticians also discuss the null hypothesis, a hypothesis that is assumed to be true (e.g., in a courtroom, the defendant is presumed innocent until proved guilty), as opposed to the alternative hypothesis, a statement that contradicts the null hypothesis (e.g., the courts would rather the guilty go free than send innocents to jail). What I am advocating, in statistical terms, is a criterion of judgment based on probability in quantifiable statements. For example, in the courtroom jurors are asked to determine “beyond a reasonable doubt” whether the defendant is guilty.

So, in your hypothesis testing, will your test confirm beyond a reasonable doubt that your hypothesis is true? If you tested correctly, then you know the honest answer and just reduced the uncertainty of moving forward with your enterprise.

Decision Making with Data and Measurement

As many of you know, the mantra of the Business Model Canvas is to get out of the office and interview customers, partners, channels, and others. Talking to experts and potential customers is the only true way to reduce uncertainty and to study the value of a product or service; I believe it is the basis for all relevant qualitative research in entrepreneurship. As I work actively with the Business Model Canvas, however, I am convinced that getting out of the office and into the world is only the first small step in the entrepreneurial journey.

Real-world data collection and analysis is a key component of reducing the uncertainty of a startup. The starting point is to understand how much is currently known about the problem and what that knowledge is worth. What decision will this measurement help us make? Is the decision important enough to justify collecting more data? Will sufficient additional information be gained from the measurement exercise? If not, why bother to measure? What additional value will the measurement add to the decision? All of these are crucial considerations. The starting point should not be identifying what is to be measured, but reflecting on why the measurement is necessary.

The next issue in data collection is deciding what makes a good metric. A good metric must be (1) understandable and comparative (shown as a rate or ratio), (2) important to collect, and (3) able to lead to an action directly related to the original decision. Thus, the data behind the metric should be relatively easy to collect, consistent, usable, and able to capture information that is relevant to the company.
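As a small illustration of point (1), with hypothetical numbers of my own: a rate is comparable across periods in a way a raw count is not.

```python
# Raw counts mislead: 120 signups in May beats 90 in April only if
# traffic held steady. A ratio (conversion rate) is comparable.
months = {
    "April": {"visitors": 1_000, "signups": 90},
    "May":   {"visitors": 2_000, "signups": 120},
}

for month, d in months.items():
    rate = d["signups"] / d["visitors"]
    print(f"{month}: {d['signups']} signups, conversion = {rate:.1%}")
# April converts at 9.0%, May at only 6.0% -- the "bigger" month is
# actually the weaker one, which is exactly the action signal a raw
# count would have hidden.
```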

There are a few simple rules to help an entrepreneur get started with data. The first set of data is usually exploratory for a startup. Exploratory research means it is okay to throw darts: use the shotgun, throw spaghetti against the wall, see what sticks. At this stage, exploratory data collection may not be tied to specific decisions other than the process of elimination.

The next rule concerns checking the data collected and making sure that the right questions were asked. Was the variance of the sample population diffuse enough to provide a good sampling? Did outliers have any effect on the results? Were any assumptions made, or was any context involved, that might invalidate the test?
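One minimal way to run those checks, offered as an illustration rather than a prescription (the interview scores below are invented), is to look at the sample's spread and flag outliers with the usual 1.5x interquartile-range rule:

```python
import statistics

# Hypothetical interview scores on a 0-100 willingness-to-pay scale.
sample = [62, 58, 71, 65, 60, 68, 64, 59, 97, 63]

mean = statistics.mean(sample)
stdev = statistics.stdev(sample)
q1, _, q3 = statistics.quantiles(sample, n=4)  # quartile cut points
iqr = q3 - q1
outliers = [x for x in sample if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(f"mean={mean:.1f}, stdev={stdev:.1f}")
print(f"outliers by the 1.5*IQR rule: {outliers}")
# Re-run the analysis with and without the outliers; if the conclusion
# flips, the test was not robust enough to act on.
```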

Another question to ask about collected data is whether it constitutes a leading or a lagging indicator. Leading indicators are indicative of future events; lagging indicators follow the event and report what happened. Also consider whether the data represents a correlation or a causal relationship. A correlation does not mean that one variable, or a change in a variable, causes the other; it only indicates that some type of association may exist. A causal relationship, or “cause and effect,” exists between two things or events when one occurs because of the other.

Measurement tools and data analytics will not produce perfect decisions, but good and appropriate measurement can reduce the uncertainty around significant decisions. While hypothesis testing is important in building an effective canvas, it is also important to use suitable and valid measurement tools (the specifics of these tools will be another blog post).

Here are a few good resources to assist in the development of data skills:

How to Measure Anything by Douglas Hubbard focuses on measuring intangibles: the value of patents, copyrights, and trademarks; management effectiveness; quality; and public image.

Lean Analytics by Alistair Croll and Benjamin Yoskovitz takes a good look at the quantitative side of measurement, specifically directed at entrepreneurs.

How to Start Thinking Like a Data Scientist by Thomas Redman is a brief HBR article on getting started.

An Introduction to Data-Driven Decisions for Managers Who Don’t Like Math by Walter Frick explains why data matters.