As many of you know, the mantra of the Business Model Canvas is to get out of the office and interview customers, partners, channels and others. Talking to experts and potential customers is the most direct way to reduce uncertainty and to gauge the value of a product or service; indeed, I believe it is the basis for all relevant qualitative research in entrepreneurship. Working actively with the Business Model Canvas has convinced me that getting out of the office and into the world is only the first small step of the entrepreneurial journey.
Real-world data collection and analysis is a key component in reducing the uncertainty of a startup. The starting point is to understand how much is currently known about the problem and what that knowledge is worth. What decision will this measurement help us make? Is that decision important enough to justify collecting more data? Will the measurement add enough information to actually improve the decision? If not, why bother measuring at all? The starting point should not be identifying what is to be measured, but reflecting on why the measurement is necessary.
The next issue in data collection is deciding what makes a good metric. A good metric must be (1) understandable and comparative (expressed as a rate or ratio), (2) important enough to collect, and (3) tied directly to an action on the original decision. The resulting data should be relatively easy to collect, consistent, usable, and able to capture information that is relevant to the company.
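To see why "expressed as a rate or ratio" matters, here is a minimal sketch (the signup and visitor figures are invented for illustration) showing how a raw count can mislead while a ratio stays comparable:

```python
# Hypothetical example: weekly signups as a raw count vs. a conversion rate.
# Raw counts are hard to compare across weeks; a rate normalizes by traffic.

def conversion_rate(signups, visitors):
    """Return signups per visitor, a comparable ratio metric."""
    if visitors == 0:
        return 0.0
    return signups / visitors

# Two weeks with different traffic: the raw counts mislead,
# the rate makes the comparison meaningful.
week1 = conversion_rate(signups=50, visitors=1000)   # 5% of visitors sign up
week2 = conversion_rate(signups=60, visitors=2000)   # 3% of visitors sign up

# Week 2 had more signups (60 > 50) but a worse rate (3% < 5%),
# which is the actionable signal.
```

The same normalization idea applies to churn, activation, or any per-customer metric: divide by the relevant base before comparing periods.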
There are a few simple rules to help an entrepreneur get started with data. The first round of data collection is usually exploratory for a startup. In exploratory research it is okay to throw darts: use the shotgun, throw spaghetti against the wall, see what sticks. At this stage, exploratory data collection may not serve a specific decision other than a process of elimination.
The next rule is to check the data collected and make sure the right questions were asked. Was the sample varied enough to be representative of the population? Did outliers skew the results? Were any assumptions made, or was any context involved, that might invalidate the test?
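A quick sanity check for the outlier question is to compare the mean with the median: a large gap between the two suggests a few extreme values are distorting the average. A small sketch, with invented interview-score data:

```python
import statistics

# Hypothetical sample of customer interview scores (scale of 1-10),
# with one extreme value that may be a data-entry error.
scores = [6, 7, 7, 8, 6, 7, 95]  # 95 is the suspect outlier

mean_all = statistics.mean(scores)      # pulled up by the outlier
median_all = statistics.median(scores)  # robust to the outlier

# A large gap between mean and median is a quick flag that
# outliers may be distorting the result and deserve a second look.
gap = mean_all - median_all
print(f"mean={mean_all:.1f}, median={median_all:.1f}, gap={gap:.1f}")
```

This is only a first-pass flag, not a formal test, but it costs almost nothing and catches the most common data-quality surprises early.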
Another question to ask about collected data is whether it constitutes a leading or a lagging indicator. Leading indicators are indicative of future events; lagging indicators follow an event and report what happened. Also consider whether the data represents a correlation or a causal relationship. A correlation does not mean that a change in one variable causes a change in the other; it only indicates that some association may exist. A causal relationship, or "cause and effect," exists when one thing or event occurs because of the other.
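Because correlation and causation are so easy to conflate, here is a minimal sketch (the ice-cream and sunburn figures are invented for illustration) showing that even a near-perfect correlation coefficient says nothing about a causal link; in this classic example a hidden third variable, hot weather, drives both series:

```python
# Hypothetical monthly data: ice-cream sales and sunburn cases.
# Both rise in summer, so they correlate strongly, but neither
# causes the other; hot weather is the confounding variable.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ice_cream = [20, 25, 40, 60, 80, 75]   # units sold per month
sunburns  = [2, 3, 6, 9, 12, 11]       # cases per month

r = pearson(ice_cream, sunburns)
# r is close to 1: a strong association, but it tells us nothing
# about which (if either) variable causes the other.
```

For a startup, the practical takeaway is to treat a strong correlation as a hypothesis to test (for example, with an experiment that varies one factor while holding others fixed), not as proof of cause and effect.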
Measurement tools and data analytics will not produce perfect decisions, but good, appropriate measurement can reduce the uncertainty around significant ones. While hypothesis testing is important in building an effective canvas, it is equally important to use suitable and valid measurement tools (the specifics of these tools will be the subject of another blog post).
Here are a few good resources to assist in the development of data skills:
How to Measure Anything, by Douglas Hubbard, focuses on measuring intangibles: the value of patents, copyrights and trademarks; management effectiveness; quality; and public image.
Lean Analytics, by Alistair Croll and Benjamin Yoskovitz, takes a good look into the quantitative side of measurement, specifically directed at entrepreneurs.
How to Start Thinking Like a Data Scientist, by Thomas Redman, is a brief HBR article on getting started.
An Introduction to Data-Driven Decisions for Managers Who Don't Like Math, by Walter Frick, explains why data matters.