7 Steps to Running a Successful MVP Experiment and De-Risking Your Next Idea (🍕#2)

How to build the smallest possible version of your product, using the least amount of resources, so you can get the most real feedback on the problem you're solving - and your possible solution.

👋 Hi, I'm Jaryd. Welcome to this week's Slice — the free weekly newsletter for PMs and founders about product, startups, growth and people.

If you’re not a subscriber already, join below. It’s free. 👇

If you enjoy this newsletter, and know someone else who also might, you can share it below👇


When reflecting on the early days at my first startup, my naivety becomes shockingly clear to me.

One of the most apparent mistakes I made as an inexperienced founder was executing on an idea just because I thought it was great, without any proper validation, and taking a full product to market before I had a single user. The foolishness of that makes me cringe. 

There was a clear financial cost to going ahead without any validated learning. I wasted over 6 months building the product and getting it ready for market; when it finally shipped, nobody used it. Much like the other 42% of innovations that fail due to long development times.

The implicit cost of not testing my assumptions initially was a year of opportunity cost — 6 months building a product I would later have to rebuild, plus 6 months I could have spent building something else.

As always, “failure” is only failure when you fail to learn. 

Thankfully I did, and here is what I wish I knew back then about validating your idea through a minimum viable product experiment. 

First, let’s define a minimum viable product. 

Eric Ries defined an MVP in his book The Lean Startup as “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort”.

In an MVP experiment you are building the smallest possible version of a product, using the least amount of resources, so you can get the most real feedback from users/customers about whether your idea will work.

1. Figure out the problem/solution set

If you have an idea, it has to have come from somewhere. Either you’ve experienced a problem personally, or you’ve seen it elsewhere, and you have an idea as to how you can solve it.

This step is the easiest, and all you have to do is write down what you believe to be the problem and what your proposed solution is.


Problem: Getting a taxi in San Francisco sucks!

Solution: An app that allows you to hail a taxi in under 10 minutes.

2. Identify and rank your assumptions

The truth is we don’t know anything until we test it. And assumptions are the foundation of knowing what we need to test in a startup idea or a new product. 

So, how do you work out your assumptions? Ask yourself this, “In order for this idea to be successful, what must be true?”

I believe that the core assumptions around any idea are (1) that the customer has the problem, (2) that your key value proposition as a solution matters, and (3) that they will pay for it.

The best way to get clear on your assumptions, and their priority to validate, is to write them down. I suggest using Google Sheets. All you need to start is three column headings: (1) Assumption, (2) Risk, and (3) Complexity. We will build on this later.

If you have several assumptions that have to be true in order for your idea to be successful, you should prioritize testing the ones that pose the biggest risk. Your riskiest assumption is the one that absolutely has to hold true, otherwise your product will fail. This is usually whether the customer actually faces the specific problem you want to solve.

Next to your list of assumptions, in the “Risk” column, rank each as High, Medium, or Low. Then, under “Complexity”, using the same scale, note how much effort and how many resources would be needed to test it.
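To make the prioritization concrete, here is a minimal sketch of that spreadsheet as data, sorted so that the riskiest assumptions come first and, among equal risk, the cheapest tests come first. The assumptions and rankings below are illustrative, borrowed from the taxi example above.

```python
# Toy version of the assumptions sheet: (assumption, risk, complexity).
# All entries and rankings are illustrative, not prescriptive.
assumptions = [
    ("Riders will pay a small booking fee", "Medium", "Medium"),
    ("People in SF struggle to get a taxi", "High", "Low"),
    ("A 10-minute pickup matters enough to switch habits", "High", "Medium"),
]

risk_rank = {"High": 0, "Medium": 1, "Low": 2}    # riskiest first
effort_rank = {"Low": 0, "Medium": 1, "High": 2}  # cheapest test first

# Sort by risk, then by how cheap the assumption is to test.
prioritized = sorted(
    assumptions, key=lambda a: (risk_rank[a[1]], effort_rank[a[2]])
)

for assumption, risk, complexity in prioritized:
    print(f"{risk:6} risk / {complexity:6} complexity: {assumption}")
```

The first row printed is the one to test first: high risk, low effort.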


3. Build testable hypotheses

Now that you have a list of assumptions written down, you can build a hypothesis (a testable statement of what you believe to be true) for each that you can test in your MVP experiment.

The difference between your assumptions and your hypotheses is that the hypotheses are actionable, have a target customer, and state an expected outcome.

If this is a new product, the format of your hypotheses would be something like this:

We believe that [target customer], will [predict action/outcome], because [reason].

If you’re a product manager working on an existing product, you’ll be iterating to improve on some metric and a more robust way to write a hypothesis is one that encapsulates the specific result you want to measure. For example, you could add to the end of the above hypothesis: 

…If we [action], this [metric] will improve.

In your spreadsheet, add a new column titled “Hypothesis” and write out the testable statement next to each assumption. Some assumptions might blend together, but that’s to be expected.


4. Establish Minimum Criteria for Success

Without setting a Minimum Criteria for Success (MCS), your experiments will lack clarity and meaning, and it will be much harder to decide whether your hypothesis is true enough and the idea is worth pursuing.

Essentially, you should consider the cost and benefit of a product/feature, identify the metrics that signal customer interest, and set goals around each KPI that reflect the point where the benefit is greater than the cost. A hypothesis is disproven if you don’t meet the MCS within a specified timeframe.
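Once the MCS is written down, the pass/fail decision becomes mechanical: compare what you measured against the thresholds you set up front. A minimal sketch, with entirely made-up metrics and numbers:

```python
# Illustrative MCS for a landing-page test (all numbers are made up).
mcs = {"signups": 100, "conversion_rate": 0.05}

# What the experiment actually measured within the agreed timeframe.
measured = {"signups": 140, "conversion_rate": 0.04}

# The hypothesis only survives if every metric meets its minimum.
passed = all(measured[metric] >= minimum for metric, minimum in mcs.items())

print("Hypothesis validated" if passed else "Hypothesis disproven; iterate or pivot")
```

Note how a strong signup count alone isn’t enough here: the conversion rate missed its minimum, so the hypothesis fails against this MCS.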

Going back to your spreadsheet, add these in a new column next to each hypothesis.

5. Choosing your type of MVP

With an MVP, the idea is to be as creative as possible. How can you do more research, for less?

A few examples of MVPs:

The Landing Page

Create a single page that explains your value proposition, with a clear call-to-action (such as joining a waiting list), and then drive traffic there. A/B test with different messages and see the response from your target audience. You will (1) get some indication if people are interested in your product’s benefits and (2) be collecting interested users to speak with.
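As a sketch of what evaluating that A/B test looks like, you might compare the conversion rate of each message once traffic has come through. The variant names and counts below are invented for illustration; with small samples you would also want a proper significance test before trusting the difference.

```python
# Made-up results from two landing-page messages for the taxi example.
variants = {
    "A: Hail a taxi in under 10 minutes": {"visitors": 500, "signups": 40},
    "B: Never wait for a cab again": {"visitors": 480, "signups": 22},
}

# Conversion rate = signups / visitors, the interest signal per message.
rates = {name: v["signups"] / v["visitors"] for name, v in variants.items()}

for name, rate in rates.items():
    print(f"{name}: {rate:.1%} of visitors joined the waiting list")
```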

The Shadow Button

Instead of building a whole new feature, show a button that appears to lead to it and, on click, displays a “coming soon” message. Track button clicks (create an event in Google Analytics or Mixpanel) as a signal of interest.

The Prototype 

As a founder with a deep interest in design, this is my preferred type of MVP. If you have design skills or a designer on your team, try creating a functional prototype of your idea. This looks and feels like the end product, and it allows users to actually play around with your “product”. You can try tools like Framer or Marvel (with whom I have no affiliation).

The Concierge Service

This is essentially the least scalable version of your product, where you manually perform the benefit of your product for a small number of people. For example, say you were the first ride-hailing app. You could give a few people your number, and tell them whenever they wanted a ride they could just SMS you (the driver of the taxi) and it would show up. You get to work closely with your customer, gain insight, and identify demand by seeing how much work you’re doing.

The Piecemeal

Instead of building a product, use off-the-shelf software tools and “piece them together” into a makeshift version of the function you need to test.

6. Executing the experiment 

As Michael Siebel, CEO at Y Combinator, suggests, an MVP shouldn’t take you more than 4 weeks to build. The idea is to move quickly and stay lean, allowing you to fail fast, learn, iterate, and run more experiments until you validate your key hypothesis.

This is not the stage to be married to your idea; it will inevitably change as you gain more insight into the problem, customer, and solution.

With an MVP, you need to be accountable for the time and resources you’re investing in it. When you start, you have an idea of what the MVP will be. But, as you’re building it and thinking about it, it’s easy to start adding more. This is known as feature creep, or scope creep.

Suddenly the 4 weeks becomes 6, then 8, then 12. 

A useful way to avoid this is to write out exactly what the MVP needs to do (the scope). Be specific, and list only the required features that you are technically able to build in 4 weeks. As a product manager, it’s useful to write out user stories and acceptance criteria for this MVP. 

While running the experiment, it’s crucial that you are still speaking to potential customers.

7. Evaluating and learning

After you’ve gathered the data from your MVP experiment, you need to compare it to your Minimum Criteria for Success.

The data you’ll get from your MVP is likely quantitative (e.g. number of sign-ups, button clicks, etc.). A great way to make your decisions is to blend that with qualitative data from your user interviews, as this helps you understand the reason (the why) behind the customer behaviour.

With all this data, you want to try to figure out what worked, what didn’t, and how you’re going to iterate on this version of the product.

Even if there was some success, hold off on green-lighting the entire idea/product. This is the time to mitigate consequential risks by making affordable, fast changes for a small user base.

Apply what you’ve learned in your first build, run another MVP experiment, and keep validating until you know you’re solving a real, commercially viable customer problem.

Every 2 weeks, I share my experiences, insights and perspectives on product, growth, people, and anything else that I’ve found helps me have a happier and more meaningful career in product management.
Subscribe now to get the latest in your inbox.
