Assumption testing - right or wrong?
Once an MVP (minimum viable product) has been launched and its market relevance validated, the next urge we have as developers and marketers is usually to add new features. No matter how well a product does initially, the assumption is that if we build new features, new users will come. Unfortunately, this common cycle in product innovation ignores one of the key factors for success: assumption testing.
If we’re honest, how we develop a product is usually based on our own assumptions, a method that is flawed from the outset. When you build a product to respond to a particular customer need, you assume that you know how the customer is going to use it. In reality, it is often the unexpected way a customer uses a product that reveals its true potential. Instagram, which started life as the check-in app Burbn, and YouTube, which was originally a video service for the dating market, are both good examples of this. If the YouTube founders hadn’t realised that their users were more interested in seeing Janet Jackson’s wardrobe malfunction than in ‘hooking up’ with other singletons, who knows whether it would ever have evolved into the success story it is today.
Assumption testing ensures that the decisions we make are based on how the customer actually behaves, rather than how we think they should behave.
Take Twilert, our Twitter monitoring app, as an example. We had been running it successfully as a monetised product since 2013, but had always aspired to develop it further.
When we sat down to look at the product redesign, we were keen to begin adding new features, but our analysis showed that while we were averaging around 1,000 trial sign-ups a month (all of our plans start with a 15-day free trial), only 1.1% were converting into paying customers. When you see stats like that, it’s difficult to justify the development of new features.
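To make the scale of the problem concrete, here is the arithmetic behind those figures (the numbers are the averages quoted above, not exact monthly data):

```python
# Rough funnel arithmetic using the averages quoted above.
monthly_trials = 1000    # trial sign-ups per month
conversion_rate = 0.011  # 1.1% of trials become paying customers

paying_customers = monthly_trials * conversion_rate
print(f"roughly {paying_customers:.0f} new paying customers per month")
```

In other words, roughly eleven paying customers out of a thousand trialists each month.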
Naming our assumptions
Knowing our product and being completely honest about the areas we thought could use improvement, we decided that our primary assumptions on the low conversion rates were:
Trialists don’t understand how to use Twilert
Trialists are not seeing any results because they’re not setting up their searches effectively (and therefore they don’t think it works)
Users need more features to justify the spend
To validate these assumptions we decided to look at a large section of trialists over a period of time and find out exactly what they were doing when they logged in.
What we learnt:
Aside from finding out that there were a lot of Justin Bieber fan accounts using Twilert, our key learnings were:
12% of users weren’t setting up their search alerts correctly
34% of trialists abandoned their accounts moments after logging in and didn’t progress to set up any alerts
89% of users were only setting up a single Twilert
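Measuring a funnel like this comes down to counting where each trialist drops out. The sketch below shows the idea; the records and field names are purely illustrative, not Twilert’s real data or schema:

```python
# A minimal sketch of the trial-funnel analysis described above.
# The records and field names are hypothetical, for illustration only.
trialists = [
    {"user": "a", "alerts_created": 0},  # abandoned right after logging in
    {"user": "b", "alerts_created": 1},  # set up a single alert
    {"user": "c", "alerts_created": 1},  # set up a single alert
    {"user": "d", "alerts_created": 3},  # set up multiple alerts
]

total = len(trialists)
abandoned = sum(t["alerts_created"] == 0 for t in trialists)
single = sum(t["alerts_created"] == 1 for t in trialists)

# Each share answers one of the assumptions being tested.
print(f"abandoned after login: {abandoned / total:.0%}")
print(f"only one alert set up: {single / total:.0%}")
```

The point is that each assumption maps to a measurable share of users, so it can be confirmed or rejected with data rather than opinion.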
While some of our original assumptions were correct, others were clearly misplaced. We had assumed that the main cause of the low conversion rate was user error: customers simply didn’t understand how to use Twilert properly. That was still an issue, but with 34% of trialists abandoning ship moments after login, the problem seemed to begin even earlier in the onboarding process. The data also exposed a flaw in our business model: 89% of trialists were only setting up one Twilert (which they could keep for free anyway), suggesting we needed to do more to educate users on the different ways to use Twitter search monitoring for lead generation.
This led to a complete rethink of the onboarding process and the way in which we engage and educate users. Without this process, we could easily have gone down a road that led to a ‘new and improved’ product which still didn’t help customers use it successfully.
The key takeaway from this is that while we may think we know our audience and their needs, it’s quite possible that we don’t. Proceeding with changes based on assumption is like trying to run without tying your shoelaces: you may make it off the starting line, but somewhere along the way you’re going to fall down.
At every stage of a product’s life it’s important to look at what you believe to be true and then test it against what is actually true, by following the user journey, speaking to customers and running user tests. Only through this process can we be sure that the product we’re building is for the user, and not just for our own assumptions.