Startups Need to Monitor Customer Usage Metrics

We were recently asked how to validate that an MVP (Minimum Viable Product) is working. By “validate an MVP,” we mean having some proof that the market wants your product, that the product has been acceptably configured, and that it is now time to start figuring out how to scale sales. If you want to connect this post to The Titanic Effect book, we are looking at the First Customer stage in Chapter 5 – The Technical Ocean.

One approach is to do market research on the MVP, taking it from internal to external testing. There are two main ways to do this market research:

  1. User Testing: Bring people into a test facility, give them the product, and then get them to share what they like/don’t like, what works/doesn’t, and what they would like to see improved. 

  2. In-Market Testing: Instead of bringing people in to use the product, you send the product out to people to use for free. Then you use surveys and other tools to monitor their usage, preferences, reactions, and suggestions for improvement.

User Testing is usually shorter and easier to implement, but it can feel artificial because people are not using the product exactly the way they would in their daily lives. Participants will quickly show you where the bumps and warts are in product function and user experience, though. In-Market Testing can be more complete, but it takes longer and costs more. And sometimes prospects take the product and never do the research, so you have to recruit extra people. The information you get back can be more helpful, though, especially if these beta users disclose whether and how they were able to break the product. Randy Hetrick of TRX did a combination: he set up product testing at local gyms on Friday evenings, and in exchange for their favorite libations, his beta users both tried to break TRX and came up with original ideas about how to use it.

One of the most underused approaches for validating an MVP, though, is monitoring how customers actually interact with the product. If the product is software, usage metrics are key. Many startups invest a lot in Google Analytics to see who is coming to their websites; however, they fail to invest in actual app/software analytics, customer usage metrics, or in-market customer monitoring. Even if these metrics do exist in the product, the startup may not be monitoring and evaluating them.
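
If the product does not capture usage data yet, instrumentation can start small. Here is a minimal sketch in Python of logging usage events as JSON lines; the file path and event names are hypothetical, and a real product might send these records to an analytics service instead:

```python
import json
import time
from pathlib import Path

# Hypothetical local event log; a real product might post these
# records to an analytics service instead of a file.
LOG_PATH = Path("usage_events.jsonl")

def track_event(user_id: str, event: str, **properties) -> None:
    """Append one usage event as a JSON line."""
    record = {
        "user_id": user_id,
        "event": event,            # e.g., "sign_in", "report_exported"
        "timestamp": time.time(),  # Unix epoch seconds
        **properties,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record that a user exported a report as CSV
track_event("user-123", "report_exported", format="csv")
```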

For example, we were working with a service provider who ran different kinds of events with different times and different costs. People signed up for each event individually, and the registration software provided an Excel report of registrants and attendees after each event. What this startup hadn't done was compile those individual event reports so they could look across events. Once they put all of the data in a customer relationship management database (think of an Excel file where the rows are people and the columns are events), they discovered a treasure trove of insights (see the sketch after this list for one way to compute them):

  • How many total unique registrants there were – 2,000 different customers

  • How many had registered for more than one event – 50% of customers

  • How many had registered for all events – <5% of customers

  • The average number of events customers registered for – 2

  • The average number of attendees by topic, length, and cost – varied
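
To make that concrete, here is a hedged sketch of how those numbers fall out of the compiled table using pandas; the file name and column names are assumptions, with one row per person-event registration:

```python
import pandas as pd

# Hypothetical compiled registration data: one row per (person, event) pair
df = pd.read_csv("registrations.csv")  # columns: person_id, event_id

events_per_person = df.groupby("person_id")["event_id"].nunique()
total_events = df["event_id"].nunique()

print("Unique registrants:", len(events_per_person))
print("Registered for more than one event:", f"{(events_per_person > 1).mean():.0%}")
print("Registered for every event:", f"{(events_per_person == total_events).mean():.0%}")
print("Average events per customer:", round(events_per_person.mean(), 1))
```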

These insights helped them figure out how to encourage repeat attendance, identify topics that needed improvement, and even discover which days and times to avoid. And they validated that there was a market for the service!

The average smartphone user has downloaded nearly 100 apps to their phone, yet tends to use only nine per day and thirty per month.[1] These statistics have been fairly stable for the last few years. Consider how a better understanding of customer usage could improve the chances that any one app, software platform, or website earns regular use.


[1] https://techcrunch.com/2017/05/04/report-smartphone-owners-are-using-9-apps-per-day-30-per-month/

Here are some good metrics to monitor (a sketch of how to compute a few of them from an event log follows this list):

  • New users – Are you attracting new users? If not, you need to figure out the barriers to adoption. It might be the product itself, or it might be how the product is being promoted.

  • Returning users – Do people sign in more than once? If not, why not? And how long are customers retained and continuing to use the product?

  • Frequency of usage – Are they using it regularly or intermittently? How much time passes between sessions?

  • Time spent – Are they doing quick check-ins or spending lots of time? Customers at these two extremes would be good candidates for more in-depth interviews to see what they are doing and why.

  • Features that are being used a lot vs. a little – You will want to explore why some features are not well used.
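
As a rough illustration, here is how a few of these metrics might be pulled from the kind of JSON-lines event log sketched earlier; the file name and record fields are assumptions, not a prescribed format:

```python
import json
from collections import defaultdict

# Load the hypothetical event log from the earlier sketch
with open("usage_events.jsonl") as f:
    events = [json.loads(line) for line in f]

timestamps = defaultdict(list)      # user_id -> event timestamps
feature_counts = defaultdict(int)   # event name -> how often it fires
for e in events:
    timestamps[e["user_id"]].append(e["timestamp"])
    feature_counts[e["event"]] += 1

# New vs. returning users
returning = sum(1 for ts in timestamps.values() if len(ts) > 1)
print("Total users:", len(timestamps))
print("Returning users:", returning)

# Frequency of usage: average days between a user's consecutive events
for user, ts in sorted(timestamps.items()):
    ts.sort()
    if len(ts) > 1:
        gaps = [(b - a) / 86400 for a, b in zip(ts, ts[1:])]
        print(f"{user}: {sum(gaps) / len(gaps):.1f} days between events on average")

# Features used a lot vs. a little
for name, count in sorted(feature_counts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {count}")
```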

One marketplace startup had lots of new users, but no one had used the software to buy anything – zero product sales. There were lots of listings but no buying. That single metric showed the marketplace wasn't working: there was a disconnect between shopping and purchasing, and the team needed to solve it to really have an MVP.
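
A quick funnel check makes that kind of disconnect hard to miss. Here is a sketch with hypothetical stage names and counts:

```python
# Hypothetical funnel counts pulled from usage data
stages = [("signed_up", 1200), ("browsed_listings", 900),
          ("added_to_cart", 40), ("purchased", 0)]

# Print each stage's conversion from the one before it;
# a sharp drop-off flags where the experience is breaking down.
for (name, n), (_, prev_n) in zip(stages[1:], stages):
    print(f"{name}: {n} ({n / prev_n:.0%} of previous stage)")
```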

So make sure you capture and analyze the data inherent in your product to see what's working. At a minimum, get internal testers to use the product. Even better, recruit beta users and ask them what they like. Best of all, put your product in the hands of beta users and monitor what they do with it. You will definitely hear some bad news – no one feels good about finding out what doesn't work. But it's better to get that news early, before the product is in the hands of the many early users who can spur on product uptake. Otherwise, you will have a major product debtberg. Not sure about the downsides of skipping external feedback? Just Google "new software failure." You will find plenty of examples of technical debt from poor product testing.