With voice interaction still a relatively new field, benchmarking your voice skill can be hard; what can you expect?
Since the launch of the Amazon Echo in 2014, voice assistants like Alexa and Google Assistant have grown rapidly in popularity. Thousands of brands are now exploring these new platforms, seeking to augment existing sales and service channels and open up new lines of communication with their customers.
But what should you expect from your skill? How will you judge its success? Amazon and Google’s analytics packages will tell you how many users your skill has, but with so few official usage statistics it can be difficult to understand what these numbers really mean.
In early 2018 we worked with Channel 4 to launch the Human Test, a Turing-style test to promote season 3 of the critically acclaimed drama Humans by quizzing the show’s core fans to find out if they were human or ‘synth’.
The number of users to expect will depend on several factors: how many customers you have, how many of them own smart speakers, your promotional strategy, and the utility of your skill. A useful or entertaining skill backed by solid promotional activity will gain a lot of users, while a skill tackling a niche user problem with no promotion will see only a trickle.
For the Human Test we had over 50,000 users in the UK in the first week of the skill being live, with Alexa accounting for around 75% of all users and Google Assistant the remaining 25%, roughly consistent with the two platforms’ reported market share. It’s important to note this is on the higher end of usage for a skill and was backed by nationwide TV, radio, and social media advertising. If you don’t have the budget for major promotional activity, you can still ensure your skill is well crafted and shareable. Unlike the mobile app stores, neither Amazon nor Google offers a transparent daily or weekly ranking of ‘top skills’, and discoverability is one of the weak points of both ecosystems, but this can be overcome by making your skill compelling enough to generate a buzz of its own.
Retention is another key statistic to track. While voice skills initially had a typical one-week retention rate of 3%, this has recently risen to 6% as skill builders improve the quality of their output and consumers become more familiar with voice interaction. We achieved a retention rate of 10% for the Human Test; above the average, but still below what we would typically expect for mobile apps. There are many reasons why voice experiences achieve lower retention rates than other platforms, but that is no excuse for neglecting the user experience – there are plenty of ways to keep users coming back to your skill.
Ratings are also important, and thankfully easier to grasp. Amazon weighs ratings heavily when selecting skills for promotion and implicit invocation, so pay attention to the reviews you are receiving. Like retention, ratings for voice experiences tend to be lower than those for mobile apps, as the market is still developing and customer expectations are high. It’s also worth noting that while Alexa will typically provide the majority of your users, Google Assistant users are more likely to leave a review. Despite Alexa accounting for around 75% of all users of the Human Test, the project had twice as many reviews on Google as it did on Amazon.
Amazon and Google are both very secretive about their respective platforms, publishing few official statistics on sales, usage, retention, or revenue, with the majority of information coming from third-party research. We expect the skill stores to mature and eventually add ranking systems akin to the mobile app stores; in the meantime, you will have to rely on information from third parties, reviews from users, and your own internal metrics.
If you are looking for a partner to develop or improve your own voice experiences for Alexa or Google Assistant, get in touch, we’d love to talk.