
As part of our work on the AppPulse Mobile project, we have been digging into hundreds of user comments across many mobile apps, trying to understand what makes an application end up with a 3-star average user rating.
Interestingly, we identified two main categories of applications: “Consistent 3” and “Accidental 3”.
“Consistent 3” ratings are all about functionality: users complain about missing features or inconvenient flows in the app. For such applications the majority of individual ratings hover around three stars, so the average rating is consistent with most of the individual comments.
The “Accidental 3” category is the more interesting one. Here the ratings split into two distinct groups. Half of the users really like the application: they are excited about the value it brings them and are happy to give it a 5-star rating. The other half complain about stability and performance issues, such as application crashes and slow loading times, and rate the app one or two stars in the store.
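To see how such a polarized distribution lands at roughly three stars, here is a minimal arithmetic sketch in Python (all the numbers are made up, purely for illustration):

```python
# Hypothetical ratings for an "Accidental 3" app (illustrative numbers only):
# half of the users love the app, the other half hit crashes or slow load times.
happy_users = [5] * 50                   # delighted users: 5 stars each
frustrated_users = [1] * 30 + [2] * 20   # users hit by stability/performance issues

ratings = happy_users + frustrated_users
average = sum(ratings) / len(ratings)
print(f"Average rating: {average:.1f}")  # Average rating: 3.2
```

Almost no individual user actually gave the app three stars, yet the store shows a 3-star average, which is exactly why the rating is “accidental”.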
These “Accidental 3” ratings are easy to understand: it is practically impossible to test your application across the whole matrix of device types, OS versions, and network conditions. As a result, some users experience severe quality issues, cannot benefit from the full value of the application, and express their frustration through low ratings in the app store.
Many tools can monitor your application’s ratings in the app store and keep you on top of how users rate it. That alone, however, is not enough. If you really want to understand why users rate your app the way they do, you need a tool that tracks your app’s real user experience and shows you where to improve. This may be exactly the tool you need to gain the missing two stars!
Michael Gopshtein is Team Manager in AppPulse Mobile R&D, HP Software.