Over the past six months, we’ve been working overtime to improve our Download, Usage, and Revenue models. And if we’re being honest, the work never really stopped: our engineering team began working on Carrigain (today’s release) the day after we announced a 1,200% increase in estimate accuracy back in January.
Today, we’re thrilled to share that the Carrigain model updates are now live.
Here’s everything you need to know.
What’s Behind the Improvements
Ultimately, two key factors contributed to the success of this model release:
1. More Data
We’ve invested aggressively in collecting more data from app developers and in partnering with key SDKs, expanding both the breadth and depth of our training data set. This larger data set enables us to significantly improve the accuracy of our estimates.
2. Age, Experience, and Wisdom
We’re approaching our 7th year in business, and most of our data and engineering teams have been with us for at least 5 of those 7 years. We’ve learned a lot over that time and made a TON of mistakes. We’ve built plenty of overly complex models and tried every sexy machine learning algorithm in the book. As with fine wine and good scotch, we’ve found the sweet spot between complexity and simplicity that produces the most consistent results.
Unfortunately, many of these learnings and modeling approaches are proprietary and cannot be shared here. What you should know is that with Apptopia you are investing in an experienced team that is comfortable looking in the mirror and making impactful changes.
Where Our Estimates Improved
1. In-App Purchase (IAP) Revenue
Previous models were dramatically underestimating IAP revenue worldwide. This was true for all apps in all countries, though it may have felt more extreme at the top of the market due to the sheer revenue volume of the biggest apps.
Our analysis shows that previous models were underpredicting by the following percentages:
Top 100 Ranked Apps = 65.86% low
Top 250 Ranked Apps = 58.61% low
Top 500 Ranked Apps = 54.03% low
Top 1500 Ranked Apps = 37.47% low
New models are seeing the following accuracy:
Under 10% margin of error = 65% of apps
Under 20% margin of error = 85% of apps
Under 35% margin of error = 98.3% of apps
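To make the accuracy figures above concrete, here is a minimal sketch of how a margin of error and the share of apps falling under a given error threshold can be computed. The function names and the (estimate, actual) pairs are hypothetical illustrations, not Apptopia’s actual methodology or data:

```python
def margin_of_error(estimate, actual):
    """Absolute percentage error of an estimate versus the true value."""
    return abs(estimate - actual) / actual

def share_within(pairs, threshold):
    """Fraction of (estimate, actual) pairs whose error is under `threshold`."""
    within = sum(1 for est, act in pairs if margin_of_error(est, act) < threshold)
    return within / len(pairs)

# Hypothetical (estimate, actual) revenue pairs for a handful of apps.
pairs = [(95, 100), (82, 100), (118, 100), (70, 100)]
print(share_within(pairs, 0.10))  # share of apps under a 10% margin of error -> 0.25
```

Statements like “Under 20% margin of error = 85% of apps” correspond to `share_within(pairs, 0.20)` evaluated over the full population of apps.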
2. Engagement and Usage
Previous models were overestimating our Engagement Index. As a refresher, Engagement Index = DAU / MAU; the goal of this metric is to help you understand how engaged users are and how frequently active users return to the app.
Previous models were producing Engagement Indexes between 70% and 90% for the vast majority of apps, which would mean users were opening those apps almost daily. While this is absolutely the case for some key apps (e.g., Snapchat, Facebook, Spotify), it is not realistic for the vast majority of apps.
After digging into the cause, we found that our Monthly Active User estimates were actually rather accurate; the issue was that we were substantially overestimating Daily Active Users (especially for small and mid-sized apps).
We’ve adjusted this and are now seeing much more accurate DAU metrics, which in turn produces a more realistic Engagement Index across the board.
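The definition above translates directly into code. The figures below are hypothetical, chosen only to illustrate the ratio:

```python
def engagement_index(dau, mau):
    """Engagement Index = DAU / MAU: the share of an app's monthly
    active users who are active on a given day."""
    return dau / mau

# Hypothetical app with 150k daily and 1M monthly active users.
print(f"{engagement_index(150_000, 1_000_000):.0%}")  # -> 15%
```

An index near 90% implies near-daily usage, which is why values in that range across the board were a red flag that DAU was being overestimated.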
3. Emerging Markets
Most app intelligence vendors can estimate performance in consistent, mature markets like the United States without much difficulty. The real challenge is creating predictions in rapidly changing markets such as Brazil or China.
In Carrigain, we’ve been able to significantly improve data quality in emerging international markets. See below for which countries you can expect major gains in accuracy:
Downloads, DAU, & MAU
In-App Purchase Revenue
4. Improved Insight For “Mega Apps”
Mega Apps are apps that march to the beat of their own drum. They include Uber, Facebook, Snapchat, Instagram, Candy Crush, etc. We call them Mega Apps because their metrics and performance are such an anomaly that we need to build separate models for them altogether. For instance, Facebook’s mobile app retention is so much higher than any other app’s that there is no way to accurately represent Facebook’s active user base without treating it as its own special case.
One problem we had previously was rather flat, straight-line performance for these key apps. For instance, Uber has been the #1 Travel app for 358 of the last 365 days (and was #2 for the other 7). It’s challenging to model growth and movement for Uber when it’s so consistently #1.
We’ve built a new model for Mega Apps that lets us better understand their growth and declines by comparing them more closely to their peers (i.e., other Mega Apps rather than, in Uber’s case, other travel apps). This has allowed us to better recognize important trends in these apps.
What To Expect Next
We’ve had two additional wins come out of Carrigain that you will benefit from in future releases.
1. Faster iteration speed
We have improved the speed of our model calculations nearly 10x and can now recalculate all 4 years of historical data, for every data point, in under 48 hours. This matters because it allows us to push out small model wins and updates much more regularly, ensuring your data is always fresh and of the highest quality we can provide.
2. More historical data
We know that trend analysis is essential for many of our key customers, so with this release we’ve calculated all data going back to January 1st, 2015. This historical data is available immediately to all of our API & Enterprise customers, and we are actively working to make the same benefit available within our web platform.
How We Compare