I want to follow up on my statement on data integrity in response to the SEC's securities fraud charges against a competitor of ours. As I said before, we are proud to put forward high-quality, accurate data by taking a principled and transparent approach. Everyone at Apptopia has always been committed to sourcing and modeling our data in an ethical, responsible, and sustainable way.
I was recently put in contact with Integrity Research, an advisory firm for the investment research industry, to discuss what we’re doing at Apptopia from a compliance standpoint. You can read the article that came from that here.
I believe we have the most comprehensive compliance program in our industry, and in true Apptopia fashion, we’d love to publicly lay it out for all to see. Here are the measures we’ve put in place to ensure our customers, our data and Apptopia are all operating above board at all times:
What procedures does Apptopia use to protect PII (personally identifiable information) in the data you collect?
We do not come across PII in any of our data; it is not how we build our models. Data we collect is aggregated at the app level, not the user level. The data shared with us by our App Developer Partners contains no PII whatsoever. Moreover, when we use App Developer Partner data to train our algorithms, it is both anonymized and aggregated. In this instance, we anonymize the specific app: instead of training the model on data from "JetBlue" (a made-up example), we train it on "data on an app in the Travel category" with a specific rank and country value. Our competitors collect data in other ways in which they do come across PII, and the burden is on them to explain how they protect or hide that information.
Do companies submitting proprietary information about their apps to Apptopia require that the data be anonymized? If so, how do you ensure that those commitments are met?
We commit to it being anonymized and stored in an aggregated manner. Data is anonymized at the country / category / store level. Our models do not know which specific app(s) are being used to train them. For example, if we were using data from the JetBlue app, the model would only know it is a top-ranked travel app in country X. The data our App Developer Partners share with us is their own data on their own app. It is shared with us via User Roles & Privileges within the Apple App Store and Google Play Store, which were designed for this exact purpose (read-only access to analytics data).
In addition, users explicitly grant us the rights to use this data in the way we do. Above and beyond all of this, we review the app stores' Terms of Service on a quarterly basis to ensure compliance. As a result, we feel very comfortable providing these representations. Neither the data we collect nor the data our Data Partners (app developers) collect in this instance is at the user, consumer, or device level. All of the data is aggregated at the app / day / country level. It is collected on behalf of the app developers by Apple and Google, and it is covered by the agreements app store users consent to when they first set up their phones.
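The anonymization described above can be sketched in a few lines of code. This is a hypothetical illustration, not Apptopia's actual pipeline or schema: the field names (`app`, `country`, `category`, `rank`, `downloads`) and the aggregation key are assumptions chosen to mirror the JetBlue example, where app identity is dropped and only country / category / rank survive into training data.

```python
from collections import defaultdict

def anonymize_records(records):
    """Reduce per-app analytics records to anonymized, aggregated rows.

    `records` is a list of dicts like:
      {"app": "JetBlue", "country": "US", "category": "Travel",
       "rank": 12, "downloads": 5000}
    (illustrative field names, not a real schema).
    """
    aggregated = defaultdict(int)
    for r in records:
        # Drop the app name entirely; keep only aggregate-level keys.
        key = (r["country"], r["category"], r["rank"])
        aggregated[key] += r["downloads"]
    # Training examples carry no app, user, or device identifiers.
    return [
        {"country": c, "category": cat, "rank": rank, "downloads": total}
        for (c, cat, rank), total in aggregated.items()
    ]
```

A model trained on the output sees only "a top-ranked travel app in country X," never which app produced the numbers.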
What are the key processes you have in place to protect against SEC enforcement actions?
We have gone above and beyond our risk assessment and what we’ve seen others in our industry do:
- Our entire board is included in our compliance process
We have many different investors represented on our board. And while our board very much wants Apptopia to be successful (and would profit from it), one major unifying factor for all of our investors is that Apptopia is just one small piece of their fund. In the event we were doing anything unethical, or anything that put the company at risk, their allegiance would be to their fund and LPs, not to Apptopia. As a result, we've created an Audit and Compliance Committee made up of three board members (each from a different firm, including one former public-company CFO). This committee will not only help regularly review our standard policies but will also be leveraged heavily in some of the protocols outlined below.
- Whistleblower Hotline
We’ve engaged Navex Global to set up an international whistleblower hotline. This allows our team to anonymously submit complaints or concerns of any kind via phone, email, or web. These anonymous complaints are shared with HR, the CCO, and other members of the leadership team. We’ve taken this one step further and included our entire Audit and Compliance Committee in the whistleblower process. This way we have checks and balances to ensure nobody on the leadership team can minimize complaints being submitted.
- Model Change “Red Alerts”
In our line of work, for MNPI (i.e., app developers’ real app analytics data) to ever be used incorrectly, there would need to be changes made to the core modeling / estimation codebase. Our new “Red Alert” is meant to further reduce the risk posed by senior bad actors in the company. In the event that any changes are made to our estimation algorithm codebase (including planned changes), an alert is sent to our CCO, CFO, HR department, and our Audit & Compliance Committee (three separate board members).
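The core of a "Red Alert" check like the one described above is a simple path filter over the files a commit touches. The sketch below is purely illustrative (not Apptopia's actual tooling): the protected directory names and recipient roles are assumptions standing in for the real estimation codebase layout and distribution list.

```python
# Assumed repository layout: the estimation/model code lives under
# these prefixes. Any commit touching them triggers the alert.
PROTECTED_PREFIXES = ("estimation/", "models/core/")

# Roles from the protocol above (CCO, CFO, HR, and the three-member
# Audit & Compliance Committee), not real addresses.
ALERT_RECIPIENTS = ["cco", "cfo", "hr", "audit-compliance-committee"]

def red_alert_recipients(changed_paths):
    """Return who must be notified, given the file paths a commit changed.

    Returns the full recipient list if any protected path was touched,
    otherwise an empty list (no alert needed).
    """
    touched = [p for p in changed_paths if p.startswith(PROTECTED_PREFIXES)]
    return ALERT_RECIPIENTS if touched else []
```

In practice a hook like this would run in CI on every merge, so that even planned, legitimate changes to the estimation codebase generate a notification the committee can review.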
- Very Negligible Volume of Public Company Data
We have a risk assessment performed regularly (at least annually) by a global law firm to understand our business risks, including compliance risk. We’ve been advised that we have an extremely low risk profile because the amount of data from public companies used to train our models is very small. As of Oct. 1, 2021, we found that ~0.002% of our training data is associated with public companies. We’ve been advised that this is negligible and reduces our overall risk dramatically.
It took us a long time to build our data and our company the way we did. It was a painful, slow, and expensive process, not like many of the fast-growth stories you hear from SaaS companies out of Silicon Valley. But we wanted to build a sustainable and trustworthy business. Our way has worked, and it’s encouraging and validating to know you can produce a data product that is a leading indicator for public companies in an above-board manner.
Hopefully we will see greater transparency in the industry and better questions being asked by everyone involved. Some hedge funds are smaller than others and do not always know every question to ask. We proactively send our full DDQ to every company we may do business with so they know everything, regardless of the comprehensiveness of their pre-planned questions.
We believe in being the partner you can trust, one that can grow with you. And that means never doing anything to put you at risk.