Over the past few weeks, we have presented and built a probabilistic attribution model that will allow us to continue optimizing marketing spend for maximum long-term return on ad spend (ROAS) once iOS 14 effectively deprecates the IDFA. Today, we are excited to share some preliminary validation results.

AlgoLift Attribution model recap

We use three data sets to probabilistically attribute installs:

  1. Anonymous user-level in-app data: customer-generated user ID, revenue data, in-app events
  2. SKAdNetwork data: campaign ID, source-app-id, ConversionValue
  3. Deterministic attribution from MMPs: opted-in iOS 14 users and users identified by other means within Apple’s terms of service

More details on our approach to the probabilistic attribution model are here, and our usage of ConversionValue is here.

To be clear, we are not using IP addresses in our attribution model, nor are we assigning users to campaigns 1:1.
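To picture what non-1:1 attribution means in practice, here is a minimal sketch (with invented field names and numbers, not AlgoLift's actual schema): each anonymous user carries a probability distribution over candidate campaigns, and campaign-level revenue is the probability-weighted sum over users.

```python
# Fractional (non-1:1) attribution sketch: each user is assigned a
# probability distribution over candidate campaigns rather than a single
# campaign, and revenue is attributed in proportion to those weights.
# All names and values here are illustrative.

users = [
    {"revenue": 40.0, "campaign_probs": {"cmp_1": 0.7, "cmp_2": 0.3}},
    {"revenue": 10.0, "campaign_probs": {"cmp_1": 0.2, "cmp_2": 0.8}},
]

attributed = {}
for user in users:
    for campaign, p in user["campaign_probs"].items():
        # Each campaign receives its probability share of the user's revenue
        attributed[campaign] = attributed.get(campaign, 0.0) + p * user["revenue"]

print(attributed)  # → {'cmp_1': 30.0, 'cmp_2': 20.0}
```

Because each user's campaign probabilities sum to 1, total attributed revenue still equals total observed revenue; no user is ever pinned to a single campaign.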

We tested the power of the AlgoLift probabilistic attribution model based on its two intended uses: campaign revenue projection and optimal budget allocation.

Campaign revenue projection

To evaluate the effectiveness of probabilistic campaign revenue projection, we compare revenue projections for campaign/install-date cohorts under the probabilistic attribution model against the “ground truth” of the deterministic attribution that mobile measurement partners (MMPs) provide today, and calculate the resulting error rate.

Of course, there will be some variance in the campaign-cohort attributed revenue error rate, and so it makes sense to define a key performance indicator (KPI) that encompasses the average performance of the attribution model across campaigns. There are many reasonable choices for this; one possible choice and the one we use here is the weighted mean absolute percentage error (WMAPE), which weights percentage error in the attributed projected revenue by install count.
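The WMAPE calculation described above can be sketched as follows; the field names are illustrative, not AlgoLift's actual schema.

```python
# Weighted mean absolute percentage error (WMAPE) across campaign cohorts:
# the absolute percentage error of each cohort's probabilistically
# attributed projected revenue, weighted by that cohort's install count.

def wmape(cohorts):
    weighted_error = sum(
        c["installs"] * abs(c["prob_revenue"] - c["det_revenue"]) / c["det_revenue"]
        for c in cohorts
    )
    total_weight = sum(c["installs"] for c in cohorts)
    return weighted_error / total_weight

# Example with invented numbers: deterministic ("ground truth") vs.
# probabilistically attributed projected revenue per cohort
cohorts = [
    {"installs": 1000, "det_revenue": 5000.0, "prob_revenue": 5200.0},
    {"installs": 250, "det_revenue": 900.0, "prob_revenue": 800.0},
]
print(f"WMAPE: {wmape(cohorts):.1%}")  # → WMAPE: 5.4%
```

Weighting by install count keeps large cohorts from being drowned out by noisy percentage errors on small ones.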

The table below shows the resulting WMAPE summarized by app monetization type. Results are shown for two levels of granularity: campaign/install-date cohorts and more aggregated channel/install-week cohorts.

Weighted mean errors for attributed projected revenue at two levels of cohort granularity

At the channel/week cohort level, which represents a typical minimum cohort size for determining optimized channel allocations, the probabilistic attribution model attributed projected revenue per channel with high accuracy. This accuracy is critical to our ability to continue providing optimized channel allocations via our Intelligent Budget tool.

Below is an example of the d365 predicted LTV by channel for a mobile app. The graph compares predicted LTVs using the probabilistic attribution model to those leveraging deterministic attribution. It’s extremely encouraging that the difference in pLTV across channels is consistently small: the probabilistic method attributed projected revenue across channels with a high level of accuracy.

Optimal budget allocation

The next question to answer is how using our probabilistic attribution model affects algorithmic budget allocations across channels and campaigns.

To answer this question, we used the following procedure:

  1. Optimize budgets using existing deterministic attribution to output optimized budget allocations across channels.
  2. For the same goals and constraints, optimize budgets via the probabilistic method instead.
  3. Measure the “optimality loss” of budgets associated with using probabilistic instead of deterministic inputs. This can be measured as a simple percentage reduction in objective function value.
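The three-step procedure above can be sketched with a toy allocator. Everything here is illustrative: the greedy allocation and the square-root response curves are stand-ins for the actual optimization, and all numbers are invented.

```python
import math

# Step 1/2: allocate a fixed budget across channels using either
# deterministic or probabilistic revenue estimates.
def allocate(budget, curves, step=100.0):
    """Greedy allocation: repeatedly give `step` dollars to the channel
    with the highest marginal projected revenue under `curves`."""
    spend = {ch: 0.0 for ch in curves}
    remaining = budget
    while remaining >= step:
        best = max(
            curves,
            key=lambda ch: curves[ch](spend[ch] + step) - curves[ch](spend[ch]),
        )
        spend[best] += step
        remaining -= step
    return spend

def revenue(spend, curves):
    return sum(curves[ch](s) for ch, s in spend.items())

# Deterministic ("ground truth") diminishing-returns curves
det = {"channel_a": lambda s: 300 * math.sqrt(s),
       "channel_b": lambda s: 200 * math.sqrt(s)}
# Probabilistic estimates of the same curves, with some attribution error
prob = {"channel_a": lambda s: 320 * math.sqrt(s),
        "channel_b": lambda s: 170 * math.sqrt(s)}

budget = 10_000.0
det_alloc = allocate(budget, det)    # step 1: deterministic inputs
prob_alloc = allocate(budget, prob)  # step 2: probabilistic inputs

# Step 3: evaluate both allocations under the ground-truth curves;
# optimality loss is the percentage reduction in objective value.
loss = 1 - revenue(prob_alloc, det) / revenue(det_alloc, det)
print(f"optimality loss: {loss:.1%}")
```

The key design point is in step 3: the probabilistic allocation is scored against the deterministic (true) response curves, so the loss captures only the cost of acting on noisier inputs, not a change in the objective itself.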

The measured optimality loss was approximately 8%, the error rate between the probabilistic and deterministic budget outputs. This suggests that moving to a probabilistic model will not significantly reduce the efficacy of automated budgeting decisions made by our algorithms. This methodology can also be extended to any granularity that SKAdNetwork supports, including publisher-app (sub-publisher) granularity.

Note: This test assumed a worst-case scenario where no users share their IDFA. This means we assumed we had no access to deterministic attribution from MMPs.

Next steps

In this case study, we have demonstrated minimal error in campaign valuation and budgeting decisions using our probabilistic attribution. We have shown that we have built a working, proven solution that will be highly effective in the iOS 14 paradigm.

Over the next few weeks, we expect to improve on our 8% error rate between the probabilistic and deterministic budget outputs by:

  1. Incorporating deterministic attribution data into our probabilistic attribution model. It’s unclear how many users will share their IDFA, and for the purposes of this case study, we didn’t include any deterministic attribution. Including this data will only reduce the error rate.
  2. Optimizing ad network campaign setup. We expect more granular campaign setup, with less overlap in campaign targeting, to further reduce the error rate.