Previously, in An Algorithmic Approach to User Acquisition Automation, we explored the mathematical foundation of programmatic campaign management on Facebook and Google. In some cases, automatic campaign management is not possible because a channel lacks a management API or granular performance reporting. In such cases, the same mathematics and approach can be used to steer channel-level budgets manually, based on higher-granularity ROAS (e.g., by network or geo). This case study describes results from 6 months of channel-level budget allocation encompassing tens of millions of dollars of user acquisition spend. It excludes Facebook and Google campaigns and focuses only on ad dollars managed by human operators with weekly guidance from our Intelligent Budget (IB) product.

Intelligent Budget

We beta-released AlgoLift IB in early 2019. A high-level system flowchart is given in Figure 1. The goal of the product is to deliver an optimal cross-channel allocation of user acquisition budget based on top-line monthly budgets set by the client. Leveraging our LTV forecasting models and market response models, we allocate the monthly budget across all channels according to the client's long-term pROAS (predicted return on ad spend) goal (e.g., D180 payback or maximum D365 ROAS). Inputs from clients are limited to a monthly budget, a portfolio performance target, and a list of channels eligible to receive the month's budget. Channel-level budgets for Facebook and Google were set by IB, but those campaigns were managed programmatically by our Intelligent Automation (IA) product.
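To make the allocation idea concrete, here is a minimal sketch of one way budget can be spread across channels with diminishing returns. The channel names, the exponential response curve, and the greedy increment scheme are illustrative assumptions for this post, not AlgoLift's actual LTV or market response models:

```python
import math

def predicted_revenue(spend, saturation):
    """Toy concave market-response curve: revenue shows diminishing
    returns as spend approaches the channel's saturation level."""
    return saturation * (1 - math.exp(-spend / saturation))

def allocate_budget(total_budget, saturations, step=1000.0):
    """Greedily assign budget in `step` increments to whichever channel
    currently offers the highest marginal predicted revenue (a simple
    proxy for maximizing portfolio pROAS)."""
    alloc = {ch: 0.0 for ch in saturations}
    remaining = total_budget
    while remaining >= step:
        best = max(
            saturations,
            key=lambda ch: predicted_revenue(alloc[ch] + step, saturations[ch])
                           - predicted_revenue(alloc[ch], saturations[ch]),
        )
        alloc[best] += step
        remaining -= step
    return alloc

# Hypothetical example: two channels with different saturation points.
budgets = allocate_budget(100_000, {"channel_a": 80_000, "channel_b": 40_000})
```

With concave response curves like these, the greedy loop converges toward equalized marginal returns across channels, so the higher-capacity channel absorbs proportionally more budget.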

Figure 1: Flow chart identifying the roles and responsibilities of the client, third-party data vendors, and AlgoLift. AlgoLift is API-connected and can programmatically manage Facebook, Google, Apple Search Ads, Unity Ads, Vungle, and AppLovin. The IA and IB products can work in parallel or independently to optimally distribute channel- and campaign-level spend among all possible spend sources.

Case Study

AlgoLift engaged with our clients to leverage Intelligent Budget in mid-2019. In this case study, we take one example in which IB performed cross-channel optimization for 8 mobile apps spending across 48 channels. Platform/app/channel-level budgets were allocated weekly for the remaining (non-Facebook/Google) channels and delivered to UA managers, who manually executed spend within the various channels. Discrepancies between spend and budget due to manual errors or update delays were automatically accounted for by (i) increasing/reducing spend on Facebook and Google and (ii) rebalancing the misallocation into the following week. Monthly budgets were delivered at the start of the month by the client at the app-platform level. AlgoLift revised monthly budget allocations on average once a week in response to (i) clients' internal business factors or (ii) shifts in market dynamics / fill rate.
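The weekly rebalancing step described above can be sketched as follows. The function and channel names are hypothetical; the only logic shown is the carry-forward of under- or overspend into the next week's budgets:

```python
def rebalance_weekly(budgeted, executed):
    """Return next week's per-channel budget adjustments: underspend is
    added back to the channel, overspend is deducted from it."""
    adjustments = {}
    for channel, planned in budgeted.items():
        spent = executed.get(channel, 0.0)
        adjustments[channel] = planned - spent  # positive = underspend
    return adjustments

# Hypothetical week: one channel underspent, one overspent.
adj = rebalance_weekly(
    {"network_x": 10_000, "geo_y": 5_000},
    {"network_x": 9_200, "geo_y": 5_400},
)
# network_x is credited +800 next week; geo_y is debited -400
```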

Figure 2: Day 90 ROAS, CPI, and Organic LTV over time. Monthly averages are shown before (negative install months) and after (positive install months) IB started. Month 8 shows the onset of the 2020 COVID-19 pandemic, which resulted in unusually high Day 90 ROAS. Data from this period was removed so as not to bias our results.

Figure 2 shows monthly Day 90 ROAS, CPI, and Organic D90 ARPI for the period before Intelligent Budget (negative months) and after (positive months) for 8 apps over tens of millions of dollars of user acquisition spend. Note that this is a retrospective analysis; allocations were made based on pROAS. Spend from retargeting campaigns or unattributable sources was removed. Pre- and post-optimization averages were taken 6 months before and after IB was implemented, as indicated by the dotted line. Data before and after the experiment range are included for transparency. The ranges were chosen so that they:

  1. had the same number of sample months in both periods
  2. included the winter holidays
  3. excluded the effects of the 2020 COVID-19 pandemic which heavily skewed results in favor of IB

Table 1: Tabular summary of the pre- and post-optimization periods depicted in Figure 2.

Pre- and post-optimization statistics are summarized in Table 1. We tracked CPI as an indicator of overall market price. CPI can be affected by total spend, market competitiveness, and app age. Table 1 reflects the expected trend of CPI increasing over time: CPI was 17.4% higher during the IB period.

We also tracked Organic ARPI, which indicates the non-UA-driven value of users. Paid ARPI can be greatly affected by targeting, campaign type, and market dynamics; Organic ARPI should be primarily affected by changes in the product. Table 1 indicates that Organic ARPI dropped by 9.6%, reflecting the expected trend that mobile games typically slow down in monetization over time.


Average Day 90 ROAS increased from 26.93% to 29.38% in the post-optimization period. This corresponds to a 9.1% relative increase in return-on-ad-spend efficiency. Notably, this occurred during a period when market CPI became 17.4% more competitive and product monetization declined by 9.6%.
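For clarity, the 9.1% figure is the relative change in Day 90 ROAS (the absolute change is 2.45 percentage points), using the values from Table 1:

```python
# Day 90 ROAS before and after IB, in percent, as reported in Table 1.
roas_pre, roas_post = 26.93, 29.38

# Relative lift in ROAS efficiency, not a percentage-point difference.
relative_lift = (roas_post - roas_pre) / roas_pre
print(f"{relative_lift:.1%}")  # 9.1%
```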

Future work

We achieved the above results with an approach that optimized ROAS by manipulating channel allocations, under the assumption that each channel had an independent market response. With the launch of our Organic Lift product, we can now include estimated organic revenue from our Organic Lift media mix models and optimize the sum of estimated paid and organic pROAS. We are interested in seeing what incremental lift can be gained by this approach versus considering strictly attributed ROAS. Furthermore, we aim to leverage more sophisticated media mix optimization, with components beyond organic lift. We see potential for models that account for covariation between channel responses, including "cannibalization" and other secondary effects that marketing on one channel can have on other channels' effectiveness.