In terms of cash flow forecasting for ad-monetized apps (games) - we need to understand the effects on our net cash balance, especially when scaling up UA spend.

Is there some useful tool / resource / spreadsheet template for how to do this? How do you do it in your company?

1 Answer


A lot depends on your situation and what resources you have on hand, but I'm going to guess you're asking this because, despite having a nice bit of success, you're unable to get one/some/all of the dedicated resources that typically handle this. (Which is, of course, a very common situation in our industry.) A template/spreadsheet will indeed be part of the solution, but only a piece.

First, you need a clear understanding of risk tolerance and growth expectations. If your studio is built for high growth to satisfy the return requirements of external investors, you should approach this differently than a bootstrapped, 100% employee-owned operation. There are a lot of details in how to manage payback requirements (i.e., by when you need to see X% return on ad spend (ROAS)), but for the sake of some simplicity let's assume that's well and settled.
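To make the payback idea concrete, here's a toy sketch (all numbers are made up for illustration, not benchmarks): given a cost per install (CPI) and a modeled curve of daily revenue per install, the payback day is the first day cumulative revenue covers the spend.

```python
# Toy payback-day calculation. CPI and revenue figures below are
# illustrative placeholders, not real benchmarks.
def payback_day(cpi, daily_rev_per_install):
    """Return the first day (1-indexed) on which cumulative revenue
    per install covers CPI, or None if it never does in the horizon."""
    cumulative = 0.0
    for day, rev in enumerate(daily_rev_per_install, start=1):
        cumulative += rev
        if cumulative >= cpi:
            return day
    return None

# Example: $2.00 CPI, daily revenue tapering off after install.
daily_rev = [0.50, 0.35, 0.30, 0.25, 0.20, 0.18, 0.15, 0.12, 0.10, 0.08]
print(payback_day(2.00, daily_rev))  # -> 8
```

In a spreadsheet this is just a cumulative-sum column next to the revenue curve, with a lookup for the first row that crosses CPI.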

UA at scale is virtually a requirement for any game business these days, so I'm also going to assume your UA spend is a double-digit percentage of monthly revenue, which makes managing it carefully mission critical. For this reason, I'm going to address more than the tool itself and review the whole process for how to approach this.

So the first step is having, ideally, a single directly responsible individual who manages this. Due to the huge positive or negative impact this can have on your business, this is more of a process question than a tool question IMO. Ideally it's someone numerically inclined who understands ad monetization, UA, and product metrics at least at a fundamental level. Ideally, they also have existing regular interactions with ad monetization, UA, and the game team so this additional work can be layered on top.

This person will need to look back at all your historical data and get a clear idea of how ad monetization, UA, and product metrics behave at a rather detailed level. A lot can get lost in averages here. A simplified example: ad monetization per day retained drops steadily from day 1 to 14, but then stabilizes from day 14 in a way that looks quite strong. This could be great news! Or it could just be a sign that you have, say, an English-only word game where (some) Tier 1 retention is incredibly good, so after day 14, once non-native speakers have churned, your user makeup becomes very heavy with Tier 1 users. If you're not well aware of this, you could assume a global (or even all-Tier-1) revenue-per-days-played curve that is incorrect.
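A tiny numeric sketch of that averaging trap (the revenue figures and tier shares below are invented for illustration): even when every segment's revenue per user is declining, a blended curve can appear to "stabilize" or even rise, simply because the mix of remaining users shifts toward the stronger segment.

```python
# Hypothetical revenue per retained user by day, for two segments.
# Both segments decline over time.
tier1_rev = {1: 0.40, 7: 0.35, 14: 0.33}
rest_rev  = {1: 0.10, 7: 0.05, 14: 0.02}

# Share of *remaining* users who are Tier 1 grows as non-Tier-1
# players churn out faster (invented numbers).
tier1_share = {1: 0.30, 7: 0.55, 14: 0.85}

# Blended (mix-weighted) revenue per user by day.
blended = {
    d: tier1_share[d] * tier1_rev[d] + (1 - tier1_share[d]) * rest_rev[d]
    for d in tier1_rev
}

# Both segment curves fall from day 1 to day 14, yet the blended
# curve rises, purely because the user mix shifted toward Tier 1.
print(blended[1], blended[14])
```

This is why the curves should be built and monitored per segment (tier/territory/channel), not on the blended average.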

Most likely, in any case, the person is going to have to ramp up their knowledge in some area. For example, as someone who spent most of their career in product, it rather 'blew my mind' to learn how 'new' ad inventory is more valuable than 'old.' If I hadn't been aware of this (which I certainly wasn't before reading that post) and I was managing UA spend based on the revenue curve of the first users, I would have driven a lot of overspend.

The same fundamental knowledge needs to be built in ad monetization as well. These days, the coronavirus situation unfortunately provides a stark example of this: while installs/engagement seem to be up, ad spending is down, resulting in lower ad revenue per user. Brand advertisers and many industries have pulled back or stopped spending completely. Plus, even if that spend were 'stable,' this person should be well aware that as your user base grows, the quality of users decays. If models expect the 10 millionth user to behave similarly to the 100,000th user, they are going to be off.

Plus, they need to be working closely with the product team so they can be aware of product shifts for better or worse. Did ad revenue jump because of a great live event that can potentially be run every weekend? Or was the event a net negative because it ended up upsetting the economy with a large hangover? 

Of course, this and other risks can be managed by carefully watching cohorts and comparing actual return by days played against historical/modeled return. That daily work should still be done, but since finding the perfect person for it can often take a long, long time in our industry, I would recommend whoever takes this on also spend some time learning the fundamentals. At least in my experience, doing this kind of 'other people's' work is not uncommon in our industry, and oftentimes it's quite interesting and educational if the organization provides proper support and expectations.
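That daily cohort check can be sketched as follows (the tolerance, cohort names, and numbers here are hypothetical): compare each cohort's actual cumulative revenue per install against the modeled curve at its latest observed day, and flag any cohort trailing the model by more than a chosen tolerance.

```python
# Sketch of a daily cohort-vs-model check. Tolerance and data are
# hypothetical; tune to your own risk appetite.
def flag_underperformers(model_curve, cohorts, tolerance=0.15):
    """model_curve: modeled cumulative revenue/install by day index.
    cohorts: {name: list of actual cumulative revenue/install by day}.
    Returns names of cohorts trailing the model by more than
    `tolerance` (fractional) at their latest observed day."""
    flagged = []
    for name, actuals in cohorts.items():
        day = len(actuals) - 1              # latest day with data
        expected = model_curve[day]
        if actuals[day] < expected * (1 - tolerance):
            flagged.append(name)
    return flagged

model = [0.50, 0.85, 1.15, 1.40, 1.60]      # cumulative rev/install
cohorts = {
    "us_android_week12": [0.48, 0.80, 1.10],  # within tolerance
    "us_android_week13": [0.30, 0.55, 0.70],  # well under model
}
print(flag_underperformers(model, cohorts))  # -> ['us_android_week13']
```

Flagged cohorts are the trigger to pause or rein in spend on that channel/geo until the deviation is understood.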

To (finally, I know :-) ) answer your specific question: you should (hopefully) already have a lot of the pieces of this tool on hand. Your product team should already be intimately aware of the revenue and retention curves (revenue per user, per day retained, by territory/channel). Use that historical data to build a model, then run a variety of experiments to validate and better inform it. Once you have something solid it can be used as a tool, but it should very much be a 'live' tool that is regularly checked and updated by the person individually responsible for it.

The structure of the tool is relatively simple. There are modeled revenue curves, based on historical data, for each platform/channel/territory/etc. where you are buying users, and these are used to set spend. Then, as users are bought and play through the game, their performance is watched to ensure it matches the model. New campaigns with no historical data (or no recent data) should be started carefully, so you're not making big spends without relevant historical data informing a model to provide guidance.

For the sake of this person, I recommend they have a lot of support and reviews with key stakeholders to start, so they know that if something goes wrong (which, depending on your situation, will probably happen), leadership understands and expects this to some degree and is 'right there with them' to help find a solution. At least in the times when I was suddenly given responsibility I wasn't fully qualified for, this made a huge difference in ensuring the work was a rather fun, super educational challenge instead of an incredibly stressful time.
