
6 Tips for improving video gaming LiveOps campaigns by automating experimentation

Written by Nathaniel Rounds
Published 22 Dec 2022

Like many sectors of the tech industry, video gaming has blurred the line between product and service. As the games-as-a-service model continues to thrive, video game publishers are more reliant than ever on their Live Operations function to drive recurring revenue. Live Operations, or LiveOps, is the development and delivery of in-game content without a new release or new code. LiveOps plays a role in video gaming similar to that of engagement or retention marketing in other industries, and it has long been the domain of analysts and data scientists. But new techniques in machine learning (ML) are changing the game.

LiveOps teams have always been experimenters, constantly trying new bundles, new offers, new promotions, and new content. Machine learning has helped them with sophisticated predictive models. For example, instead of building segments by hand, data-savvy LiveOps teams now use unsupervised learning, a type of ML, to construct segments from patterns in player data.
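For instance, a minimal segmentation sketch with k-means clustering might look like the following. The features, player values, and cluster count are illustrative assumptions, not details of OfferFit’s pipeline:

```python
# Sketch: letting unsupervised learning define segments from player data.
# Feature names, values, and cluster count are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-player features: [avg_session_minutes, sessions_per_week, lifetime_spend]
players = np.array([
    [12, 3, 0.00],
    [45, 10, 29.99],
    [8, 1, 0.00],
    [60, 14, 120.50],
    [30, 7, 4.99],
])

# Standardize so no single feature dominates the distance metric
features = StandardScaler().fit_transform(players)

# Patterns in the data, not hand-built rules, determine the segments
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(segments)  # e.g., [0 1 0 1 0]: casual vs. highly engaged players
```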

To move beyond segments and find the best offer for each individual player, LiveOps teams can automate experimentation. OfferFit’s automated experimentation platform plugs into a LiveOps tech stack as an experimentation and decisioning layer.


Let’s look at some tips for successfully implementing automated experimentation in LiveOps.

Put together a cross-functional team

Any new data initiative requires a cross-functional team to be successful. To succeed with automating experimentation, LiveOps teams should consider the stakeholders they need in areas such as LiveOps, data science, and IT. A project is also more likely to be successful with executive sponsorship. 

Choose a success metric clearly tied to revenue

Reinforcement learning models, such as the ones that power OfferFit’s automated experimentation platform, rely on a chosen success metric. The AI “learns” by seeing which actions achieve success with which players. LiveOps teams can choose any success metric they like – engagement, frequency of play, or indeed anything that can be measured from player data. But a savvy team will choose a success metric clearly tied to revenue (see the sketch after this list), such as:

  • Free-to-paid conversion rate,

  • Average revenue per player per month, or

  • Offer acceptance rate, adjusted for the value of the offer.
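To illustrate the last bullet, here is a minimal sketch of a value-adjusted reward signal that a reinforcement learning model could optimize. The function and its fields are hypothetical, not part of OfferFit’s platform:

```python
# Sketch: offer acceptance adjusted for offer value, as a revenue-tied reward.
# Field names are hypothetical; a real pipeline would pull them from player data.
def offer_reward(accepted: bool, offer_revenue: float, discount_cost: float) -> float:
    """Reward = net revenue of an accepted offer; 0 if the offer was declined."""
    return max(offer_revenue - discount_cost, 0.0) if accepted else 0.0

# The model learns from rewards tied to revenue, not raw acceptance counts:
print(offer_reward(accepted=True, offer_revenue=9.99, discount_cost=2.00))   # 7.99
print(offer_reward(accepted=True, offer_revenue=0.99, discount_cost=0.50))   # 0.49
print(offer_reward(accepted=False, offer_revenue=9.99, discount_cost=2.00))  # 0.0
```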

The reasons are both practical and political. Of course it’s best to ensure that a new machine learning project is helping the business by driving revenue. But any investment, including an investment of time and energy into automated experimentation, needs commitment from diverse stakeholders to be successful. It is typically easier to get buy-in for a project when the bottom-line impact is clear.

Exploit your rich player data

Video game companies have a structural advantage in working with data compared to companies in most other industries, simply because the data about how customers use the product is so rich. LiveOps teams should take care to capture and use that data. Reinforcement learning is powerful because of its capacity to experiment and learn quickly. But the ability to personalize at the level of individual players depends on the fine details of player behavior being captured in data.
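As an example of what “fine details” can mean in practice, here is a sketch of a granular player event record. The schema and field names are illustrative assumptions:

```python
# Sketch: a fine-grained player event record. The schema is illustrative;
# the point is to capture behavior at the level of individual actions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PlayerEvent:
    player_id: str
    event_type: str      # e.g., "battle_won", "item_purchased", "session_start"
    timestamp: datetime
    properties: dict = field(default_factory=dict)  # battle length, item id, price paid...

event = PlayerEvent(
    player_id="p_12345",
    event_type="battle_won",
    timestamp=datetime.now(timezone.utc),
    properties={"battle_duration_s": 214, "opponent_level": 8, "win_streak": 3},
)
```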

Allow for a variety of offers and trigger points

Given a rich trove of player data, a reinforcement learning model needs an array of options in order to experiment at the level of individual players. Be open to trying a variety of offer types, like exclusive bundles, coupons, premium content, and regional offers. Experiment with different trigger points, based on the in-game situation and on the player’s journey. For example (see the sketch after this list):

  • Milestones – triggers that occur when a player reaches a certain point in the game, a certain number of hours played, or a certain number of battles, and

  • Trophies – triggers that occur when a player unlocks some achievement, such as winning or losing a certain number of battles in a row.
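A minimal sketch of both trigger types as simple predicates over player state; the state fields and thresholds here are hypothetical:

```python
# Sketch: milestone and trophy triggers as predicates over player state.
# State fields and thresholds are illustrative.
def milestone_trigger(state: dict) -> bool:
    """Fires when a player reaches a progression point, e.g., 50 battles played."""
    return state.get("battles_played", 0) >= 50

def trophy_trigger(state: dict) -> bool:
    """Fires when a player unlocks an achievement, e.g., a 5-battle win streak."""
    return state.get("win_streak", 0) >= 5

state = {"battles_played": 52, "win_streak": 5}
fired = [name for name, trigger in [("milestone", milestone_trigger),
                                    ("trophy", trophy_trigger)] if trigger(state)]
print(fired)  # ['milestone', 'trophy']
```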

Vary timing and cadence

In-game trigger points and the specifics of an offer are not the only dimensions along which to experiment. Consider also timing and cadence. Some players might be more open to spending while playing in the evening than on their lunch break, while other players might be the opposite. And after a player has accepted or rejected an offer, how long is it best to wait before sending the next one? The only way to know for sure is to discover the answer empirically, so an automated experimentation platform needs the capacity to vary cadence.
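One way to picture this is to treat send time and cooldown as dimensions the experiment can vary. A minimal sketch, with illustrative values and random exploration standing in for a learned per-player policy:

```python
# Sketch: send time and cooldown as experiment dimensions.
# Candidate values are illustrative; a real platform would learn them per player.
import random
from datetime import datetime, timedelta

SEND_HOURS = [12, 18, 21]   # lunch break, evening, late night
COOLDOWN_DAYS = [1, 3, 7]   # wait after the player's last offer decision

def next_offer_time(last_decision: datetime) -> datetime:
    """Pick a cadence to try; a learned policy would replace random choice."""
    send_date = last_decision + timedelta(days=random.choice(COOLDOWN_DAYS))
    return send_date.replace(hour=random.choice(SEND_HOURS),
                             minute=0, second=0, microsecond=0)

print(next_offer_time(datetime(2022, 12, 22, 9, 30)))
```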

Be sure you can close the loop

Suppose an AI model recommends that a particular player be given an offer when they hit some trigger – a bundle the player can purchase when they win 5 battles in a row, for example. For a reinforcement learning model to measure the effectiveness of this recommendation, the AI needs to know (see the sketch after this list):

  • Whether the player hit the trigger,

  • Whether the offer was made,

  • Whether the player accepted, and

  • Finer data, such as whether the player clicked on the offer but then didn’t complete the purchase.
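Here is a sketch of a feedback record that closes the loop; the fields mirror the list above, and the names are illustrative:

```python
# Sketch: the outcome record a reinforcement learning model needs to learn
# from its own recommendation. Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OfferOutcome:
    player_id: str
    trigger_fired: bool       # did the player win 5 battles in a row?
    offer_sent: bool          # did the offer actually go out?
    offer_clicked: bool       # finer-grained engagement signal
    offer_accepted: bool
    revenue: Optional[float]  # None if no purchase completed

outcome = OfferOutcome(
    player_id="p_12345",
    trigger_fired=True,
    offer_sent=True,
    offer_clicked=True,       # clicked but did not buy: a useful signal in itself
    offer_accepted=False,
    revenue=None,
)
```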

This may seem like common sense, but companies sometimes fall into the trap of either not capturing relevant data in the first place or not making that data readily available to the LiveOps team that needs to use it.

Experimentation unleashed


Ready to make the leap from A/B to AI?