
How to Monitor and Measure Your Adaptive Learning Program

Adaptive learning is a core capability of a decisioning platform that automates the training, monitoring, and fine-tuning of classification models to predict a specific customer behavior. It is used to drive the prioritization and arbitration of the next-best-action (NBA). A typical prediction is the likelihood that a customer will accept an offer that customers with similar profiles have accepted or declined in previous interactions.

Adaptive learning is ideal when the volume of models to be created and maintained makes traditional offline modeling techniques impractical, so a trade-off between accuracy and operationalization is necessary. A common misconception about this capability, however, is that minimal knowledge of predictive analytics or data science is required to run a successful adaptive learning program. In reality, adaptive learning requires domain expertise to set up, run, and maintain. A world-class adaptive learning program requires a monitoring and measurement solution that answers four fundamental questions:

  • Are the adaptive learning models activated?
  • How are the models performing?
  • What predictors and features are driving the responses?
  • What insights and learnings is the application generating?


The adaptive learning process has two phases: exploration and exploitation. During the exploration phase, the models ‘learn’ in real time from all interactions, but they are not activated until preset levels of confidence and accuracy are met. Once the thresholds are met, the exploitation phase begins and the models score the inbound NBAs. In a perfect scenario, adaptive learning is activated for all NBAs, but not all models reach a trained state. An offline investigation of the cause should follow: is the NBA content rich enough, are the NBAs getting enough exposure, are the business rules skewing the results, and so on.
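The exploration-to-exploitation gate described above can be sketched as a simple threshold check. This is a minimal illustration, not any platform's actual API: the `AdaptiveModel` fields and the threshold values are assumptions chosen for the example.

```python
# Minimal sketch: gating a model's move from exploration to exploitation.
# The AdaptiveModel fields and threshold values are illustrative
# assumptions, not part of any specific decisioning platform's API.
from dataclasses import dataclass

@dataclass
class AdaptiveModel:
    name: str
    responses: int   # total responses observed so far
    accuracy: float  # e.g. AUC-style score, where 0.5 is random

def is_activated(model: AdaptiveModel,
                 min_responses: int = 200,
                 min_accuracy: float = 0.55) -> bool:
    """A model leaves exploration only once it has accumulated enough
    evidence and beats a minimum accuracy threshold."""
    return model.responses >= min_responses and model.accuracy >= min_accuracy

# A model short of evidence stays in exploration; a mature one exploits.
young = AdaptiveModel("offer_upgrade", responses=80, accuracy=0.61)
mature = AdaptiveModel("offer_retention", responses=1500, accuracy=0.68)
print(is_activated(young))   # False: not enough responses yet
print(is_activated(mature))  # True: ready to score inbound NBAs
```

Models that never satisfy `is_activated` are exactly the ones the offline investigation above should examine.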


It is essential to track model performance in both the exploration and exploitation phases. Key metrics are accuracy, success rate, volume, and frequency. Because the models are trained continuously, a swing in strategy, market trends, or channel operations can change model performance overnight. The models can be evaluated using a volume/response-rate matrix to determine next steps. For example, high volume with a low response rate means the NBA has low customer engagement. Conversely, high volume with a high response rate indicates a top-performing model with enough evidence to be statistically significant.

[Figure: model performance chart]
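The volume/response-rate matrix can be expressed as a small triage function. The cut-off values and the recommended next steps below are illustrative assumptions; real thresholds depend on channel volumes and a proper statistical-significance check.

```python
# Minimal sketch of a volume/response-rate matrix for triaging models.
# The cut-offs (10,000 interactions, 5% response rate) are illustrative
# assumptions, not recommended production values.
def classify_model(volume: int, response_rate: float,
                   volume_cut: int = 10_000,
                   rate_cut: float = 0.05) -> str:
    high_volume = volume >= volume_cut
    high_rate = response_rate >= rate_cut
    if high_volume and high_rate:
        return "top performer: enough evidence to be statistically significant"
    if high_volume and not high_rate:
        return "low engagement: review the NBA's content and targeting"
    if not high_volume and high_rate:
        return "promising: increase exposure to gather more evidence"
    return "under-exposed: check eligibility rules and channel reach"

print(classify_model(volume=50_000, response_rate=0.12))  # top performer
print(classify_model(volume=80_000, response_rate=0.01))  # low engagement
```

Each quadrant maps to a different follow-up action, which keeps the review process consistent across a large portfolio of models.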

Fine Tuning

During the initial setup of the adaptive learning capability, a few assumptions are made about the predictors and features selected. During the online training phase, predictors that have no impact on the model’s predictive power are automatically pruned.  Best practices in an adaptive learning implementation require a plan to rebuild the models by eliminating ineffective predictors and adding new ones until a stable state is reached after a few cycles.
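The rebuild cycle above amounts to scoring each predictor and dropping the ones that add no predictive power. The sketch below assumes a per-predictor lift score (for instance, a univariate AUC contribution); the predictor names, scores, and the `min_lift` floor are all hypothetical.

```python
# Hedged sketch of one rebuild cycle that drops ineffective predictors.
# "Lift" here is an assumed per-predictor score (e.g. univariate AUC,
# where 0.5 is random); names and thresholds are illustrative only.
def prune_predictors(predictor_lift: dict[str, float],
                     min_lift: float = 0.52) -> list[str]:
    """Keep only predictors whose individual lift beats a floor;
    the rest become removal candidates for the next model rebuild."""
    return sorted(name for name, lift in predictor_lift.items()
                  if lift >= min_lift)

lifts = {"age": 0.61, "last_channel": 0.58,
         "favourite_colour": 0.50, "tenure_months": 0.64}
print(prune_predictors(lifts))  # ['age', 'last_channel', 'tenure_months']
```

Repeating this step over a few cycles, while adding candidate predictors, converges on the stable predictor set the text describes.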

Actionable Insights

An adaptive learning measurement framework should also include the capability to generate insights that can be used to optimize the channel, customer, NBA and contact strategies. It is vital to have a process in place to allow a feedback loop between the adaptive learning engine and the business and analytical users.

In summary, adaptive learning is not a ‘deploy-and-forget’ capability; it requires ongoing measurement and maintenance. Done well, its benefit is to shift analytical effort from data wrangling and preparation to true insight activation for a healthy decisioning program.