
Customer decisioning: How model monitoring can keep you on top of the changing data landscape

By Samuil Vutov, 04/09/2025


Understanding how your customers think and behave over time is key to delivering relevant experiences, especially when it comes to predicting their next action. But staying on top of shifting preferences and data patterns isn’t easy. That’s where model monitoring comes in.

In this blog, we explore how model monitoring can help data science and decisioning teams stay one step ahead: keeping models accurate, adaptive, and aligned with real-world change. If you’re just getting started with data science in customer decisioning, you might want to check out our previous post for a quick primer: How Data Science Applies to Customer Decisioning.

Now, let’s dig into why ongoing model monitoring matters—and what good looks like in practice.

What is model monitoring?

Model monitoring is a set of processes that consistently check the performance of machine learning models. This can cover the performance of individual predictors, monitoring for data and concept drift, and more. The key thing to note is the “consistent” part of the definition. This is not a one-off measure where we tick a box and call it a day. It’s an ongoing process that requires checks in place to make sure performance targets are met.

The key is to be predictive rather than reactive, i.e. to notice whether the model is still doing its job and nothing has changed too much since it was created, rather than stepping in once the damage is done. It’s ironic, actually - you try to predict whether your predictive model is still performing as expected. But here’s a question – if the work was done properly in the first place, shouldn’t the model be good enough as it is? Why spend time monitoring later, when you could focus more effort on building a robust model from the get-go? This leads us to our next section:

Why monitor models?

To put it simply, because model performance is not set in stone and models do degrade over time. It can be easy to forget, but a model is just like any other resource in a company: it requires maintenance and regular checks to make sure it performs up to standard. This is why the concept of a model life cycle exists, to highlight that this is an ongoing process and, in some cases, an iterative one. In those cases, when a model’s performance drops and it needs to be retrained, we go back to the beginning: collecting data, training and then deploying.

In more detail, there are a few main reasons why we want to monitor models:

  • Data drift – Observed when the statistical properties and distribution of the data change over time. Consider a simple case where we’re trying to predict whether a customer will buy a product based solely on their income level. Our model works well, but over time the distribution of income in our dataset changes (this can happen suddenly or gradually). The underlying relationship between income and purchase intent might still hold, yet the model’s ability to predict correctly will suffer, because it is predicting based on old data. A minimal check for this kind of drift is sketched after this list.
  • Concept drift – A change in the established relationship between the input data and the target variable. Take the example above, where income is used to determine purchase intent. Over time, customer preferences change, and it’s no longer just someone’s income but also changes in their lifestyle that inform a purchase. In such a case, the model’s predictions will become inaccurate and the model will need attention.
  • Predictor performance dropping – Sometimes, the issues are granular and not as obvious. It can be individual model predictors dropping in importance. This can be for a variety of reasons:
    • The predictor is no longer important in predicting behaviour. Such a case can be dealt with by retraining the model. This change can appear gradually over time or quickly if there was a more abrupt change in behaviour.
    • The data for the predictor is not accurate or is missing (e.g. it’s corrupted, there were issues in the underlying data process, or the predictor is no longer available from a third-party supplier). In that case, performance will drop significantly and, seemingly, out of nowhere.
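
To make the data drift idea concrete, here is a minimal sketch of a simple drift check, using the income example above. It assumes two pandas Series of the same predictor, one from the training snapshot and one from recent production data; the file and column names are purely illustrative.

import pandas as pd
from scipy.stats import ks_2samp

def has_drifted(train_values: pd.Series, recent_values: pd.Series, alpha: float = 0.05) -> bool:
    # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
    # predictor's distribution has shifted since the model was trained.
    statistic, p_value = ks_2samp(train_values.dropna(), recent_values.dropna())
    return p_value < alpha

# Hypothetical extracts of the training data and the last month of scoring data.
train = pd.read_csv("training_snapshot.csv")
recent = pd.read_csv("recent_scoring_data.csv")
if has_drifted(train["income"], recent["income"]):
    print("Income distribution has drifted - review the model before relying on its predictions.")

This is only one possible test; in practice a check like this would be run on every key predictor and on a schedule, rather than once.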

So “why monitor models?” can be answered simply: because prediction accuracy is not static, and changes in the data can have an impact on it. With that in mind, what can be done to establish model monitoring?

What does effective model monitoring look like?

Any effective monitoring system should measure model performance over the model’s life cycle and show whether accuracy is dipping or being maintained. On a granular level, the top predictors need to be identified and observed to check whether they still maintain their importance over time.
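
As a rough illustration of what that could look like in code, here is a hedged sketch that tracks a binary model’s AUC per calendar month from a scoring log. It assumes a pandas DataFrame with a scoring date, the model’s predicted probability and the observed outcome; all column and file names are illustrative.

import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical log of past scoring events with their eventual outcomes.
log = pd.read_csv("scoring_log.csv", parse_dates=["scored_at"])

def monthly_auc(month: pd.DataFrame) -> float:
    # AUC needs both classes present; return NaN for months where it isn't defined.
    if month["outcome"].nunique() < 2:
        return float("nan")
    return roc_auc_score(month["outcome"], month["predicted_probability"])

# AUC per calendar month: a sustained downward trend is the cue to investigate.
auc_over_time = log.groupby(log["scored_at"].dt.to_period("M")).apply(monthly_auc)
print(auc_over_time)

The same grouping can be applied at the level of individual predictors (for example, their importance or their relationship with the outcome) to catch the more granular drops described above.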

Effective monitoring is, at its core, rooted in an understanding of the data and in knowing when performance changes are down to natural drift versus data quality. This is key: whatever monitoring approach is chosen, the person who observes the results has to know what action to take. Fully automated monitoring is possible, but only in specialised cases, and it rarely scales to all models within a decisioning project.

An understandable question is: if we have model issues, why don’t we just retrain? While retraining is useful and can help address model drift, sometimes it isn’t necessary, and at other times it may be costly. If the change in performance is due to data quality, retraining won’t help when the data is corrupted or the way it is collected has changed; a data transformation is then the more feasible fix. Retraining is also computationally intensive, i.e. it takes time and resources, and doing it consistently, especially with large datasets, can be costly. This is why monitoring is essential.

Solutions to the above issues can be implemented in a number of ways, ranging from built from scratch to out of the box:

  • Created from scratch – This approach is the most customisable, but also the most time consuming. It’s implemented manually, which means it can address specific business requirements, but it requires time, coding expertise and a certain amount of trial and error before it becomes useful.
  • Out-of-the-box – Limited in terms of customisable elements, but the most time efficient. The majority of these measures are geared towards models being retrained consistently. Some initial set-up is necessary, after which the process runs automatically and only requires a data scientist when changes or updates are needed. In our experience, best-in-class platforms offer a spectrum of options to make sure all key aspects of monitoring are handled.

How to do monitoring out-of-the-box?

Having a way to monitor a model, or a hub that brings together all the points listed above, encourages better data practices and raises awareness of when model changes are required.

Out-of-the-box platforms also offer a few pre-built solutions:

  • Model governance with a UI – This is about best practices and rules to ensure there is a commitment to produce the best outcome. A UI lets you view model-specific attributes without having to run code, which is helpful when reviewing individual predictors and observing which ones are grouped to handle correlated predictors. You can also set a threshold for predictor importance, below which a predictor won’t be used in the prediction, allowing predictors to be enabled or disabled automatically without having to check them manually.
  • Performance visualisation – Model performance can be visualised and compared between models. This helps put a model’s performance into perspective against key metrics – for example, model score and number of observations. Visualisation is key when presenting results and quickly getting a sense of performance over time.
  • Dev and Prod model comparison – In a customer decisioning setting, a new model may be developed by a data scientist to replace one that’s already in production. Monitoring the performance of both on production data can be done seamlessly, and the better-performing model can then be promoted to production.
  • Alert system – An alert system can be set up with triggers that send a message when a certain event takes place (e.g. model performance drops, a prediction couldn’t be generated), keeping you updated without having to frequently check in on the model. Everyone’s time is valuable, and having to be constantly involved in the monitoring process is inefficient. A simple sketch of such a comparison-and-alert check follows this list.
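
As a rough sketch of the last two points, assuming both the production (champion) model and a newly developed (challenger) model have scored the same recent records, a simple comparison plus alert could look like this; the send_alert helper and the threshold are illustrative placeholders, not any specific platform’s API.

import logging
from sklearn.metrics import roc_auc_score

logging.basicConfig(level=logging.INFO)

def send_alert(message: str) -> None:
    # Placeholder: in a real set-up this might post to email, a chat channel or a ticketing system.
    logging.warning(message)

def compare_and_alert(outcomes, champion_scores, challenger_scores, min_auc: float = 0.65):
    champion_auc = roc_auc_score(outcomes, champion_scores)
    challenger_auc = roc_auc_score(outcomes, challenger_scores)
    if champion_auc < min_auc:
        send_alert(f"Production model AUC has dropped to {champion_auc:.3f}, below the {min_auc} threshold.")
    if challenger_auc > champion_auc:
        send_alert(f"New model outperforms production ({challenger_auc:.3f} vs {champion_auc:.3f}) - consider promoting it.")
    return champion_auc, challenger_auc

Run on a schedule, a check like this covers the routine cases; the alerts still land with a person who decides whether to retrain, fix the data or promote the new model.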

Setting up takes a bit of time but it’s worth it, since automation saves time that would otherwise be taken up with manual checks. 

Embracing the power of predictive models in a customer decisioning environment allows a company to automate NBA (next best action) decisions and respond quickly to customer needs. However, like any other resource in a company, unless you maintain oversight of its upkeep, you lose the advantage it delivers. It’s therefore vital to monitor models over their entire life cycle and respond appropriately if they show signs of their performance dropping off. Performance can be impacted in different ways, from small changes that affect individual predictors to large changes that affect the underlying relationship between the inputs and the target. An experienced data scientist must know how to address these changes, and having a manual solution, or an out-of-the-box platform, facilitates the monitoring experience, ensuring the customer decisioning platform continues to deliver maximum benefit to both customers and the business.

Why choose Merkle?

At Merkle, we understand that decisioning and data science discussions can be difficult. Combining the two adds further complexity, but we’re here to provide support and guidance along the way. Our customer decisioning practice has experience delivering precise, personalised interactions focused on establishing customer-centric excellence. Our data science team is equipped with knowledge of modelling techniques, advanced analytics and AI solutions that aim to deliver timely, actionable insights.

With Merkle's help, businesses can deliver exceptional customer engagement in this competitive landscape, fostering enduring relationships. If you’re considering a complete transformation of your customer experience, get in touch with Merkle today to drive real, valuable results.

Want to read more? If you missed our last Decisioning Congress, you can access all the valuable content on-demand from across the two days. Explore the five stages of the Decisioning Life Cycle and gain actionable insights to become a world-class decisioning business.
