So your forecasts are higher than the actuals…but is over-forecasting the real problem?
This article explains why it is dangerous to rely on high-level measures to diagnose the nature of bias problems.
I have never yet come across a forecast process where forecasts are consistently too low; the usual problem is over-forecasting. In these circumstances it is natural that managers start scouring their portfolio for the 'culprits': those forecasts that are consistently too high.
But are they looking in the wrong place? My experience suggests that sometimes they might be.
When we implement ForecastQT, we are typically asked to track forecasts at more than one lag: say, one month out, because it drives the production process, and three months out, because buyers use this longer-lag forecast to plan the supply of materials. Where statistical forecasts are judgementally adjusted, it is common to find over-forecasting in the short-term forecast but not at the longer lag. The apparent source of this bias is usually not hard to find: additional volume is added to the short-term forecast based on market intelligence from the sales organisation, and the usual blame game kicks off, fuelled by the lazy stereotype of over-optimistic sales behaviour.
But this diagnosis is, at best, an oversimplification. At worst, it can drive behaviour that makes things worse. Let me explain.
When we look at a portfolio of products, we have to be mindful that the bias we measure at an aggregate level is the result of over-forecasting of some items being offset by under-forecasting of others. Our focus should be on improving the quality of forecasting at these lower levels: your customer, a shopper, wants to buy pears or potatoes, not 'fruit and vegetables'.
In ForecastQT we track the bias of these low-level items as well as the aggregate bias, and we often find something that looks like this:
                                                      3-month lag   1-month lag
Aggregate bias (net of over- and under-forecasts)          0%            5%
Contribution made by over-forecast items                  15%           15%
Contribution made by under-forecast items                 15%           10%
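To make the arithmetic concrete, here is a minimal sketch of this decomposition in Python. It is not ForecastQT's actual calculation: it assumes bias is measured as signed forecast error as a percentage of total actual volume, and the item-level numbers are invented purely to reproduce the 1-month-lag column above.

```python
def bias_decomposition(forecasts, actuals):
    """Return (aggregate, over, under) bias as fractions of total actual volume.

    Over-forecast items contribute positive errors, under-forecast items
    negative ones; the aggregate bias is the net of the two, so offsetting
    errors can hide large item-level problems.
    """
    total_actual = sum(actuals)
    errors = [f - a for f, a in zip(forecasts, actuals)]
    over = sum(e for e in errors if e > 0) / total_actual
    under = -sum(e for e in errors if e < 0) / total_actual
    return over - under, over, under

# Invented item-level figures: one item over-forecast by 30 units,
# one under-forecast by 20 units, against 200 units of total actuals.
forecasts = [130, 80]
actuals = [100, 100]

agg, over, under = bias_decomposition(forecasts, actuals)
print(f"Aggregate bias:              {agg:.0%}")    # 5%
print(f"Over-forecast contribution:  {over:.0%}")   # 15%
print(f"Under-forecast contribution: {under:.0%}")  # 10%
```

Run at both lags, the same calculation produces the two columns of the table: a near-zero aggregate number can sit on top of substantial, mutually cancelling item-level bias.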
Looked at in this way, we get a different picture. Now it is clear that the judgemental adjustments made between month 3 and month 1 haven't created a problem of over-forecasting; they have reduced the problem of under-forecasting. In fact, the total bias at the low level has fallen from 30% (15% + 15%) to 25% (15% + 10%). The real problem is not that low-level forecasts have been increased when they should not have been; it is that some forecasts have not been reduced when they should have been. That is always the harder call, because if the forecaster gets it wrong, customer service may suffer and another kind of blame game kicks off.
What this highlights is the importance of measuring what matters (the quality of low-level forecasts) and the damage caused by indulging in blame rather than looking for causes.