

Why bother with forecasting? From error and ‘accuracy’ to adding value

As far as I know we are not legally required to forecast.

So why do we do it?

My sense is that forecasting practitioners rarely stop to ask themselves this question. This might be because they are so focussed on techniques and processes. In practice, unfortunately, forecasting is often such a heavily politicised process, with blame for ‘failure’ being liberally spread around, that forecasters become defensive and focus on avoiding ‘being wrong’ rather than thinking about how they can maximise their contribution to the business.

This is a pity, because asking the fundamental question ‘how does what I do add value to the business?’ could help forecasters escape the confines of the geek ghetto and the dynamics of the blame game, and reposition the profession as important business partners.

So why do we forecast? Let’s answer this question by considering the alternative.

If we didn’t forecast, what would we do? The obvious answer is that we would react. So, if we had just sold x units we would ask our supplier for x units to replenish our stock, which, in order to provide good customer service, we would maintain at a level sufficient to deal with the volatility of demand until the replenishment order turns up.

Pretty simple stuff, eh? Simple, but not stupid, though, because this is exactly the philosophy that underpins Kanban, and Toyota seem to have done pretty well over the last couple of decades.

So why do we forecast rather than simply replenishing our stock based on prior demand?

The answer is that by forecasting demand we anticipate demand volatility, and so do not have to hold as much buffer stock to provide the same level of customer service. In technical terms, safety stock levels are based on the volatility (as measured by the standard deviation) of forecast error, which is lower than the volatility (standard deviation) of demand…at least, that is the aim.
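To make that concrete, here is a minimal sketch using the textbook safety stock formula (safety stock = z × standard deviation × √lead time). The figures, service factor and lead time below are all made up for illustration:

```python
import numpy as np

# Entirely hypothetical weekly figures for a single SKU: demand history and
# the one-week-ahead forecasts that were made for those same weeks.
demand   = np.array([120.0, 95.0, 140.0, 110.0, 160.0, 105.0, 130.0, 150.0])
forecast = np.array([115.0, 100.0, 130.0, 115.0, 150.0, 110.0, 125.0, 145.0])

z = 1.65                 # service factor for roughly a 95% cycle service level
lead_time_weeks = 2      # assumed replenishment lead time

# Reactive replenishment: buffer against the volatility of demand itself.
sigma_demand = demand.std(ddof=1)
ss_reactive = z * sigma_demand * np.sqrt(lead_time_weeks)

# Forecast-driven replenishment: buffer against the volatility of forecast error.
sigma_error = (demand - forecast).std(ddof=1)
ss_forecast = z * sigma_error * np.sqrt(lead_time_weeks)

print(f"safety stock, reactive:        {ss_reactive:.1f} units")
print(f"safety stock, forecast-driven: {ss_forecast:.1f} units")
```

With these invented numbers the forecast-driven buffer is a fraction of the reactive one, which is exactly the value the forecast is supposed to deliver.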

So the reason we forecast is that it allows the business to hold lower inventory levels…and this is how forecasters add value. And wouldn’t it be great to transform the forecast process from sterile discussions about MAPE levels and inquisitions to discover who has ‘caused’ the errors to one where we could talk about how to add even more value?

So how do we do this?

The key is to compare actual forecast error to the error of a simple naïve forecast. A simple naïve forecast is one where we use the last set of actuals as the forecast for the subsequent period, which, if you think about it, mimics a simple replenishment strategy. So if our actual forecast error is lower than the naïve forecast error, then we have added value. Easy, isn’t it?
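As a minimal sketch of that comparison (all numbers below are hypothetical, and mean absolute error is used as the error measure), the value added is simply the naïve forecast’s error minus the actual forecast’s error:

```python
import numpy as np

# Hypothetical actuals and the forecasts that were actually used, period by period.
actuals  = np.array([120.0, 95.0, 140.0, 110.0, 160.0, 105.0, 130.0, 150.0])
forecast = np.array([115.0, 100.0, 130.0, 115.0, 150.0, 110.0, 125.0, 145.0])

# Naive forecast: last period's actual becomes this period's forecast, so the
# first period has no naive forecast and is dropped from the comparison.
naive = actuals[:-1]

mae_forecast = np.abs(actuals[1:] - forecast[1:]).mean()   # mean absolute error
mae_naive    = np.abs(actuals[1:] - naive).mean()

value_added = mae_naive - mae_forecast   # positive means the forecast beat 'just react'
print(f"MAE of actual forecast: {mae_forecast:.1f}")
print(f"MAE of naive forecast:  {mae_naive:.1f}")
print(f"value added per period: {value_added:.1f}")
```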

And it gets better. Because, as a general rule, more volatile data series are more difficult to forecast than stable data series, this approach automatically allows for forecastability – the difficulty or ease of forecasting. So all of those pointless arguments about whether comparisons are fair or not disappear. In fact, our research has shown that it is possible to use this approach to calculate the limit of forecastability, so we can, for the first time, objectively measure the quality of forecasting. So instead of wasting our emotional energy on attributing blame, we can celebrate our successes and work out how to get even better.

This is not completely new. For a number of years Mike Gilliland of SAS has been advocating that forecasters measure forecast error at different steps in the forecast process (for example, before and after market intelligence is added to statistical forecasts) to determine where value is added and where it is destroyed. This approach supercharges Mike’s value added methodology, allowing us to analyse the value added by the entire process, not just individual process steps, to make objective judgements about the quality of forecasting, and to facilitate benchmarking in a way that was not possible before.
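A hedged sketch of what that stepwise measurement might look like, assuming (hypothetically) that we have kept both the pure statistical forecast and the final forecast after judgemental adjustment:

```python
import numpy as np

# Hypothetical series: actuals, the pure statistical forecast, and the final
# forecast after market intelligence / judgemental adjustment was applied.
actuals     = np.array([120.0, 95.0, 140.0, 110.0, 160.0, 105.0, 130.0, 150.0])
statistical = np.array([112.0, 102.0, 128.0, 118.0, 148.0, 112.0, 127.0, 142.0])
final       = np.array([115.0, 100.0, 130.0, 115.0, 150.0, 110.0, 125.0, 145.0])

def mae(forecast, actual):
    """Mean absolute error of a forecast against the actuals."""
    return np.abs(actual - forecast).mean()

# The naive forecast (last actual carried forward) covers periods 2 onwards.
err_naive = mae(actuals[:-1], actuals[1:])
err_stat  = mae(statistical[1:], actuals[1:])
err_final = mae(final[1:], actuals[1:])

# Positive numbers mean a step added value; negative means it destroyed value.
print(f"naive -> statistical:           {err_naive - err_stat:+.1f}")
print(f"statistical -> final:           {err_stat - err_final:+.1f}")
print(f"whole process (naive -> final): {err_naive - err_final:+.1f}")
```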

Sounds great, doesn’t it? But a few words of caution are in order before you dive headlong into this pool.

First, to be meaningful, value added has to be measured at a very granular level – at the level at which replenishment orders are generated, perhaps warehouse/SKU level. Also, since we cannot look at individual periods in isolation, we need to calculate averages of the absolute amount of value added, in order to focus attention on the items that matter. In addition, we would ideally be able to make a distinction between systematic error (bias) and unsystematic error (variation), as they often have different causes and so require different corrective action.
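As a small illustration of that last distinction (the error figures below are invented), bias and variation can be separated as simply as this:

```python
import numpy as np

# Hypothetical forecast errors (actual minus forecast) for one warehouse/SKU.
errors = np.array([5.0, -5.0, 10.0, -5.0, 10.0, -5.0, 5.0, 5.0])

bias      = errors.mean()          # systematic error: does the forecast run high or low?
variation = errors.std(ddof=1)     # unsystematic error: scatter around that bias

print(f"bias (mean error):        {bias:+.1f} units per period")
print(f"variation (std of error): {variation:.1f} units")
```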

All of this is likely to take the use of value added measures beyond what can be accomplished on a spreadsheet, particularly as we will want to drill up and down product hierarchies to track down the source of problems.

Finally, be prepared for a shock. Our research shows that it is not as easy to beat a simple naïve forecast, on average, as you might think; at the most granular level, typically up to 50% of forecasts destroy value. On the positive side, this demonstrates how easy it should be to increase the value of the forecast process.

So there will perhaps be some cost, and perhaps some false pride to be swallowed, but I would argue that this is a small price to pay for repositioning forecasting from back office whipping boys to respected professionals making a measurable contribution to the value of the business.
