
Signals and noise. Tackling forecast bias: Part 1.

The average level of MAPE for your forecast is 25%.

So what?

Is it good or bad? Difficult to say.

If it is bad, what should you do? Improve…obviously. But how?

 The problem with simple measures of forecast accuracy is that it is sometimes difficult to work out what they mean and even trickier to work out what you need to do.

Bias, on the other hand, is a much easier thing to grasp.

Systematic under- or over-forecasting is straightforward to measure – it is simply the average of the errors, including the sign – and it is clearly a ‘bad thing’. Whether you are using your forecasts to place orders with suppliers or to steer a business, everyone understands that a biased forecast is bad news. It is also relatively easy to fix: find out what is causing you to consistently over- or under-estimate and stop doing it!
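
To make the measure concrete, here is a minimal sketch in Python (the numbers and the “forecast minus actual” sign convention are illustrative assumptions, not taken from any particular forecasting system):

```python
# Hypothetical actuals and forecasts: the forecasts are consistently too high.
actuals   = [100, 120, 90, 110, 105, 95]
forecasts = [110, 130, 100, 118, 115, 104]

errors = [f - a for f, a in zip(forecasts, actuals)]   # signed errors: forecast - actual

mean_error = sum(errors) / len(errors)                 # bias: the sign is kept
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / len(errors)

print(f"mean error (bias): {mean_error:+.1f}")         # clearly positive: over-forecasting
print(f"MAPE: {mape:.1f}%")                            # accuracy, but says nothing about direction
```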

That is the theory, and it is right up to a point. In reality, however, what seems like a straightforward matter in principle can get surprisingly tricky in practice.

 In the first of two blog posts on the subject we will address the first practical problem:

 How do you tell the difference between chance variation and bias?

 There is always a chance that you will get a sequence of forecast errors with the same sign, purely by chance.

It's like flipping a coin; there is a good chance that you will flip two heads in a row. It is always possible, purely by chance, that you will get three heads in a row, and four, and, although it is much less probable, five in a row. In fact, you can never be 100% sure that your coin is biased – that it has two heads or two tails.

 By the same token, you can never be 100% sure that your forecast process is biased. It is a question of probabilities and the balance of evidence.

So one approach to spotting bias is to choose a given level of probability and not act until you have evidence that gives you that level of confidence.

For example, while you don’t want to have to deal with a lot of false alarms, if you have a monthly forecast process you clearly don’t want to wait a year to find out that you had a problem. How do you decide when to hit the alarm button?

The math for this is really simple.

Because the chance of flipping a head (or over-forecasting) purely by chance is 50%, the chance of two heads in a row is 50% × 50% = 25%, of three heads 12.5%, of four heads 6.25%, and so on. And since a run of either heads or tails would count, the chance of getting a run of errors with the same sign is double these figures: a run of 2 = 50%, a run of 3 = 25%, a run of 4 = 12.5%, a run of 5 = 6.25%.
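
A quick sanity check of these figures (a minimal Python sketch, not part of the original post):

```python
# Probability that n successive forecast errors share the same sign purely by
# chance, assuming each error is equally likely to be positive or negative:
# 2 * (1/2)**n, which simplifies to (1/2)**(n - 1).
for n in range(2, 6):
    p = 0.5 ** (n - 1)
    print(f"run of {n}: {p:.2%}")   # 50.00%, 25.00%, 12.50%, 6.25%
```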

For a monthly forecast process I would normally recommend sounding the bias alarm when you get 4 errors of the same sign in a row, which will give you roughly 90% confidence that you have a problem. Put another way, you are likely to have an average of only one false alarm a year - a level of risk that seems to be about right.
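
As an illustration of this rule, here is a minimal sketch (the function name and sign convention are my own assumptions; the four-in-a-row threshold is the one recommended above):

```python
def bias_alarm(errors, run_length=4):
    """Return True if the last `run_length` signed errors all share the same sign.

    With run_length=4, a false alarm from pure chance has probability
    0.5 ** 3 = 12.5%, i.e. roughly 90% confidence that something is wrong.
    """
    if len(errors) < run_length:
        return False
    recent = errors[-run_length:]
    if any(e == 0 for e in recent):                     # a zero error breaks the run
        return False
    return all(e > 0 for e in recent) or all(e < 0 for e in recent)

monthly_errors = [3, -2, 5, 4, 6, 2]                    # hypothetical forecast - actual values
print(bias_alarm(monthly_errors))                       # True: the last four errors are all positive
```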

This sounds reasonable, and it is a particularly effective way of quickly detecting changes in the pattern of bias. But what happens if you have, say, two errors in a row which are very large? Shouldn’t you sound the alarm sooner? And a small error of the opposite sign in the middle of a sequence of systematic errors doesn’t mean that a bias problem has gone away; we know that this kind of thing can happen purely by chance.

What this demonstrates is that, to spot bias reliably and quickly, you need to take account of the size of errors and not just the pattern. But how should the alarm triggers be set? For example, the level of variation in the forecast errors has a bearing on the statistical confidence you can have in a given average error, as does the size of the sample, i.e. the number of periods that have been used to calculate the average.

So it is clear that the simple approach of setting an arbitrary target for forecast bias won’t work; it will miss many problems and trigger false alarms which, at best, will lead to wasted effort and, at worst, will trigger inappropriate ‘corrective action’ that makes matters worse, not better. What is needed is a way of setting confidence levels for error that takes into account both the number of forecast errors and the level of variation in them.
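
One standard way of doing this, shown here purely as an illustration (the post’s own recommendation, a tracking signal, follows below), is a one-sample t-test of the mean error against zero: it weighs the average error against both the scatter of the errors and the number of periods observed. A minimal sketch, assuming SciPy is available and that `errors` holds signed forecast errors:

```python
from scipy import stats

errors = [4.0, -1.5, 6.0, 3.5, 5.0, 2.0, 4.5, -0.5]    # hypothetical signed errors

# Test whether the mean error differs from zero, given the sample size and
# the variation of the errors around their mean.
result = stats.ttest_1samp(errors, 0.0)

print(f"mean error = {sum(errors) / len(errors):.2f}")
print(f"t = {result.statistic:.2f}, p-value = {result.pvalue:.3f}")

# A small p-value (say below 0.10 for roughly 90% confidence) suggests the
# systematic error is unlikely to be chance variation alone.
```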

Fortunately, this is the kind of problem that is meat and drink to statisticians, and there is a wide range of available solutions. My recommendation would be to use a simple tracking signal (http://en.wikipedia.org/wiki/Tracking_signal), as the approach has been around for many years and is relatively easy to understand and calculate.
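
As a rough sketch of the calculation (the threshold of about ±4 is a widely quoted rule of thumb, not something prescribed in this post):

```python
def tracking_signal(errors):
    """Running sum of signed errors divided by the mean absolute deviation (MAD)."""
    if not errors:
        return 0.0
    cumulative_error = sum(errors)                      # signed, so genuine bias accumulates
    mad = sum(abs(e) for e in errors) / len(errors)     # scale of a typical error
    return cumulative_error / mad if mad else 0.0

errors = [4.0, 3.0, -1.0, 5.0, 2.0, 6.0]                # hypothetical monthly errors
ts = tracking_signal(errors)
print(f"tracking signal = {ts:.2f}")                    # about 5.4: persistent over-forecasting

# Flag the series for review when the signal drifts outside roughly +/-4 MADs.
```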

Conclusion

Bias is bad, and it is usually the first thing that businesses looking to improve the quality of their forecasts choose to target, with good reason. But while most people intuitively believe that it is easy to spot bias, on closer examination ‘common sense’ approaches based on targeting error are flawed because they cannot reliably distinguish between bias and chance variation.

What is needed is a probabilistic approach to spot bias and this post has outlined two ways of doing this:

  1. Looking at the sequence of errors to detect pattern bias
  2. Looking at the size of errors to detect magnitude bias

Using both methods in tandem minimises the chance of false positives – acting upon which will misdirect resources and degrade forecast quality – and false negatives, which perpetuate waste.

This approach is both robust and practical...at least when we are analysing individual forecast series. But this is only half the battle. Many demand managers are responsible for thousands of forecasts produced on a rapid cycle, and this brings with it a whole new set of challenges, which will be addressed in the next post in this series.

 
