A conversation with Michael Gilliland, product marketing manager at SAS.
Forecasting is difficult enough without people - sometimes forecasters but usually executives who lack forecasting expertise - making things worse. Michael Gilliland, product marketing manager at SAS, is the author of The Business Forecasting Deal: Exposing Myths, Eliminating Bad Practices, Providing Practical Solutions (Wiley). In an interview at the Institute of Business Forecasting & Planning's Best Practices conference in Dallas, he had plenty to say about the challenges forecasters face.
Q: How would you characterize the current state of business forecasting?
A: Gilliland: Practitioners, the people actually doing the forecasting, and the consumers of forecasts are slowly coming to realize the limits of what forecasting can be expected to deliver to them.
Q: What can it deliver?
A: Gilliland: When you think about it, forecast accuracy is ultimately determined by the forecastability of the demand patterns of whatever kind of behavior they're [dealing with]. Forecastability is largely determined by the volatility of the patterns you are trying to forecast. So, with smooth and stable demand patterns, you can forecast fairly accurately with simple methods. On the other hand, when we have wild, volatile, erratic patterns, we should not expect to achieve high levels of accuracy.
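[Gilliland doesn't name a specific volatility measure here. As an illustrative sketch, one common rough proxy for forecastability is the coefficient of variation; the demand numbers below are hypothetical.]

    import statistics

    def coefficient_of_variation(demand):
        # Standard deviation divided by the mean; higher values suggest
        # more volatile, harder-to-forecast demand.
        mean = statistics.mean(demand)
        return statistics.stdev(demand) / mean if mean else float("inf")

    smooth = [100, 102, 98, 101, 99, 103]     # stable pattern
    erratic = [10, 250, 5, 180, 40, 300]      # wild, erratic pattern
    print(coefficient_of_variation(smooth))   # ~0.02: simple methods will do well
    print(coefficient_of_variation(erratic))  # ~1.0: don't expect high accuracy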
Q: In contrast to an oft-heard buzzword, you've spoken of "worst practices." What do you mean?
A: Gilliland: We all like to think the things we do are contributing to our organization and making things better, but until you actually start measuring the impact of the things you, your colleagues and your processes do, you don't really know for sure that they are.
The story is really told in forecasting. It's important for organizations to look at and measure every single step in their forecasting process. What we often find is that we're in love with elaborate processes: the big computer models, a lot of data, fancy statistical models and so on. But in forecasting, these things often don't really make the forecast any better, and often the things we do make it worse.
Human beings tend to inject our own biases and personal agendas into our actions, and in an elaborate forecasting process, with a lot of consensus, collaboration and human touchpoints, there are more opportunities for people to inject those biases and agendas and actually make the forecast worse.
Q: Do you have any examples of that?
A: Gilliland: One favorite of mine is allowing executive management to have final say over the forecast. Typically, executive managers are not trained in forecasting, and they also have a lot of agendas in mind. They want to see a certain number. So if a forecast comes out that doesn't meet their expectations, or isn't a number they want to present to Wall Street, say, they may not approve it, or they may put in a different number, and that may ultimately not be in the best interest of the business. If you put in a number that's unrealistically high, you're just going to build inventory and potentially get yourself in a lot of trouble.
Another example is the outlier, an extreme data point that's much higher or much lower than you would expect given the rest of your historical information. One common practice is to just ignore those, mask them from history, and build the statistical forecast on the smoother history. The problem is, that makes us too confident in the accuracy of our forecast going forward, because we somehow assume that outliers aren't going to happen again. That's a crazy assumption. Outliers are telling us that bad things, strange things, have happened in the past and probably will continue to happen in the future. We should heed that message and be cautious, not overconfident, about future forecasts.
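[As a sketch of the overconfidence Gilliland describes, assuming hypothetical demand history and a simple spread measure, masking an outlier makes the data look far more stable than it really is.]

    import statistics

    history = [100, 105, 98, 102, 400, 101, 99]   # 400 is the outlier
    cleansed = [x for x in history if x < 300]    # "mask" the spike from history

    print(statistics.stdev(history))    # ~113: the honest spread
    print(statistics.stdev(cleansed))   # ~2.5: looks deceptively stable
    # Forecast ranges built on the cleansed spread will be far too narrow
    # if spikes like the 400 can recur.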
Q: Are benchmarks helpful in terms of forecast performance?
A: Gilliland: A lot of organizations look to industry benchmarks, but that's a mistake, because what we don't know about the benchmarked organization is whether it had easy-to-forecast or difficult-to-forecast demand, and how its forecastability compares to ours.
A better and more appropriate way to set forecast performance goals is to do no worse than a naïve forecasting model. A naïve model is something simple and easy to compute, like a moving average or a random walk, which just takes your last observation: if you sold 12 last week, your forecast becomes 12 this week. That's essentially a free forecasting system. You don't need people or fancy systems; you basically get a forecast for free using a naïve model. So your goal needs to be: do no worse than that. You hope to do better, but no worse. If your processes, your people, your systems, your models are doing worse than naïve forecasting, then why bother? You're just wasting your time.
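[A sketch of the random-walk naïve model Gilliland describes, with hypothetical sales data; MAPE is used here as an assumed error measure, not one he specifies.]

    def random_walk_forecast(history):
        # Naive forecast: the prediction for each period is simply the
        # actual value from the period before it.
        return history[:-1]

    def mape(actuals, forecasts):
        # Mean absolute percentage error.
        errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a]
        return 100 * sum(errors) / len(errors)

    sales = [12, 15, 11, 14, 13, 16]
    naive = random_walk_forecast(sales)   # [12, 15, 11, 14, 13]
    print(mape(sales[1:], naive))         # the "free" baseline any process must beat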
Q: You advocate forecast value-added analysis, or FVA. Why?
A: Gilliland: Tom Wallace, the thought leader in S&OP, has called FVA the lean approach applied to forecasting. What we're trying to do with forecast value-added analysis is map out the process you're using. You have your historical data fed into a statistical model. A forecast analyst then reviews and may override the result, it may go on to a consensus or collaborative process, and it may go to executive management for review.
So you measure the accuracy or error of the forecast at each of those steps and see which steps are actually making it better and which ones are making it worse. FVA is just basic science applied: trying to determine the impact of each step in your process. The key is to identify and eliminate wasted effort; that is how it ties into lean. If you find things that aren't making the forecast any better, question why you are doing them. Get rid of them, and you will potentially achieve better accuracy by eliminating the things that make forecasts worse, while spending fewer resources on forecasting.
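[As an illustrative sketch of an FVA comparison, with hypothetical step names and error figures, each step's error is measured against the step before it.]

    process = [                       # (step, measured MAPE %) in process order
        ("naive model",       30.0),
        ("statistical model", 25.0),
        ("analyst override",  26.5),  # added error: a candidate for elimination
        ("executive review",  29.0),
    ]

    for (prev, prev_err), (step, err) in zip(process, process[1:]):
        fva = prev_err - err          # positive = this step added value
        verdict = "adds value" if fva > 0 else "makes the forecast worse"
        print(f"{step}: MAPE {err:.1f}% (FVA {fva:+.1f} vs. {prev}) -> {verdict}")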
Q: Any examples of anyone successfully using FVA?
A: Gilliland: A number of companies have done so over the last six, eight, 10 years. Big names that have spoken publicly include Intel, Cisco, Tempur-Pedic and Yokohama Tire Canada. So it's been adopted by a large number of companies, and often they are finding that things they are doing are just making their forecasts worse.
The ultimate goal of forecasting, I think, is to generate forecasts as accurate as we can reasonably expect, given the nature of the demand patterns we deal with, and to do this as efficiently as possible.
Resource Link:
SAS Institute