Seven Sins of Demand Planning

On the first afternoon, it could be summed up as, “Oh father, we have sinned. Please forgive all of us sinners.”

This conference in Dallas was a good time for me to reflect on the history of demand planning.  IBF celebrated their 30th Anniversary in Dallas without even a party.  I give thanks for IBF and for the vendors like Autobox, John Galt, Logility, SAS Institute, and Terra Technology that support their events. I think that we owe them a debt of thanks for continuing their advancement of demand planning excellence.  In my opinion, the greatest sin of all is that we have spent thirty years developing forecasting processes that are largely not used or trusted by the organizations that they serve.  Here, in this blog post, I share my reflections on the group’s discussion on sins….

The Seven Sins

The group discussion included these seven deadly sins:

Sin #1.  Not Using the Statistical Forecast to Drive Continuous Improvement. I have never worked with a company that could not improve its forecasting through better use of statistics.  However, most companies are skeptical.  Inherent in the DNA of most firms are “experts” who believe that they know the business better than any statistical package ever could.  Given that a forecast is always wrong, and that the forecasting process is fraught with political issues, companies struggle with how to use and gain acceptance for statistical forecasting.

While benchmarking the forecast is difficult (reference blog post Trading Places), measuring continuous improvement through Forecast Value Added (FVA) analysis is a helpful, and easier, method to drive continuous improvement.  In most FVA analysis presentations that I have seen lately, the statistical forecast is improving on the naive forecast—a forecast based on the prior month’s order history—by 3-5%.  Similarly, the lack of managerial control and discipline in the consensus forecasting process is reducing forecast accuracy by 2-5%.  The technique allows companies to measure and improve forecast accuracy, and to gain business alignment and support for the effort by dollarizing the impact of forecast error.  For example, one of the speakers at the conference shared that a 2% improvement in forecast accuracy was worth two headcount in his business.  If the forecast could be improved by 2%, he could reduce the time spent on order expediting.  Bottom line:  Don’t look at forecast accuracy in isolation.  (For those of you not familiar with the technique, the white paper written by SAS is very useful.)
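
For readers who want to see the arithmetic behind FVA, here is a minimal sketch in Python; the forecast streams and the numbers below are hypothetical illustrations, not client data:

```python
# Forecast Value Added (FVA): compare each forecast stream's error against
# the error of the step before it, starting from a naive forecast
# (the prior month's actuals). All series here are hypothetical.

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals     = [100, 120, 110, 130, 125, 140]
naive       = [ 95, 100, 120, 110, 130, 125]   # prior month's actuals, shifted forward
statistical = [102, 115, 112, 126, 128, 136]   # output of the statistical model
consensus   = [110, 125, 105, 135, 120, 148]   # after managerial overrides

naive_err = mape(actuals, naive)
stat_err  = mape(actuals, statistical)
cons_err  = mape(actuals, consensus)

# FVA of a step = error of the input forecast minus error of the step's output.
print(f"FVA of statistics over naive:     {naive_err - stat_err:+.1f} pts")
print(f"FVA of consensus over statistics: {stat_err - cons_err:+.1f} pts")
```

A positive FVA means the step improved on the forecast it received; a negative FVA, as in the consensus step of this toy example, means the step destroyed accuracy and is a candidate for removal or tighter control.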

Sin #2.  Only Owning Part of the Forecast. To use a baseball analogy, most demand planning teams are in the “outfield.” They “catch the forecast” from sales and marketing without owning the entire process.  They catch and throw the forecast across functions without value-added analysis.  Best-in-class teams, by contrast, own the entire forecast. They know the baseline forecast and work on root-cause analysis to improve demand-shaping programs: price, promotions, marketing events, new product launches, and sales incentives.  What does the difference look like?  For one company that I worked with over the past two years, this change was worth 5 million dollars in reduced obsolescence.  Bottom line: Move out of the outfield and back to home plate to throw the ball so that the organization can hit home runs.

Sin #3.  Misuse of Downstream Data as an Input. When running out a product—to prevent obsolescence—be careful in the use of downstream data.  Realize that you are pushing into the channel and that you do not want to drive replenishment.  If you don’t have this discipline, you will recreate the Green Volvo story.  Remember that one?  Hau Lee tells it this way: Volvo was awash in chartreuse green cars.  Despite trying every option at the distributors to push the cars, the cars were not selling.  So the company decided to offer them at a significant price reduction to move them and reduce inventory.  However, this strategy was not communicated across the organization to demand planning.  As a result, when the green Volvos sold, the sales orders triggered a forecast, the forecast consumption logic triggered replenishment, and the factory cranked the production lines back up to make more green Volvos.  I was telling this story a couple of years ago to a company that makes women’s intimate apparel, and they started laughing incessantly.  I finally stopped and asked why.  In between uncontrollable laughter, the company shared that their Green Volvos were leopard skin fur thongs.  So this sin spans all industries, from cars to lingerie….

When pushing SLOB (Slow and Obsolete Inventory), turn off the knob that uses downstream data, and be careful to not let the clearance orders drive replenishment.  By contrast, downstream data should be used to trigger the completion of promotional replenishment.  Sensing when to end a promotion is also essential to eliminating SLOB.  Bottom line:  Design the forecasting process, and the use of its output, from the outside-in.  In driving accurate replenishment, there is no substitute for knowing true channel behavior.
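
The discipline above can be sketched in code: exclude clearance (“push”) orders from the demand signal that consumes forecast and drives replenishment. The order records and the `clearance` flag below are hypothetical illustrations of the idea, not any specific system’s data model:

```python
# Sketch: keep clearance ("push") orders from driving replenishment.
# In the Green Volvo story, clearance sales fed straight into forecast
# consumption and restarted production. Flagging them avoids that loop.

orders = [
    {"sku": "GREEN-VOLVO", "qty": 40, "clearance": True},   # priced-to-move inventory
    {"sku": "GREEN-VOLVO", "qty": 5,  "clearance": False},  # true channel demand
    {"sku": "BLUE-VOLVO",  "qty": 30, "clearance": False},
]

def consumable_demand(orders):
    """Aggregate only non-clearance orders into the demand signal
    that is allowed to consume forecast and trigger replenishment."""
    demand = {}
    for order in orders:
        if not order["clearance"]:
            demand[order["sku"]] = demand.get(order["sku"], 0) + order["qty"]
    return demand

print(consumable_demand(orders))   # clearance volume is excluded
```

The design point is simply that the replenishment engine should never see push volume as organic demand; only the 5 true-demand units of the green car would consume forecast here.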

Sin #4. A Project, not a Program: A frequent question that I am asked is, “How can I implement demand planning faster?” I will answer the question, but then I will ask,
“Aren’t you shooting for the wrong goal?  Shouldn’t your goal be to implement demand planning well, not fast?” One of the companies that I admire, one that has proven year over year to be a great leader in the use of SAP APO DP, is General Mills.  When I wrote a case study of the General Mills implementation as an AMR analyst, many companies pushed back and asked why I picked General Mills to showcase.  The reason was simple: they did not implement demand planning the fastest, they did it the best.  For them, it was a program.  It was valued.  They wanted to get it right. It was not a project to implement quickly.

Sin #5.  Not All Items Are Created Equal: In the words of one participant in the workshop, “get to know the DNA of your item.” A few years ago, I was working with a company that made baby formula.  Its most important, and lowest-volume, item was the samples sent to hospitals for new mothers.  These samples were distributed on maternity wards at the birth of a baby to promote product trial. A successful trial could drive a couple of years of consumption through the child’s infant years. So, a forecast error on these products was worth substantially more than a forecast error on turn volume.

Sin #6.  Forecast with the End in Mind: This may sound simple, but it is a sin that is frequently committed.  While many companies have set up their forecasting systems to forecast what manufacturing needs to make and when, the greater opportunity is to model what the channel is going to sell and when.  The company then translates these demand requirements to internal and external manufacturing locations.  It is not as easy as just modeling the selling unit at the retail-chain level.  This is usually too low a level to forecast—there is insufficient data to be statistically relevant—for the forecasting process.  Likewise, with the increased need for transportation visibility, there is a need to forecast transportation requirements and to use channel data to determine distribution requirements.  Experience has shown that forecast consumption logic and one-number forecasting are not sufficient.  Instead, multiple forecasts need to be translated into a demand visibility signal for the corporation.
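
As a toy illustration of translating channel-level demand into a signal for the locations that ship it, here is a sketch that rolls hypothetical store-level forecasts up to the distribution centers serving them; the store names, DC names, and numbers are all invented for the example:

```python
# Sketch: translate a channel (store-level) forecast into a demand signal
# for the distribution centers that replenish those stores.
# The store-to-DC mapping and volumes are hypothetical.

store_forecast = {"store_1": 40, "store_2": 25, "store_3": 35}   # units next month
store_to_dc = {"store_1": "dc_east", "store_2": "dc_east", "store_3": "dc_west"}

def translate_to_dc(store_forecast, store_to_dc):
    """Aggregate store-level forecasts into DC-level distribution requirements."""
    dc_demand = {}
    for store, units in store_forecast.items():
        dc = store_to_dc[store]
        dc_demand[dc] = dc_demand.get(dc, 0) + units
    return dc_demand

print(translate_to_dc(store_forecast, store_to_dc))
```

In practice the same translation continues upstream, from DCs to internal and external manufacturing locations, so the channel forecast becomes one visibility signal with several location-specific views rather than one number forced on everyone.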

Sin #7.  Arrogance: Not Serving the Organization. At the conference, the SVP of Radio Shack gave a presentation on what makes a great demand planning group.  His words of wisdom were “be humble” and “serve the organization.” In his experience, when demand planning groups become arrogant—a “know-it-all” group that polices the forecast—everyone loses.

What do you think?  Do you have any sins of forecasting that you would like to share with the readers of this blog?  Or do you have any insights on the sins outlined and thoughts to share on how others can improve their forecasting? For more on demand management, check out these posts:

Beyond Smoke and Mirrors

Trading Places

This week, I am speaking at the Midwest Health Care Exchange.  Look for a post next week on new research on How to Heal the Life Sciences Supply Chain.

Yesterday, I had a great day at the Eli Lilly learning center in Indianapolis, looking at their progress on item serialization with 2-D barcodes.  It was great to see some of my 2002 Gartner research on RFID used to help them formulate a winning strategy.

Lora Cecere


Lora Cecere is the Supply Chain Shaman. A shaman interprets and connects an evolving world to a group of followers. Lora does this for supply chain. As the founder of Supply Chain Insights and the author of Supply Chain Shaman, Lora travels the world to chart the course of supply chain practices and disruptive technologies. Her blog focuses on the use of enterprise applications to drive supply chain excellence.


Join the discussion 12 Comments

  • Charles Chase says:

    The real sin is that after 30 years nothing has really changed at all. Companies are still primarily using shipments (replenishment data) to create a statistical baseline forecast reflecting only trend and seasonality, and then handing the statistical baseline forecast over to others within the organization to add their biased judgment. It’s like lining up ten kindergarten children in a row, whispering a sentence in the first child’s ear, and then telling him/her to repeat the sentence to the others. By the time it gets to the tenth child’s ear, the sentence doesn’t remotely resemble the original. In this case, the final demand forecast doesn’t remotely resemble the original statistical baseline forecast, which is, nine times out of ten, more accurate. In fact, a study conducted several years ago by Goodwin and Fildes (Bath University in the UK), published in the Fall 2007 issue of Foresight: The International Journal of Applied Forecasting, found that 85% of the time when someone raised the forecast using judgment, they made the forecast less accurate, because people are overly optimistic when they raise the forecast. Conversely, 85% of the time when they lowered the forecast, they actually improved the accuracy, because people are more conservative when they lower the forecast. However, 95% of the people who touch the forecast actually add no value at all, because they make such minor adjustments that there is no real impact on the final outcome. When asked why they were touching the forecast, they all said, “If we don’t touch the forecast, we’re not doing our job.” By touching the forecast they were doing a disservice to the company, and could have been more productive doing something else.

    In fact, a large majority of the companies that we work with are looking for an automated “Easy Button” when it comes to demand forecasting. However, there is hope. We recently visited a large CPG customer who has been using SAS forecasting technology since 1994 and is considered the best-in-class division within their global company. They have in place a very structured process that relies on analytics and FVA (Forecast Value Added) to drive the demand forecasting and planning process. The company has two other divisions in the US that are also SAS customers. They have implemented our advanced forecasting analytics software to sense demand signals associated with sales promotions and marketing events, and then, using “what if” scenario analysis, to shape future demand based on the unit volume and revenue (profitability) of those sales promotions and marketing events. If a sales promotion or marketing event doesn’t pay out based on the analytics, they are required to cancel the event or reconfigure the pricing and costs to make it profitable. In my opinion, the greatest sin of all is that we have spent thirty years developing forecasting processes, implemented more advanced analytics, and gained better POS (point-of-sale) data, and only a handful of companies are using it, because it is not trusted by the organizations that it serves.

    When I worked for a large CPG company that manufactured and sold toothbrushes, they decided to do a “buy-one-get-one-free” (BOGO) promotion without thinking about the long-term effects of giving away a free toothbrush. They were only concerned with short-term unit volume and trial. Well, most people go to their dentist twice a year for a check-up and cleaning. When they leave, the dentist gives them a free toothbrush. On average, people use four toothbrushes a year and purchase two toothbrushes from their local retailer. So, by running the BOGO sales promotion, the company put its consumers out of the market for close to two years. Meanwhile, their competitor came out with a toothbrush with a blue stripe down the middle and advertised that when the blue stripe fades, you need a new toothbrush. It was pretty ironic that those toothbrushes’ blue stripes faded in three months.
    It really boils down to politics (power and control), the real purpose of the demand forecasting and planning process, and the misalignment of performance metrics. When I was a C-level manager at a very large global beer company, one of my directors came to me and said we had a great forecast last month; the forecast was 92% accurate. I said, so what! My director said, what do you mean, you are the forecasting guru. I replied, yes, that was a great forecast, but as a C-level manager I don’t get measured on forecast accuracy. I get measured on revenue, profit margins, customer service, cost of finished goods inventory, and on-time delivery. So, I explained that he needed to create a balanced scorecard that graphically and tabularly connected forecast accuracy to all the key business indicators that C-level managers are measured on. That way we would see how forecast accuracy drives those indicators, and then he would have our attention. In 98% of the companies we visit, no one uses balanced scorecards connected to forecast accuracy. In fact, we recently visited a company where the demand forecasting team’s performance metric was based on the aggregate-level (top-level) error rate. When we asked what the lower product-mix accuracy rate was, they said that they didn’t know or care, because they only get measured on the aggregate-level forecast error.

    The real sin is that, on average, a 1% increase in forecast accuracy equals at minimum a 2% reduction in finished goods safety stock. This is based on our customers’ feedback and surveys we have conducted over the past several years. This is huge if you are carrying $60-$100M of finished goods inventory. We have also seen that, on average, most companies can improve their forecast accuracy using analytics by as much as 10%-20% in the first year. So, if you do the math, that’s a lot of cost savings.

    Remember, developing a best-in-class structured demand forecasting planning process driven by data and analytics is a journey. It doesn’t happen overnight. It takes discipline, resources, skills, and technology. Even our Best-in-Class CPG customer will tell you that they’ve been working on it since 1994, and are still looking for better processes, analytics, and technology to further enhance their demand forecasting and planning process.

    • Lora Cecere says:

      Yes, Charlie, I agree. Most FVA analysis shows only a 2-5% improvement from statistics, but even that can have a significant impact. Unfortunately, there is no “easy button”.

  • […] Rickard of Newell Rubbermaid, and industry analyst Lora Cecere of Altimeter Group. Check out Lora's Supply Chain Shaman blog for an extensive account of our worst practices confessional. (See also my colleague Charlie […]

  • Kiran Gajiwala says:

    Thanks, Lora, for the detailed and useful information provided here. Many say demand planning is an art; others say it is a science. But we have yet to find the role of forecasting in a manufacturing unit when the strategy is a make-to-order scenario. In our manufacturing unit, we generate a statistical forecast from historical demand, and then the salesmen correct the forecast based on feedback from their customers. We generate a first unconstrained supply plan and then a constrained supply plan based on the consensus forecast and the orders available in the system. After running several scenarios, the month’s plan is finalized; it has two parts, one of firm orders and another of forecast, which in turn converts to Available Capacity to Promise or Available to Promise (ATP). But due to high variability on the supply and even the demand side, our accuracy is too poor, which leads us to move ATP allocation from one customer to another. This forces us to think about how much use a forecast is in a manufacturing environment where a make-to-order strategy is followed.
    What do you suggest?

    • Lora Cecere says:

      Hi Kiran,
      In a make to order environment, the forecast needs to tie to the sourcing of materials and the building of sourcing relationships. The forecasting process can be improved by looking at the contract pipeline and the status of the inbound sales negotiations.

      • Charles Chase says:

        I agree with Lora. However, I do believe you can always improve your forecast accuracy using data, analytics, and domain knowledge.

        Are you really packaging to order, rather than making to order?

        Many companies that think they are making to order are actually packaging to order. In a package-to-order manufacturing environment you still need to forecast raw materials and WIP, which means forecasting demand is still critical to managing the supply chain (manufacturing). The only true make-to-order company is a company like Boeing, which bids on contracts and then builds the airplanes over a several-year period. They don’t carry finished goods inventory or even package to order, due to the costs of raw materials and WIP. Even in that situation, Boeing still forecasts market conditions that would prompt an airline like American to purchase new planes. They also predict the risk of winning or losing the contract. On the other hand, a company like Dell packages to order. Dell needs to forecast all the raw materials and WIP, and as orders come in they package (assemble) the laptops. The raw materials (components) and WIP (semi-built boxes) are forecasted and waiting to be assembled. Many of the laptops have common components, making it easier to package (share common components) to order. Even in this environment, Dell strives to forecast all the different products they package to order.

        Now, many companies like Dell that package to order also use demand-shifting techniques to compensate for poor forecasting. For example, if you go online and order an M124 laptop and the components are not available to package it, the computer manufacturer will promote their M134 with more features at the same price to attempt to shift demand. This is called shifting demand at the point of sale. You can also shift demand at the point of supply by negotiating during the S&OP process with sales/marketing to move out sales promotions and marketing events, shifting demand into the future to provide time to manufacture supply.

        I don’t know your business, so it is difficult to provide more suggestions.

  • Rebecca says:

    Hi Lora,
    Can I get General Mills’ case study from somewhere?


  • Joe says:

    In his experience, when demand planning groups become arrogant—a “know-it-all” group that polices the forecast—everyone loses.


  • Shaun Snapp says:

    Speaking to the first point, there is a major disconnect regarding how the sales forecast and the statistical forecast should be combined. This has become increasingly obvious the more clients I perform forecasting improvement projects for. For instance, very few companies segment their product-location database into those combinations where sales/marketing improves the forecast and those where it does not. Every company should know this basic information, but to know it, one has to test, and then record and publish the results.

    Over the past several decades there has been a focus on getting specialized forecasting systems into companies that eventually found out how lacking the forecasting functionality in ERP systems generally was. The overestimation of the forecasting functionality within ERP systems was a major piece of misinformation on the part of ERP vendors. The idea that one system could do everything set forecasting back considerably. However, even though many companies now have an external forecasting application, their forecasting processes are still remarkably uninformed compared to the processes that I have seen work.

    Furthermore, the human side of the equation is underfunded. Very few companies even have a career pathway to appropriately compensate those who want to, or simply have a natural predisposition to, deepen their forecasting knowledge. At most companies the only real pathway to advancement is to move into management — so managing demand planners is deemed more important than being an excellent demand planner.

    I keep hearing how important the forecast is to companies — but where is the investment beyond purchasing an application? I have been trying to quantify the costs of forecast inaccuracy recently, to provide a “bill” that the company incurs every month for poor forecasting. If the approach to quantifying what forecast inaccuracy costs became broadly applied and understood, perhaps the money for funding the entire forecasting process appropriately would be more easily obtained.

  • […] the season, but sum-total demand does not necessarily translate into demand for your product.  Focus on your product in your market and you’ll avoid demand-sensing overload. British regulars suffer heavy losses because colonial […]
