The storyline is old. This blog is a new take on an old story.
The storyline was the central theme of the 1983 American comedy titled Trading Places starring Dan Aykroyd and Eddie Murphy. Remember it? It was one of my favorites: a funny movie where an upper class commodities broker and a homeless street hustler switch roles when they are unknowingly made part of an elaborate bet.
It is an ageless one, where a less fortunate character trades places with a more fortunate one. As a child, I was enthralled as I saw it play out in Mark Twain’s The Prince and the Pauper and Disney’s The Parent Trap. While those stories are fictional, this week I found a story where it happened in real life, in supply chain. Some of my favorite supply chain management leaders (organizations that I have worked with for over seven years) unknowingly traded places in their organizational capabilities to forecast demand. Here I share what made the difference.
Before I tell the story, let me share a quick perspective on what I have learned about benchmarking demand metrics. I have been working in this area for seven years, and it is one of the hardest areas of the supply chain to benchmark. Of all the supply chain metrics, it is the BAD APPLE; however, for most companies, it is also the most leverageable metric, making it the BIG APPLE.
While companies eagerly want demand data and want to improve their processes, benchmarking forecast accuracy is difficult. Why is it so hard? Let’s start with the two major reasons:
Reason #1. It is Hard to Get Apples to Apples. It is a Fruit Basket. Every company does it differently: different hierarchies, different frequencies, and different measurement systems. It is the most inconsistent area of the supply chain to benchmark.
When doing this type of work, it is essential to have an apples-to-apples comparison. To do this, you need to look closely at five variables: the frequency of planning (e.g., monthly, weekly, or daily), the granularity of planning, the construct of the data model (e.g., what is modeled), the inputs into the data model (e.g., shipments, orders, channel data), and the drivers of demand forecast variance (e.g., promotions, seasonal builds, etc.). To get it right, the data must be scrubbed and normalized to ensure an apples-to-apples comparison. As a result, companies should never accept data from self-reported sources (e.g., APICS, IBF, APQC, the SCOR Council, and most industry surveys).
Reason #2. The Apple Doesn’t Fall Far from the Tree. The second reason is that the data is hard to get. To be useful, since market conditions change, the data set needs to represent a like peer group from the same point in time. Since many companies have multiple supply chains, and competitors tend not to want to share data directly with one another, getting the data is quite a feat.
I ran into Robert Byrne, CEO of Terra Technology, this week, and I was excited to find that he had just finished a project to benchmark demand data for the consumer products companies that he had worked with in deploying his software. Five of the companies were organizations that I had benchmarked in 2003 and worked with over the past five years. While neither Rob nor I can share the names of the companies, I would like to share my insights on their journey. It is truly a story of Trading Places.
While this story may not be as much fun as the original movie, it is a real story where a focus on supply chain basics made a difference. In Table 1, I show the relative positions of the companies in the two analyses:
Table 1: Comparison of Five Consumer Products Companies Forecast Accuracy
| Company Designation (Monthly Forecasting at an Item/Ship-From Level at a 30-Day Lag) | 2003 Relative Ranking of Forecast Accuracy | 2011 Relative Ranking of Forecast Accuracy | Organization: Regional versus Global Focus |
|---|---|---|---|
| | | | Matrix organization with a change in reporting through go-to-market teams |
| | | | Centralized with a strong focus on analysis |
| | | | Strong regional focus |
| | | | Matrixed organization with global reporting through supply chain |
| | | | Centralized with strong IT/line-of-business partnering |
Has the industry made progress? Yes, some, but not a great leap forward. For the group of companies that Terra Technology benchmarked, the average monthly Mean Absolute Percentage Error (MAPE) at a one-month lag was 31% ± 12%. Data from eight years ago for the same companies averaged 36% ± 10% MAPE. The result? This group of consumer products leaders has gotten slightly, but not significantly, better at demand forecasting. They have weathered a storm of market changes that could have made the forecast FAR worse. While few people in their organizations are giving these leaders pats on the back (demand planners are used to getting kicked), I expected the results to be FAR worse. The industry has experienced major shocks. The list is long, but it includes shorter product lifecycles, product proliferation, higher levels of promotions, changes in competitive behavior, and global expansion.
Trading Places. What made the difference? Three elements drove the difference in relative position: organizational reporting, process discipline, and the use of data-driven processes.
Trading Places. What did not make a difference? The type of software used for tactical forecasting did not make a statistically significant difference. It was the USE of the software, rather than the SELECTION of the software, that made the real difference.
Organizational reporting. The company that had the best performance in 2003 and the worst performance in Rob’s benchmarking introduced a very high forecast bias through a change in forecast reporting relationships. Shortly after the 2003 benchmarking, the company decided to have the forecasting group report through sales, where there was a pervasive belief that over-forecasting would drive higher sales. This decision increased bias and cast a cloud over the process. The lack of a “true north” in the organization became a stumbling block to improving forecast accuracy.
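Bias of this kind is easy to measure. As a minimal sketch (with made-up numbers, not the company's data), mean percentage error stays near zero for an unbiased process, while a persistently positive value signals the over-forecasting described above:

```python
def forecast_bias(forecasts, actuals):
    """Mean percentage error: positive means persistent over-forecasting."""
    errors = [(f - a) / a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

# Illustrative monthly data: forecasts consistently above actual shipments.
forecasts = [120, 115, 130, 125]
actuals = [100, 105, 110, 108]
print(f"Bias: {forecast_bias(forecasts, actuals):.1%}")
```

Tracking this number alongside MAPE is what exposes a "sales wants a bigger number" problem, since MAPE alone cannot distinguish over- from under-forecasting.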
Process discipline. Better math? In the new Terra Technology study, the use of statistical modeling software improved the forecast by 3% on average (in MAPE at a one-month lag) when compared to a naive forecast (volume planning where the forecast is simply what was shipped last month). For the leaders, using the math made a difference: in the top quartile of customers, the impact was 2X, a 6% improvement in MAPE. What is a 6% improvement in forecast accuracy worth? Based on AMR Research correlations, a 6% forecast improvement could improve the perfect order by 10% and deliver a 10-15% reduction in inventory. The greatest impact is seen in slow-moving items on the tail of the supply chain. Unfortunately, most companies let their supply chain tail whip them around.
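The comparison being made here can be sketched in a few lines. This is an illustrative toy example (the shipment and forecast numbers are invented, not the benchmark data): MAPE for a statistical forecast versus the naive "forecast equals last month's shipments" baseline:

```python
def mape(forecasts, actuals):
    """Mean Absolute Percentage Error, as a fraction of actuals."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

# Illustrative monthly shipments and a forecast made one month ahead.
shipments = [100, 110, 95, 120, 105, 115]
stat_forecast = [104, 103, 101, 112, 108, 111]

# Naive forecast: next month's number is simply last month's shipments.
naive_forecast = shipments[:-1]   # forecasts for months 2..6
actuals = shipments[1:]           # actuals for months 2..6

print(f"Statistical MAPE: {mape(stat_forecast[1:], actuals):.1%}")
print(f"Naive MAPE:       {mape(naive_forecast, actuals):.1%}")
```

The gap between the two MAPE figures is the value the statistical model adds over "ship what we shipped"; in the study above, that gap averaged 3%, and 6% for the top quartile.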
It doesn’t just happen. It must be data-driven. Basics matter. For me, the interesting story underneath the data is the switch in position of the players over the course of the eight years. In this period, the best-in-class company of 2003 became the worst performer, and two low performers propelled themselves forward. These companies focused hard on the basics: cleaning data, identifying an accurate baseline forecast, frequently tuning supply chain planning software, building a strong corporate demand planning team that reports through supply chain, and using the statistics.
Thoughts on tactical forecasting: While technology vendors like to brag that the use of their technology makes a difference in supply chain leadership, the data here is inconclusive. Instead, what made a difference in relative position was process, data, and organizational reporting. I know, not the sexy stuff, but when it comes to tactical forecasting, the basics matter. And while many companies think that they can overcome the deficiencies of a bad forecast through shorter cycle times, this is short-sighted. The biggest advantage for the great forecasters is in improving tactical decision-making (long-range planning, usually 3-18 months out) to invest in the right manufacturing asset strategies, sourcing and commodity hedging plans, and long-range planning with carriers. Companies that do not do this well are pushed to always react. They are forced to stay on the “back foot,” with a serious impact on costs, customer service, and inventory turns.
The data also supports the fact that tactical forecasting by itself is not sufficient. The design of conventional supply chain software, where the tactical forecast is consumed through rules-based consumption, is deficient. The work by Terra Technology in developing demand-sensing capabilities improves the forecast by 15-33% (based on client interviews), sharpening supply chain decision-making in the operational horizon (a forecast duration of 3-12 weeks). This is a difference that matters for deployment, inventory, and manufacturing planning.
Wrapping it Up
I commend Terra Technology for spending the energy and the manpower to benchmark their client base. This type of commitment to the client base differentiates a vendor and creates long-term relationships. I am also excited that Chainalytics is starting a demand planning benchmarking practice. It is my hope that this type of analysis can become part of continuous improvement efforts for supply chain leaders.
I look forward to seeing your insights. What do you think? Did I miss anything?