Stitching Together Forecasts

By Ric Kosiba, Ph.D., Genesys

In this space a few newsletters ago, we discussed how to automate your forecasting process. We discussed the regular steps required for developing a time series forecast: outlier detection, determining trends and seasonality, choosing the appropriate methods, and then proving the forecast's accuracy. The gist of that article was that these processes can be automated, and vendors in our space are looking to do exactly that.

In this article, I would like to focus on this last part, the “proving the accuracy” part, as we’ve learned a fair amount about the importance of doing this correctly.

When building a statistical forecasting model, one of the major decisions you have to make is to determine what you mean by a “good model.” In the last article, we glossed over this point, simply referring to it as “error.” That was a bit of a mistake — your definition of error may very well be the most important decision you make when building a forecast.

What is Error and Why Does it Matter?

There are many statistical methods for defining whether a particular forecast is good or bad, and you can look them up. Here are some standard ones (pulled from an article I wrote on this subject here in 2007), with a quick sketch of the calculations after the list:

  • Mean Error: This method lets you know whether, on average, your forecast is close to your actual volumes. It also lets you know whether your forecast has a bias above or below actual volumes.
  • Mean Absolute Error: This method gives you a pretty good picture of how varied your forecasts are compared to actuals, or how far off you can expect your forecasts to be in either direction.
  • Root Mean Squared Error: This method also measures the amount of the deviation (or error). RMSE has a very cool property in that, given data with the appropriate distribution, it can be used to build confidence intervals.
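To make the definitions concrete, here is a minimal sketch of the three calculations in Python; the function name and the sample volumes are made up for illustration, not taken from any workforce management tool:

    import numpy as np

    def forecast_errors(actual, forecast):
        """Compute the three error measures for one forecast against actual volumes."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        diff = forecast - actual                # positive = overforecast, negative = underforecast

        mean_error = diff.mean()                # bias: are we consistently high or low?
        mean_abs_error = np.abs(diff).mean()    # typical miss, in either direction
        rmse = np.sqrt((diff ** 2).mean())      # penalizes large misses more heavily
        return mean_error, mean_abs_error, rmse

    # Made-up weekly call volumes, just to show the calculation
    actuals = [10200, 9800, 11150, 10400]
    forecasts = [10000, 10100, 10900, 10600]
    print(forecast_errors(actuals, forecasts))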

And there are other equally standard methods. But that isn’t what I want to discuss either. Instead, I want to chat about how your definition of error — from a business perspective — can change the way you should be developing your math. I’ll stop beating around the bush:

A good way to set up your forecasting process is, when backcasting, to define error directly for the business problem at hand. Meaning, if I care most about forecasting for the next few weeks, my error definition should describe the error for the next few weeks only. If I am putting together a long-term forecast for the purposes of hiring, planning, and budgeting, it makes sense to measure error across the year, perhaps with an emphasis on the operation's peak weeks. But it may also be important to never understaff, so negative error (underforecasting) is more of a problem than positive error.
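As a sketch of what defining error for the business problem might look like, here is one hypothetical scoring function that counts misses during peak weeks more heavily and penalizes underforecasting (which leads to understaffing) more than overforecasting. The specific multipliers are assumptions for illustration only:

    import numpy as np

    def business_error(actual, forecast, peak_weeks, peak_weight=3.0, under_penalty=2.0):
        """Score a backcast against the business priorities described above.

        peak_weeks    -- True/False flag for each week marking the operation's peaks
        peak_weight   -- how much more a peak-week miss counts (assumed value)
        under_penalty -- how much more an underforecast counts than an overforecast (assumed value)
        """
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        diff = forecast - actual

        weights = np.where(np.asarray(peak_weeks), peak_weight, 1.0)
        # Underforecasts (diff < 0) risk understaffing, so they carry extra weight.
        weights = weights * np.where(diff < 0, under_penalty, 1.0)
        return np.average(np.abs(diff), weights=weights)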

My error definition should call a forecast good only when it does well on the thing I care about, and nothing more. But this brings up another problem: what if I care about many things?

Multi-Objective Math

One of the more interesting classes I took in college was in a field called multi-objective programming. Its purpose was to describe what to do when you had more than one objective. For example, in contact centers, our goal might be to reduce costs and hit service levels. Wait, those probably are our goals. But we might have more goals — developing plans that have reasonable occupancies or schedules and policies that allow a great work/life balance for our agents.

But it should be pretty obvious that many of these goals are directly contradictory to each other. The best way to lower your costs is to close down customer service. But our customers would hate that, so we cannot do that. A great way to improve agent satisfaction is to allow them to work whenever they want, but then our service would be highly erratic, so we cannot do that either. Multi-objective programming encompasses the art of laying bare the trade-offs through cool math, and is highly interesting.

We have the same sort of math problem when developing forecasts. We, as forecasters/workforce professionals, have more than one business objective. I want to make service level next week. I want to hire correctly. I want to effectively manage the rest of the day. I want to schedule training over the next 12 weeks. And one forecasting technique does not help you with all of this.

We want to be very accurate over the short and medium term, but we also care about our peak weeks, so we may need to build more than one forecast. I know that many companies have short- and long-term forecasting teams, but I would like to propose ways to stitch together different forecasts with different objectives.

Symphonies and Orchestras

I always forget this word, ensemble, and go straight to symphonies and orchestras. An ensemble is the math around taking different forecasts, built using different forecasting methods, and combining them into a single forecast.

There is a fair amount of literature in the math community about ensembles, showing that multiple forecasts mixed together may produce better results than a single forecast. This makes sense: averaging or weight-averaging different competing forecasts may reduce the variability of your result. But I am talking about something slightly different: creating an ensemble where the weights shift toward whichever forecast was built to be good in that part of the horizon.

If we define error as the deviation in our hold-out sample over the next 6 weeks, then we will develop a good 6-week forecast. Let's call it our "short-term forecast." If we develop a forecast where error at peak is the thing we minimize, then we should have a good peak forecast. We'll call this our "peak forecast." And if we define error as the average error over the course of a year, then we should have a forecast that is good, on average, week over week for the next 12 months. We'll call this our "12-month forecast."
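One rough way to get forecasts tuned to different objectives is to backcast several candidate methods against a hold-out sample and keep, for each objective, whichever candidate scores best on that objective's error definition. The sketch below assumes the candidate backcasts and error definitions already exist; the names are placeholders:

    def pick_best(candidate_backcasts, holdout_actuals, error_fn):
        """Return the name of the candidate whose backcast minimizes the given error definition.

        candidate_backcasts -- dict mapping a method name to its backcast over the hold-out weeks
        holdout_actuals     -- actual volumes for those same weeks
        error_fn            -- the error definition we care about for this objective
        """
        return min(candidate_backcasts,
                   key=lambda name: error_fn(holdout_actuals, candidate_backcasts[name]))

    # Hypothetical usage: the same candidates scored three different ways.
    # short_term_error, peak_error, and annual_error would be error definitions like
    # the ones sketched earlier, each restricted to the weeks that objective cares about.
    # best_short = pick_best(backcasts, actuals, short_term_error)
    # best_peak  = pick_best(backcasts, actuals, peak_error)
    # best_year  = pick_best(backcasts, actuals, annual_error)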

We could stitch these three forecasts together a bunch of ways, but the math literature says that it is better if we average them together somehow. So how about this?

Over the next 8 weeks, our forecast will be:

  • 80% * (short-term forecast) + 15% * (12-month forecast) + 5% * (peak forecast)

And during our traditional peak weeks we use:

  • 5% * (short-term forecast) + 20% * (12-month forecast) + 75% * (peak forecast)

And for the rest of the forecast we use:

  • 10% * (short-term forecast) + 80% * (12-month forecast) + 10% * (peak forecast)

The art, of course, is figuring out the weights and the date ranges of the various forecasts, but that is pretty fun to play with. What results is a single forecast that encompasses multiple objectives. The example I used may seem overly simple, and it likely is, but it illustrates the point: one method may not be good enough, and it may make sense to stitch forecasts together.
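To show how simple the stitching itself can be, here is a minimal sketch of the weighting scheme above in Python. The weights and the eight-week cutoff are the illustrative ones from the bullets; when a peak week falls inside the first eight weeks, this sketch lets the peak weighting win, which is an assumption on my part:

    import numpy as np

    def stitched_forecast(short_term, twelve_month, peak, weeks_from_now, peak_weeks):
        """Blend three forecasts with weights that change depending on the week.

        All three forecast arguments cover the same weeks.
        weeks_from_now -- 0-based position of each week relative to today
        peak_weeks     -- True/False flag marking the traditional peak weeks
        """
        short_term = np.asarray(short_term, dtype=float)
        twelve_month = np.asarray(twelve_month, dtype=float)
        peak = np.asarray(peak, dtype=float)
        weeks_from_now = np.asarray(weeks_from_now)
        peak_weeks = np.asarray(peak_weeks)

        # Default weights for "the rest of the forecast"
        w = np.tile([0.10, 0.80, 0.10], (len(short_term), 1))
        # Next 8 weeks: lean heavily on the short-term forecast
        w[weeks_from_now < 8] = [0.80, 0.15, 0.05]
        # Traditional peak weeks: lean heavily on the peak forecast (applied last, so it wins)
        w[peak_weeks] = [0.05, 0.20, 0.75]

        return w[:, 0] * short_term + w[:, 1] * twelve_month + w[:, 2] * peak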

Ric Kosiba, Ph.D. is a charter member of SWPP and Vice President of Genesys’ Decisions Group. He can be reached at Ric.Kosiba@Genesys.com or (410) 224-9883.