Advanced Topics

Simulation Modeling for Contact Centers

By Ric Kosiba, Vice President, Interaction Decisions Group, Interactive Intelligence

Something very cool has been happening in our industry on the algorithm front. Our contact centers have gotten so complex—multisite, multiskill, multichannel— that the old methods (Erlang C and “assumed occupancy” calculations) no longer accurately predict staff required or service expected. This complexity is driving the development of new, more robust and all-encompassing models for planning.

The only commonly known mathematical methods available to evaluate our complex contact centers are forms of discrete-event simulation modeling. In essence, these models answer a deceptively simple question: What service will we provide, given volume, handle time, and staffing scenarios?

In an omnichannel world (the new buzzword), combinations of call volume, back-office volume, email volume, outbound lead lists, chat sessions, or social media responses are becoming increasingly common. All of these may be handled by the operation with different priorities and certainly different handle times. Performance metrics may also vary across channel types—service levels and abandonment rates make sense to measure for inbound call centers, but not for email, back-office, or outbound contact types. This is a highly complex operation, and evaluating, scheduling, or planning for it is a correspondingly complex analytic problem.

What is simulation modeling and why is it important?

What is discrete-event simulation modeling? A simulation model is simply a computer program that mimics an operation or an activity. Probably the most commonly used simulation model is a computer game. My son, who at 15 may be one of the most prolific of simulation “analysts,” plays the Madden football game constantly on his Xbox. This game is a terrific simulation of a football game.

This simulation is constantly evaluating a series of discrete events (hence “discrete-event simulation”). When my son pushes the “pass” button on his game controller, the simulation will quickly calculate the odds of a successful pass using a series of attributes of the game at that moment: How far downfield is the receiver? How close is the defender to the receiver? Is the quarterback being pressured? Does the receiver have good hands? Each of these attributes of the game (and others) figures into a probabilistic model that determines the odds of a catch. The simulation model will “roll the dice” using a random number generator to determine if this catch is made or not; if the odds are high, the computer will likely generate a “catch.”
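To make the “roll the dice” mechanic concrete, here is a toy sketch in Python. The attribute names and weights are invented for illustration; no real game works exactly this way:

```python
import random

def catch_probability(separation_yds, qb_pressured, hands_rating):
    """Toy model: combine game-state attributes into the odds of a completion.
    All attributes and weights here are illustrative assumptions."""
    p = hands_rating                       # receiver skill, 0.0 to 1.0
    p *= min(1.0, separation_yds / 5.0)    # more separation helps, capped at 1
    if qb_pressured:
        p *= 0.6                           # pressure on the QB hurts the odds
    return p

def simulate_pass(separation_yds, qb_pressured, hands_rating, rng=random.random):
    # "Roll the dice": compare a uniform random draw against the computed odds.
    return rng() < catch_probability(separation_yds, qb_pressured, hands_rating)
```

Every discrete event in the simulation, from a pass attempt to a fumble, is resolved by the same pattern: compute odds from the current state, then draw a random number to decide the outcome.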

Simulation modeling is also our best method for analyzing complex networks. Like the Madden football game, it involves understanding the important attributes of our real-world environment, and measuring and utilizing these real-world attributes in a computer model. By explicitly drawing out our network and our contact center efficiencies and behaviors, we can accurately model and predict performance under different scenarios and forecasts.

This allows us to determine how contacts are likely routed and the service levels expected in a multisite, multichannel, and multiskill environment under different staffing scenarios. This is critical to do well. If we can accurately understand the service repercussions of various staffing scenarios, we can then develop efficient schedules and plans. If we cannot accurately know how contacts will be routed or accurately determine service levels and abandon rates, then we simply cannot put together efficient schedules or plans; we are only guessing.

How does simulation modeling of a contact center work?

Simulation modeling of contact centers is about developing a computer imitation of the contact center network. An accurate simulation model will understand which staffing groups can handle which contacts, and it will understand that different staff groups may have different handle times (even for the same contact type). It will understand the priorities of contact routing, it will understand the staffing efficiency of each staff group, and it will understand the customer patience of every call type.

How does the simulation model know these sorts of things?

The simulation model uses mathematical distributions of behaviors. This sounds complex, but it really isn’t. For example, handle times are not static. If we forecast a 200-second handle time, we know that not every caller will have exactly a 200-second handle time; each customer’s handle time will vary. Because we have great ACD data, we can actually draw the distribution of handle times (by center and staff type) by plotting individual handle times together on a graph. What percentage of handle times was between 190 seconds and 195 seconds? How many were between 195 seconds and 200 seconds?

In a similar way we can draw distributions of customer patience (a cumulative time-to-abandon graph), contact arrival and inter-arrival distributions, outbound probability-of-contact distributions, staff availability distributions, and other important distributions that describe the contact center behavior. For contact centers, most of the data needed is available—it simply needs to be compiled correctly.
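Building such an empirical distribution from raw ACD records is straightforward. Here is a minimal sketch that bins individual handle times into five-second buckets, exactly the “what percentage fell between 190 and 195 seconds” question above; the data values are invented for illustration:

```python
from collections import Counter

def handle_time_distribution(handle_times_sec, bin_width=5):
    """Build an empirical handle-time distribution from raw ACD records.
    Returns {bin_start: fraction of contacts in [bin_start, bin_start + bin_width)}."""
    bins = Counter((t // bin_width) * bin_width for t in handle_times_sec)
    n = len(handle_times_sec)
    return {start: count / n for start, count in sorted(bins.items())}

# Illustrative data: six observed handle times, in seconds
dist = handle_time_distribution([192, 194, 197, 199, 201, 203])
# dist maps bin start -> fraction, e.g. bin 190 holds 2 of 6 observations
```

The same function works for patience times, inter-arrival times, or any other per-contact measurement pulled from the ACD.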

Simulation models use this information by generating random and representative customers and random and representative phone agents. These computer-simulated agents and customers have all the attributes that we see in our real-world contact center data, including the routing structure and rules.

The computer then simulates the contact center and describes what would happen to our customer service (say, service levels, capture rates, or abandon rates) under varying contact volumes, handle times, and staffing scenarios. The computer pretends to be a call center, just like ours.
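Stripped to its bones, the loop looks like this. The sketch below is a deliberately minimal single-skill, single-queue simulation with exponential arrivals, handle times, and caller patience (all modeling assumptions of mine; a production model would use the empirical distributions discussed above and the full routing structure):

```python
import heapq
import random

def simulate_center(n_agents, arrival_rate, mean_handle, mean_patience,
                    horizon=8 * 3600, sl_threshold=20, seed=42):
    """Minimal single-queue discrete-event simulation (illustrative sketch).
    Times are in seconds; returns (service_level, abandon_rate)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), 0, "arrival")]  # (time, seq, kind)
    seq = 1
    free = n_agents
    queue = []                      # waiting callers: (arrival_time, abandon_deadline)
    answered = in_sl = abandoned = 0

    while events:
        now, _, kind = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "arrival":
            # schedule the next arrival, then queue this caller with a patience limit
            heapq.heappush(events, (now + rng.expovariate(arrival_rate), seq, "arrival"))
            seq += 1
            queue.append((now, now + rng.expovariate(1.0 / mean_patience)))
        else:                       # a handle completed; that agent is free again
            free += 1
        # whenever agents are free, serve the longest-waiting callers still on the line
        while free > 0 and queue:
            arrived, deadline = queue.pop(0)
            if deadline < now:      # caller hung up before an agent reached them
                abandoned += 1
                continue
            free -= 1
            answered += 1
            if now - arrived <= sl_threshold:
                in_sl += 1
            heapq.heappush(events, (now + rng.expovariate(1.0 / mean_handle), seq, "departure"))
            seq += 1

    total = answered + abandoned
    return (in_sl / total if total else 0.0, abandoned / total if total else 0.0)
```

Run it with two staffing scenarios and the service repercussions fall out directly: `simulate_center(20, 0.05, 180, 60)` handles a 9-erlang load comfortably, while `simulate_center(2, 0.05, 180, 60)` shows the queue collapsing into abandonment.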

How do you know the simulation model is working?

One of the very first lessons when taking a simulation class is about model validation. Scientists and engineers know that computer models are “garbage-in, garbage-out” tools and need to be proven to be accurate, whether it is simulation or the old-fashioned Erlang C calculation.

Model validation is how scientists prove to themselves and the world that a model works and is worthwhile.

The process is pretty simple; the model is tested on contact center ACD data, where the operational performance results are known. For contact center models, we have a lot of great data—ACD data is perfect for model validation. A validation works like this: take a time period that makes sense for your model. For example, if you are validating a traditional workforce management model, this might be 15 minutes; if you are validating a capacity planning model, the relevant time period would be a week. Take as much ACD data as you can gather, plug the known, historical handle times, contact volumes, and staff available into the simulation model, and check whether the service predicted by the model matches the service achieved in the contact center.

There will be error, but the error can be evaluated over many time periods and any bias can be noted. Is the model accurate? Does it track during high service periods and low service periods?
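The error-and-bias check is a few lines of arithmetic once you line up predicted and achieved results period by period. A sketch, with invented numbers standing in for real ACD history:

```python
def validation_report(predicted_sl, actual_sl):
    """Compare model-predicted service levels to achieved service levels
    over many historical periods (values as fractions, e.g. 0.80 = 80%)."""
    errors = [p - a for p, a in zip(predicted_sl, actual_sl)]
    bias = sum(errors) / len(errors)              # > 0: model is optimistic
    mae = sum(abs(e) for e in errors) / len(errors)  # typical size of the miss
    return {"bias": bias, "mean_abs_error": mae}

# Illustrative: four historical weekly periods
report = validation_report(predicted_sl=[0.82, 0.75, 0.90, 0.64],
                           actual_sl=[0.80, 0.78, 0.88, 0.60])
```

A model that tracks well shows a small mean absolute error and a bias near zero across both high-service and low-service periods; a consistent positive bias is exactly the overstaffing signal discussed below for Erlang C.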

This simple step is the most basic step that professional mathematical modelers take whenever we build a model of anything important. (I imagine even Madden football has some form of validation process—for example, does Calvin Johnson, a.k.a. Megatron, of Detroit consistently perform in the top 10% of receivers, as he does in reality?)

How often have you seen a validation of your workforce management system or your capacity planning system? If your vendor is not providing you with a validation, you should perform it yourself. It is very important to know whether your underlying mathematical models are accurate.

I’d like to point out a contact center truism: every contact center is different. I have been in the business of building these sorts of models since I was a young man, and these models are not “one size fits all.” A support center is different from a reservations center, which is different from a collections center. A support center is different from its competitor’s support center. All those distributions we discussed are different for every single contact center in existence. One size does not fit all. If your WFM system touts discrete-event simulation modeling, the model builders should be able to show you a validation; if they cannot, they are likely not really using such a model. Test the accuracy of your workforce management or capacity planning system yourself if your vendor doesn’t.

If your models are accurate, then publish the results around your company. If your executive team can be shown that the underlying process in your planning tool validates when service is great, when service is lousy, and everywhere in between, then they will have confidence in your planning process, the analyses that it generates, and the analyst (you) that provides them important results.

As a quick aside, for most contact centers, the old Erlang C equation does not prove to be accurate. We’ve developed validation graphs routinely over the years on call center data, and Erlang C consistently overstaffs by three percent or more (it never understaffs, as it is biased toward under-predicting service).
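For readers who want to run that comparison themselves, here is the textbook Erlang C calculation (the function and parameter names are mine). Note the built-in assumption that callers wait forever—no abandonment—which is one reason the formula under-predicts service in real centers where impatient callers leave the queue:

```python
import math

def erlang_c_wait_probability(agents, offered_load):
    """Classic Erlang C: probability an arriving call must wait, given the
    offered load a = arrival_rate * mean_handle_time (in erlangs).
    Assumes infinite caller patience (no abandonment). Requires agents > load."""
    a, n = offered_load, agents
    top = (a ** n / math.factorial(n)) * (n / (n - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(n)) + top
    return top / bottom

def erlang_c_service_level(agents, arrival_rate, mean_handle, threshold):
    """Fraction of calls answered within `threshold` seconds under Erlang C."""
    a = arrival_rate * mean_handle
    pw = erlang_c_wait_probability(agents, a)
    return 1.0 - pw * math.exp(-(agents - a) * threshold / mean_handle)
```

Plugging the same volumes, handle times, and staff into this formula and into a validated simulation, period by period, is precisely how the overstaffing bias shows up on a validation graph.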

Back to simulation: Cool things you can do with your models

The purpose of all of this is to ensure that our extremely complicated contact center networks have the “appropriate” resource levels. Staffing complex networks, with seasonal and variable handle times, volumes, sick time, agent attrition, contact rates, probability of sale, unscheduled absence, and so on, is not an easy problem, and it is certainly too hard to do well if the technology is a simplified spreadsheet.

What is “appropriate”? Therein lies the crux of the problem, and simulation modeling is the only technology known to help.

Contact center simulation models exist to evaluate different scenarios. Because there is variability in contact centers, we know—almost by definition—that forecasts will often be off and contact center demand (and supply) will fluctuate. One of the best uses of contact center simulation models is to evaluate the risk that this variability poses to the operation. For instance, with a simulation model, it is pretty easy to answer these sorts of questions:

  • “My customer service agents who process inbound calls and emails are seeing a 30% surge of email traffic. How many agents should we train? Should we have an email-only team?”
  • “Our power/insurance company’s customer support team gets hammered with calls any time there is a storm, power outages, or changes in billing. We are always caught off guard. How do I staff so that I have the flexibility to cover these unexpected events?”
  • “My boss is considering a hiring freeze. What will that do to abandon rates and sales over the next year?”
  • “Our bank sees significant volume any time a politician gets on the TV to discuss potential new mortgage programs. How do we staff for this?”
  • “I manage a reservations network. What should our service levels be? Should they vary across the year as fares change?”

I could go on and on. There are many benefits that we will all see as simulation modeling becomes more prevalent for staff planning and workforce management. First, because the modelers and users of these systems have proven their models’ accuracy through validation, the models will not have the overstaffing bias that Erlang and other simpler methods have. Simulation brings significant cost savings through more precise staffing.

Second, the contact center can be better evaluated for basic contact center policies, like appropriate service goals, how best to cross-skill agents, or the best hiring-versus-overtime decision. Finally, moving to simulation technologies will change the purpose of workforce planning, and it will change the relationship between decision makers and planning analysts. Simply put, the ability to have provably accurate scenarios evaluated quickly will mean that analysts become trusted advisors to their contact center executives, and will have a seat at the decision-making table. That’s you!

Ric Kosiba, PhD is a charter member of SWPP and Vice President of Interactive Intelligence’s Interaction Decisions Group. He can be reached at Ric.Kosiba@InIn.com or (410) 224-9883.