How to compare different metrics?

I was assigned the task of creating a system to balance the workforce among the three departments of a Customer Service Center.

These departments are:

  • Call Center (CC – incoming call handling)
  • Back Office (BO – incoming mail handling)
  • Front Office (FO – servicing personal visits)

The goal is to organize the workers in such a way that the daily Service Levels (SL) aren’t hurt in any department. E.g., if there is a peak in the CC’s workload, staff from the BO should be transferred to help the CC.

This sounds nice, but how do you compare emails with calls? How do you compare the CC’s SL definition (serve at least 80% of incoming calls within 20 seconds of waiting time) with the BO’s requirement to answer all cases before a certain deadline?

Comparing apples to oranges won’t work. Instead, a different level of abstraction was added: the idea is to introduce a model in each department which gives us two numbers: the current number of active workers and the number of workers required to meet that department’s performance criteria. Defining such a model for the CC and the FO is relatively straightforward using the Erlang-C formula. For the BO it is a little trickier, but a prediction module which is aware of the current mail distribution, the man-hours still available for today and the following days, and the expected number of new mails for the next X days will do.
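To make the CC/FO side concrete, here is a minimal Erlang-C sketch in Python. The function names and parameters are mine for illustration, not the project’s, and a production model would also account for shrinkage, abandonment and the like:

```python
import math

def erlang_c_wait_probability(agents, load):
    """Probability that an arriving call has to wait (Erlang-C),
    given `agents` servers and the offered `load` in Erlangs."""
    if load >= agents:
        return 1.0                      # unstable queue: every call waits
    inv_b = 1.0                         # 1/Erlang-B, built up recursively
    for k in range(1, agents + 1):      # recursion avoids huge factorials
        inv_b = 1.0 + inv_b * k / load
    erlang_b = 1.0 / inv_b
    rho = load / agents
    return erlang_b / (1.0 - rho + rho * erlang_b)

def required_agents(calls_per_hour, avg_handle_secs,
                    target_sl=0.8, answer_secs=20):
    """Smallest agent count whose predicted SL meets the target."""
    load = calls_per_hour * avg_handle_secs / 3600.0   # offered load, Erlangs
    agents = max(1, math.ceil(load))
    while True:
        p_wait = erlang_c_wait_probability(agents, load)
        sl = 1.0 - p_wait * math.exp(
            -(agents - load) * answer_secs / avg_handle_secs)
        if sl >= target_sl:
            return agents
        agents += 1
```

For example, `required_agents(240, 300)` sizes a desk receiving 240 calls per hour with a five-minute average handle time against the 80%/20s target. For the BO, the same two numbers can come from simple backlog arithmetic; a deliberately naive version (all names again hypothetical) might look like:

```python
def bo_required_workers(backlog_hours, forecast_hours,
                        hours_per_worker_per_day, days_to_deadline):
    """Spread all mail-handling work due before the deadline over the
    remaining days and divide by one worker's daily capacity."""
    daily_need = (backlog_hours + forecast_hours) / days_to_deadline
    return math.ceil(daily_need / hours_per_worker_per_day)
```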

OK, we have comparable metrics for each service unit; how do we allocate the workers among them?

For such an abstract problem one thing is guaranteed: the first solution won’t satisfy the end user. In my experience, the best way to handle such a case is to use an iterative approach and adapt the model to requirements that weren’t known before implementation. This raises two design goals:

  • Go for the simplest solution first, so that later refinements stay straightforward and complexity is added only when it is necessary
  • Automated testing of the allocation algorithm should be extremely easy: there should be an automated test for each requirement. Best to wrap the optimization in a pure function (output depends only on the input, no side effects); a test sketch follows right after this list
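Because the optimizer is a pure function, every requirement boils down to a plain input/output assertion. A minimal sketch, assuming a hypothetical `allocate(active, required)` that returns a new department-to-headcount mapping (a possible implementation follows the next paragraph):

```python
def test_cc_peak_pulls_staff_from_bo():
    # Hypothetical scenario: CC is two workers short, BO has slack.
    active   = {"CC": 10, "BO": 12, "FO": 5}
    required = {"CC": 12, "BO": 10, "FO": 5}

    result = allocate(active, required)

    assert result["CC"] == 12              # the shortfall is covered
    assert sum(result.values()) == 27      # total headcount is conserved
```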

For the optimization, a cost function was defined for calculating the cost of a given setup. The allocation is done by a greedy algorithm which repeatedly picks the move with the biggest improvement on the global cost, as long as a move with a positive improvement exists.
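A minimal sketch of that loop, assuming the cost of a setup is a quadratic penalty on understaffing (the real cost function encoded far more business rules) and matching the hypothetical `allocate` signature from the test above:

```python
from itertools import permutations

def cost(assigned, required):
    # Quadratic penalty on understaffing: one big shortfall hurts more
    # than the same shortfall spread across several departments.
    return sum(max(0, required[d] - assigned[d]) ** 2 for d in required)

def allocate(active, required):
    """Greedy reallocation: keep applying the single best
    'move one worker from src to dst' step while it lowers the cost."""
    assigned = dict(active)            # copy the input: keep the function pure
    current = cost(assigned, required)
    while True:
        best_move, best_cost = None, current
        for src, dst in permutations(assigned, 2):
            if assigned[src] == 0:
                continue
            trial = dict(assigned)
            trial[src] -= 1
            trial[dst] += 1
            c = cost(trial, required)
            if c < best_cost:
                best_move, best_cost = (src, dst), c
        if best_move is None:          # no move improves the global cost
            return assigned
        src, dst = best_move
        assigned[src] -= 1
        assigned[dst] += 1
        current = best_cost
```

Since each accepted move strictly lowers a non-negative integer cost, the loop is guaranteed to terminate; and since `allocate` only reads its arguments and returns a fresh dict, tests like the one above need no mocks or fixtures.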

After the initial deployment, a series of “Why wasn’t worker X moved to Role Y?” sort of questions, and a ton of refactoring with a continuously growing test suite, the software finally became a product.

This is a substantially simplified and idealized summary of the real problem without any direct references to the actual project.
