Contact centers rely on a pool of resources ready to help customers when they reach out via call, email, chat, or another channel. For contact centers, predicting the volume of incoming requests at specific times is critical for resource scheduling (very short- and short-term horizons) and resource management (mid- to long-term horizons). It takes time before an action taken within the workforce management framework becomes effective (and is eventually reflected in financial reports): moving people around, hiring, upskilling, or downsizing the pool of resources takes weeks, if not longer. Because of this, a forecast for longer horizons is needed, starting from one month ahead or more.
To build a high-quality forecast, it is necessary to gather relevant and valid data with predictive power. With such data, it is possible to employ ML technology like TIM RTInstantML, which can build models for time series data in a fraction of the time.
In our sample use case, we will showcase how TIM can predict the volume of requests for the next quarter, for each week ahead.
```python
import logging
import json
import os

import numpy as np
import pandas as pd
import plotly as plt
import plotly.express as px
import plotly.graph_objects as go

import tim_client
```
```python
with open('credentials.json') as f:
    credentials_json = json.load(f)                # loading the credentials from credentials.json

TIM_URL = 'https://timws.tangent.works/v4/api'     # URL to which the requests are sent
SAVE_JSON = False                                  # if True, JSON requests and responses are saved to JSON_SAVING_FOLDER
JSON_SAVING_FOLDER = 'logs/'                       # folder where the requests and responses are stored
LOGGING_LEVEL = 'INFO'
```
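For reference, a minimal sketch of the structure that credentials.json is expected to have; the keys match those read by the loading code above, and the values shown are placeholders:

```python
import json

# Hypothetical credentials.json content (placeholder values, not real credentials)
example = {
    "license_key": "XXXX-XXXX-XXXX",
    "email": "user@example.com",
    "password": "********"
}

# Round-trip through JSON to confirm the file would parse back with the same keys
text = json.dumps(example, indent=2)
parsed = json.loads(text)
print(sorted(parsed))  # ['email', 'license_key', 'password']
```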
```python
level = logging.getLevelName(LOGGING_LEVEL)
logging.basicConfig(level=level,
                    format='[%(levelname)s] %(asctime)s - %(name)s:%(funcName)s:%(lineno)s - %(message)s')
logger = logging.getLogger(__name__)
```
```python
credentials = tim_client.Credentials(credentials_json['license_key'],
                                     credentials_json['email'],
                                     credentials_json['password'],
                                     tim_url=TIM_URL)
api_client = tim_client.ApiClient(credentials)

api_client.save_json = SAVE_JSON
api_client.json_saving_folder_path = JSON_SAVING_FOLDER
```
```
[INFO] 2021-08-17 10:28:15,725 - tim_client.api_client:save_json:66 - Saving JSONs functionality has been disabled
[INFO] 2021-08-17 10:28:15,726 - tim_client.api_client:json_saving_folder_path:75 - JSON destination folder changed to logs
```
The dataset contains weekly aggregated information about request volumes, temperature, public holidays, the number of regular customers, marketing campaigns, the number of customers whose contracts will expire within the next 30 or 60 days, the number of invoices sent, invoicing hours, and hours open.
Structure of CSV file:
| Column | Description | Type | Availability |
| --- | --- | --- | --- |
| Sum of Volumes | Sum of all requests in the given week | Target | t+0 |
| Avg temperature | Mean temperature | Predictor | t+13 |
| Hours of public holidays | Public holiday days in the given week × 24 | Predictor | t+13 |
| Hours open | Total hours the center was/will be open to requests | Predictor | t+13 |
| Hours of mkting campaign | How many hours a campaign ran/will run | Predictor | t+13 |
| Avg contracts to expire in 30 days | Average no. of regular contracts that will expire within 30 days | Predictor | t+13 |
| Avg contracts to expire in 60 days | Average no. of regular contracts that will expire within 60 days | Predictor | t+13 |
| Avg no. of regular customers | Average no. of active contracts for regular customers | Predictor | t+13 |
| No. of invoicing hours | Total hours during which invoices were/will be sent | Predictor | t+13 |
| No. of invoices | No. of invoices sent | Predictor | t+13 |
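To illustrate the expected layout, a minimal sketch of parsing such a weekly CSV with pandas; the column names follow the table above, but the sample values and the subset of columns are made up for illustration:

```python
import io
import pandas as pd

# Hypothetical two-row sample mirroring the column layout described above
csv_sample = """Date,Sum of Volumes,Avg temperature,Hours of public holidays,Hours open,Hours of mkting campaign
2021-01-04,1520,3.1,0,60,10
2021-01-11,1478,2.7,24,55,0
"""

df = pd.read_csv(io.StringIO(csv_sample), parse_dates=['Date'])
print(df.shape)   # (2, 6) - two weekly rows, six columns
print(df.dtypes)  # Date parsed as datetime64, the rest numeric
```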
We want to predict the total volume of requests for each week of the next quarter (13 weeks). We assume that forecasted values of the predictors are available for the whole horizon; this situation is reflected in the values present in the CSV files. To simulate the out-of-sample period thoroughly (i.e., to always use the latest model for each forecast), each forecasting situation has its own CSV file reflecting the data available at that point in time.
The CSV files used in the experiments can be downloaded here as a ZIP package.

This is a synthetic dataset, generated by simulating the outcome of events relevant to the operations of a contact center.
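A minimal sketch of what a single forecasting situation looks like in the data, built on a toy weekly series (the column names follow the dataset; the values and series length are invented): the last 13 rows carry forecasted predictor values but no target yet, which is exactly the shape each per-situation CSV has.

```python
import numpy as np
import pandas as pd

# Toy weekly series: 30 Mondays of history plus predictors
rng = pd.date_range('2020-01-06', periods=30, freq='W-MON')
df = pd.DataFrame({
    'Date': rng,
    'Avg temperature': np.linspace(0.0, 20.0, 30),         # predictor, forecasted 13 weeks ahead
    'Sum of Volumes': np.arange(1000, 1030, dtype=float),   # target, known only up to t+0
})

# Simulate the out-of-sample quarter: target missing for the last 13 weeks
horizon = 13
df.loc[df.index[-horizon:], 'Sum of Volumes'] = np.nan

in_sample = int(df['Sum of Volumes'].notna().sum())
print(in_sample)                                 # 17 weeks of known target history
print(int(df['Avg temperature'].isna().sum()))   # 0 - predictors cover the full horizon
```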
```python
# Sample from the first CSV file
data = tim_client.load_dataset_from_csv_file('dataL/data2LB1.csv', sep=',')
data
```
| Date | Sum of Volumes | Avg temperature | Avg contracts to expire in 30 days | Avg contracts to expire in 60 days | Avg no. of regular customers | Hours of public holidays | Hours open | Hours of mkting campaign | No. of invoicing hours | No. of invoices |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| … | … | … | … | … | … | … | … | … | … | … |

137 rows × 11 columns
```python
target_column = 'Sum of Volumes'    # sum of requests per given week
timestamp_column = 'Date'
```
```python
fig = go.Figure()
fig.add_trace(go.Scatter(x=data.iloc[:]['Date'],
                         y=data.iloc[:][target_column]))
fig.update_layout(width=1300, height=700, title='Sum of Volumes')
fig.show()
```
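Once TIM returns a forecast for an out-of-sample quarter, its accuracy can be summarized against the actuals that later materialize. A minimal sketch using MAPE on hypothetical weekly values (the numbers below are invented, not results from this dataset):

```python
import numpy as np

# Hypothetical actual and forecasted weekly volumes for three weeks
actual = np.array([1500.0, 1480.0, 1530.0])
forecast = np.array([1470.0, 1500.0, 1510.0])

# Mean absolute percentage error across the evaluated weeks
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(round(mape, 2))  # 1.55
```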