Contact centers rely on a pool of resources ready to help customers when they reach out via call, email, chat, or another channel. Predicting the volume of incoming requests at specific times is a critical input to resource scheduling (very short- and short-term horizons) and resource management (mid- to long-term horizons). A typical short-term task is predicting volumes for the next 7 days, hour by hour. A high-quality forecast brings confidence that the FTEs (full-time equivalents, a measure of the workload of an employed person) planned for the next week are just right for delivering on SLAs. It also brings other benefits, such as higher confidence when planning absences (vacation, education, etc.) and better employee morale, since staff are not overloaded by "sudden" volume peaks.
Building a high-quality forecast requires gathering relevant, valid data with predictive power. With such data in place, ML technology like TIM RTInstantML can build models for time series data in a fraction of the usual time.
In this sample use case we showcase how TIM can predict request volumes for the next 7 days on an hourly basis.
Business objective | Business value | KPI |
---|---|---|
Reduce risk of resource shortage | Optimal resource planning | - |
Reduce risk of not meeting SLAs | Better customer relations, lower/no penalties | - |
Reduce effort spent on forecasting | Free up capacity of highly skilled personnel | - |
import logging
import json

import numpy as np
import pandas as pd

import plotly.express as px
import plotly.graph_objects as go

import tim_client
with open('credentials.json') as f:
    credentials_json = json.load(f)  # loading the credentials from credentials.json
TIM_URL = 'https://timws.tangent.works/v4/api' # URL to which the requests are sent
SAVE_JSON = False # if True - JSON requests and responses are saved to JSON_SAVING_FOLDER
JSON_SAVING_FOLDER = 'logs/' # folder where the requests and responses are stored
LOGGING_LEVEL = 'INFO'
level = logging.getLevelName(LOGGING_LEVEL)
logging.basicConfig(level=level, format='[%(levelname)s] %(asctime)s - %(name)s:%(funcName)s:%(lineno)s - %(message)s')
logger = logging.getLogger(__name__)
credentials = tim_client.Credentials(credentials_json['license_key'], credentials_json['email'], credentials_json['password'], tim_url=TIM_URL)
api_client = tim_client.ApiClient(credentials)
api_client.save_json = SAVE_JSON
api_client.json_saving_folder_path = JSON_SAVING_FOLDER
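As a quick sanity check, we can verify that credentials.json really provides the three keys the Credentials call above expects. This is only an illustrative check (the key names come from the call above; the check itself is not part of the TIM client API).

required_keys = {'license_key', 'email', 'password'}  # keys read by tim_client.Credentials above
missing_keys = required_keys - set(credentials_json)
if missing_keys:
    raise ValueError(f'credentials.json is missing keys: {missing_keys}')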
The dataset contains information about request volumes, temperature, public holidays, the number of regular customers, marketing campaigns, the number of customers whose contracts will expire within the next 30 or 60 days, the number of invoices sent, a flag indicating whether invoices are sent at a given timestamp, and a flag indicating whether the contact center is open at a given timestamp.
The data is sampled hourly.

Structure of the CSV file:
Column name | Description | Type | Availability |
---|---|---|---|
Date | Timestamp | Timestamp column | |
Volumes | No. of requests | Target | t+0 |
Temperature | Temperature in Celsius | Predictor | t+168 |
PublicHolidays | Binary flag for holidays | Predictor | t+168 |
IsOpen | Binary flag to show if contact center is open at given timestamp | Predictor | t+168 |
IsMktingCampaign | Binary flag to show if product team is running marketing campaign at given timestamp | Predictor | t+168 |
ContractsToExpireIn30days | No. of regular contracts that will expire within 30 days | Predictor | t+168 |
ContractsToExpireIn60days | No. of regular contracts that will expire within 60 days | Predictor | t+168 |
RegularCustomers | No. of active contracts for regular customers | Predictor | t+168 |
InvoiceDay | Binary flag to show if invoices are sent at given timestamp | Predictor | t+168 |
InvoicesSent | No. of invoices sent at given timestamp | Predictor | t+168 |
We want to predict volumes for the next 7 days, for each hour, i.e. 7 × 24 = 168 hourly samples ahead. The time of prediction is 23:00 every day, and this situation is reflected in the values present in the CSV file. TIM will simulate this situation throughout the whole out-of-sample interval to calculate accuracy metrics; a quick check of this layout is sketched below, right after the data is loaded.

The CSV file used in the experiment can be downloaded here.

This is a synthetic dataset generated by simulating the outcome of events relevant to the operations of a contact center.
data = tim_client.load_dataset_from_csv_file('data2B.csv', sep=',')  # load the input dataset from the CSV file
data
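The structure table above states that all predictors are available 168 hours ahead, while the target is only known up to the time of prediction. The sketch below inspects how this is reflected at the end of the loaded file; it assumes that data behaves as a pandas DataFrame (it is indexed with .iloc further below) and uses the column names 'Date' and 'Volumes' from the structure table. The expectation of roughly 168 target-less rows at the end is an assumption based on the t+168 availability.

# Illustrative check of the out-of-sample layout described above.
missing_target_rows = int(data['Volumes'].isna().sum())              # rows where the target is not yet known
last_known_target = data.loc[data['Volumes'].notna(), 'Date'].max()  # last timestamp with an observed target
print(f'Rows without a target value: {missing_target_rows}')
print(f'Last timestamp with a known target: {last_known_target}')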
target_column = 'Volumes'
timestamp_column = 'Date'
fig = go.Figure()
fig.add_trace(go.Scatter(x=data[timestamp_column], y=data[target_column]))
fig.update_layout(width=1300, height=700, title='Volumes')
fig.show()
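With the data explored, the next step in this use case is to let TIM build a model and return the 168-sample-ahead forecast. The snippet below is only a sketch: the method name prediction_build_model_predict and the configuration keys (usage, predictionTo, baseUnit, offset, extendedOutputConfiguration) follow the pattern used in other TIM demo notebooks and are assumptions here; please check the TIM documentation for the exact configuration supported by your client version.

# Sketch of a forecasting request (assumed method and configuration names).
configuration_backtest = {
    'usage': {
        'predictionTo': {
            'baseUnit': 'Sample',  # predict in units of samples (hours for this dataset)
            'offset': 168          # 7 days x 24 hours ahead
        }
    },
    'extendedOutputConfiguration': {
        'returnExtendedImportances': True  # also return predictor importances
    }
}

backtest = api_client.prediction_build_model_predict(data, configuration_backtest)
backtest.status  # should report a finished model-building and prediction run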