An important setting in anomaly detection is the sensitivity to anomalies. What exactly constitutes anomalous behavior is often not unambiguously defined; instead, anomaly detection measures the extent to which a given observation can be considered anomalous. Sensitivity then supports the actual decision about which observations are anomalous.

In anomaly detection, every observation is characterized by an anomaly indicator that expresses how anomalous the observation is. The sensitivity parameter defines the decision boundary, allowing the anomaly detector to decide, for each observation, whether it is anomalous or not.

Sensitivity is defined as the percentage of the (training) data that is expected to be anomalous. It expresses how often the user would like to have been alerted to anomalies in the training data. For example, a sensitivity of 3% marks the 3% most anomalous observations in the training data as anomalies; the user would have been alerted in these 3% of cases.
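The relationship between sensitivity and the decision boundary can be sketched as a percentile cut on the anomaly indicator. This is an illustrative NumPy sketch (the function name and the synthetic indicator are assumptions for the example, not TIM's actual implementation):

```python
import numpy as np

def threshold_from_sensitivity(indicator, sensitivity_pct):
    """Return the decision threshold: observations whose anomaly
    indicator exceeds it are flagged as anomalies."""
    # The (100 - sensitivity) percentile leaves roughly
    # sensitivity_pct percent of the points above the threshold.
    return np.percentile(indicator, 100.0 - sensitivity_pct)

# Synthetic anomaly indicator for 100 training observations.
rng = np.random.default_rng(0)
indicator = rng.normal(0.0, 1.0, 100)

threshold = threshold_from_sensitivity(indicator, 3.0)
flagged = indicator > threshold
print(int(flagged.sum()))  # → 3 (the 3% most anomalous of 100 points)
```

With 100 distinct indicator values, a 3% sensitivity flags exactly the 3 observations with the highest indicator, matching the example above.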

It is recommended to choose training data that contains as few anomalies as possible while still being long enough for TIM to find the best possible model. If you have labelled data, this can be accomplished by omitting the anomalous observations from the training data and setting the sensitivity to 0%; the normal-behavior model is then not affected by anomalous points at all. Of course, most often you do not know which points are anomalous. In any case, the percentage of anomalies should not exceed 5%; otherwise, supervised learning would be a more appropriate approach than anomaly detection.

To better understand the link between your data, the sensitivity, the anomaly indicator and the threshold, here are some simple examples.

Let's say we have data and a corresponding anomaly indicator like in the example below:


The data has 100 points, 7 of which are anomalies: one global outlier, 5 points forming a collective anomaly, and one contextual anomaly. It is vital to find the right sensitivity; otherwise, too many or too few points will be reported as anomalous (false positives / false negatives).

The sensitivity parameter determines the decision boundary (threshold) for a given anomaly indicator. The higher the sensitivity, the more points are declared anomalies. In this graph, you can see different sensitivity settings: sensitivities of 0%, 1%, 6% and 7% flag zero, one, six and seven points as unusual, respectively. As we have 7 anomalies out of 100 points in this case, the correct sensitivity is 7%.
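The counts above can be reproduced with a small sketch. The synthetic indicator below is an assumption (93 ordinary points plus 7 with clearly elevated indicator values, standing in for the 7 anomalies in the example); the percentile cut is illustrative, not TIM's actual algorithm:

```python
import numpy as np

# Hypothetical anomaly indicator: 93 normal points plus 7 points
# with much higher values, mimicking the 100-point example.
rng = np.random.default_rng(1)
indicator = np.concatenate([
    rng.uniform(0.0, 1.0, 93),
    5.0 + rng.uniform(0.0, 1.0, 7),
])

counts = {}
for sensitivity in (0, 1, 6, 7):
    # Points strictly above the (100 - sensitivity) percentile
    # are flagged as anomalies.
    threshold = np.percentile(indicator, 100 - sensitivity)
    counts[sensitivity] = int((indicator > threshold).sum())

print(counts)  # → {0: 0, 1: 1, 6: 6, 7: 7}
```

At 7% sensitivity the threshold falls just below the 7 injected points, so exactly the intended anomalies are flagged; lower sensitivities miss some of them, and 0% never alerts.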

Automatic sensitivity estimation

Since API 4.2

The estimation of the sensitivity parameter has been automated, and automatic estimation is the default setting.

There are two major reasons for automating the estimation of sensitivity:

  • it increases the potential for large-scale deployment
  • in most cases, the number of anomalies is unknown

As noted above, sensitivity is a very important parameter in the anomaly detection process that has to be set correctly, as it affects how many anomalies are found. Finding the right sensitivity can be very tedious work with unlabeled data (unsupervised anomaly detection), which is the common case. Without the possibility to set this parameter automatically, it would not be realistic to serve many time series in an automated way.

The automatic estimation is calculated from the anomaly indicator, so it is the most appropriate value from the perspective of the anomaly indicator's values.
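TIM's exact estimation method is not described here; purely as an illustration of estimating sensitivity from the indicator alone, one common heuristic places the boundary at the largest gap in the upper tail of the sorted indicator values. Everything in this sketch (the function name, the 10% tail window, the synthetic data) is an assumption for the example:

```python
import numpy as np

def estimate_sensitivity(indicator):
    """Illustrative heuristic (not TIM's actual algorithm): put the
    decision boundary at the widest gap in the upper tail of the
    sorted anomaly indicator, and report the implied sensitivity
    as a percentage of the data."""
    s = np.sort(indicator)
    n = len(s)
    tail = s[int(0.9 * n):]            # inspect only the top 10 %
    gaps = np.diff(tail)               # spacing between neighbors
    k = int(np.argmax(gaps))           # index of the widest gap
    n_anomalous = len(tail) - (k + 1)  # points above the gap
    return 100.0 * n_anomalous / n

# Synthetic indicator: 95 ordinary points and 5 clearly separated ones.
rng = np.random.default_rng(2)
indicator = np.concatenate([rng.uniform(0, 1, 95), rng.uniform(4, 5, 5)])
print(estimate_sensitivity(indicator))  # → 5.0
```

When the anomalous indicator values are well separated from the normal ones, such a gap-based rule recovers the "natural" sensitivity without any labels, which is the spirit of the automatic estimation.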

Of course, if you would like to be more conservative (you do not want to be alerted so often, only in the most anomalous cases, because false positives cost you significantly more than false negatives) or, on the contrary, your domain requires being alerted more often (false negatives cost you significantly more than false positives), you can manually set the sensitivity to a lower or higher percentage.