
Error and warning messages

This is the list of all error and warning messages which TIM can return. The messages are returned in the resultExplanations part of the JSON response.

Example:

"resultExplanations": [
    {
      "index": 1,
      "message": "Invalid request: in [configuration.maxModelComplexity] : not <= 100"
    }
  ]

Apart from these, there are also error messages generated automatically by the JSON schema validator (like the one in the example above).

In the following subsections you can see the list of messages. Note that the string ### in the actual message is replaced with dynamic content (e.g. a predictor name, a timestamp, ...).
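Since the messages arrive inside the resultExplanations array and contain dynamic content where the documentation shows ###, client code that wants to react to a specific message has to match against the template rather than the literal string. A minimal sketch of that, assuming only the resultExplanations shape shown above (the response body and the predictor name "Temperature" are made-up examples):

```python
import json
import re

# Hypothetical response body; only the resultExplanations shape is
# taken from the documentation, the concrete message is an example.
response_body = '''
{
  "resultExplanations": [
    {"index": 1,
     "message": "Predictor name \\"Temperature\\" is not unique!"}
  ]
}
'''

def matches_template(message, template):
    # The placeholder ### (optionally numbered: ###1, ###2) stands for
    # dynamic content, so each occurrence becomes a wildcard group.
    pattern = re.sub(r"\\#\\#\\#\d?", "(.+?)", re.escape(template))
    return re.fullmatch(pattern, message) is not None

for item in json.loads(response_body)["resultExplanations"]:
    print(item["index"], item["message"])
    print(matches_template(item["message"],
                           'Predictor name "###" is not unique!'))  # True
```

The same matches_template helper works for multi-placeholder messages such as "Timestamp ###1 does not match detected ###2", since each numbered placeholder is replaced by its own wildcard group.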

Error messages

  1. Predictor name "###" is not unique!
  2. Predictor name is empty string!
  3. Invalid predictor type for predictor "###".
  4. Target is not available!
  5. Multiple target variables!
  6. Multiple public holiday variables!
  7. Invalid values for holiday variable! Only 0 and 1 is allowed.
  8. All predictors must match timestamp spacing of all other predictors. Check the documentation on data properties.
  9. Can not predict with sampling rate measured in months, model uses seconds or vice versa.
  10. Wrong timestamp for predictor "###".
  11. Sampling rate must be positive divisor of seconds per day.
  12. No datablocks for predictor "###".
  13. Timestamps must begin spacing with 00:00:00. Check the documentation on data properties.
  14. Unsupported timestamp format for ###.
  15. Timestamp ###1 does not match detected ###2.
  16. Insufficient amount of data. Minimal length of one data block is ### samples.
  17. There is/are missing values that do not allow your request from being executed.
  18. No timestamps to predict.
  19. Insufficient amount of data for building a model.
  20. No valid anomaly indicator for automatic sensitivity estimation.
  21. Parsing model failed.
  22. No valid dictionary for building AD model.
  23. Invalid Prediction From: baseUnit cannot be shorter than data sampling rate
  24. Invalid Prediction To: baseUnit cannot be shorter than data sampling rate
  25. Invalid Update Until: baseUnit cannot be shorter than data sampling rate
  26. Invalid Usage Time: This parameter is mandatory.
  27. Invalid Usage Time: Cron has invalid symbols.
  28. Invalid Usage Time: Cron has to have exactly 4 symbols.
  29. Invalid Usage Time: Cron "###" can not be parsed to integers.
  30. Invalid Usage Time: Cron symbol is out of range ###.
  31. Invalid Update Time: This parameter is mandatory.
  32. Invalid Update Time: Cron has invalid symbols.
  33. Invalid Update Time: Cron has to have exactly 4 symbols.
  34. Invalid Update Time: Cron "###" can not be parsed to integers.
  35. Invalid Update Time: Cron symbol is out of range ###.
  36. Invalid 'usage', nothing to predict.
  37. Invalid 'usage', Prediction usage must be a subset of Model Building usage.
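The Usage Time and Update Time messages above describe a fixed sequence of checks on a 4-symbol cron string. A minimal sketch of a client-side pre-check that reproduces those messages, assuming the four fields are minute, hour, day-of-month and month with conventional cron ranges (the documentation states only the symbol count, the integer check, and the range check, not the field semantics):

```python
def validate_tim_cron(cron):
    """Return the first matching error message, or None if valid.

    Mirrors the checks listed above; the field order (minute, hour,
    day-of-month, month) and their ranges are assumptions.
    """
    symbols = cron.split()
    if len(symbols) != 4:
        return "Cron has to have exactly 4 symbols."
    assumed_ranges = [(0, 59), (0, 23), (1, 31), (1, 12)]
    for symbol, (low, high) in zip(symbols, assumed_ranges):
        if symbol == "*":  # wildcard: any value of this field
            continue
        try:
            value = int(symbol)
        except ValueError:
            return f'Cron "{symbol}" can not be parsed to integers.'
        if not low <= value <= high:
            return f"Cron symbol is out of range {symbol}."
    return None

print(validate_tim_cron("0 6 * *"))   # None: valid, every day at 06:00
print(validate_tim_cron("0 25 * *"))  # out-of-range hour
```

Running such a check before submitting a request avoids a round trip to the API for configuration mistakes that are detectable locally.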

Other errors

  1. Internal error, please contact support (support@tangent.works).
  2. We can't process your request at the moment. Please contact support@tangent.works with the following UUID: ###

Warning messages

  1. Using weak model for some timestamp, add more data according to the data offsets requirements. This is an important message that you should always pay attention to. Your models were built expecting certain data to be available to evaluate the prediction, but that data is now missing. If you get this message when using a prebuilt model, check whether you are supplying enough past data to make the prediction.
  2. Using weak model for some timestamp, there is a gap in some of the predictors in its most recent records. Try changing the interpolationLength.
  3. Can not use polynomial dictionary alone. Enabling offsets.
  4. Model complexity is influenced by small amount of data.
  5. Predictor ###1 has an outlier value that is ###2% times range (max-min) higher than the in-sample maximum.


  6. Predictor ###1 has an outlier value that is ###2% times range (max-min) lower than the in-sample minimum.
  7. Predictor ### contains an outlier or a structural change in its most recent records. Sometimes the data used for model building and prediction are faulty, but a simple comparison of minimum and maximum values is not enough to detect it. That is why TIM performs additional checks, such as a moving-variance check on the last 10 percent of your dataset, to make sure that the nature of the data is not significantly different in the most recent records preceding the prediction you would like to make. This does not change the model building / prediction process itself; it is only informative.


  8. Not returning Aggregated Predictions, the response would be too big.
  9. Predictor ### has too many missing values and will be discarded.
  10. Sampling rate for predictors is not equal to the sampling rate stored in the model.
  11. Predictor ###2 has a value missing for timestamp ###1. This warning reports only the first missing value TIM finds in the dataset; there may be more.
  12. Your backtesting length is bigger than 90 percent of the training data; models and predictions might be inaccurate or empty.
  13. Try using a backtesting length bigger than the number of records per day. Backtesting replicates day-by-day predictions at the same time of day your target currently ends. This means that if your data ends at 2 AM, the first backtested sample will be in the previous day, right after 2 AM. Make sure that the backtesting length spans at least that far back to see it.
  14. Your dataset exceeds the memory limit by ###1%. Dropping ###2% of oldest observations. If you wish to keep all observations, try setting memoryPreprocessing to false. If you override this setting, the worker might run out of memory and the process will fail; however, this is not necessarily the case, especially if many of your predictors do not have high predictive power.
  15. Simplified estimation of the sensitivity was used for the feature ###.
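The missing-value warning above reports only the first gap TIM finds, so it can be useful to locate all gaps locally before uploading. A minimal sketch under assumed conditions: the data is a sorted list of (timestamp, value) pairs with a known, regular sampling rate (the helper name and data shape are illustrative, not part of the TIM API):

```python
from datetime import datetime, timedelta

def first_missing_timestamp(rows, sampling_rate):
    """Return the first timestamp with a missing or absent value.

    `rows` is a sorted list of (timestamp, value) pairs; like the
    warning above, this reports only the first problem found.
    """
    expected = rows[0][0]
    for timestamp, value in rows:
        if timestamp != expected:
            return expected  # an expected timestamp is absent entirely
        if value is None:
            return timestamp  # timestamp present, but value missing
        expected = timestamp + sampling_rate
    return None

rows = [
    (datetime(2021, 1, 1, 0, 0), 1.0),
    (datetime(2021, 1, 1, 1, 0), 2.0),
    # 02:00 is missing entirely
    (datetime(2021, 1, 1, 3, 0), None),  # value missing at 03:00
]
print(first_missing_timestamp(rows, timedelta(hours=1)))
# → 2021-01-01 02:00:00
```

Extending the sketch to collect every gap, rather than stopping at the first, gives a fuller picture than the single warning TIM returns.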