Error and Warning Messages

This is the list of all error and warning messages that TIM can return. Apart from these, there are also error messages auto-generated by the JSON schema validator (like the one in the example above). Note that the string ### in the actual message is replaced with dynamic content (e.g. a predictor name, a timestamp, ...).

Model building and forecasting error messages

  1. Target variable is different in the data than in the model.
  2. Model uses a different holiday variable than the data.
  3. Model uses a different sampling period than the dataset.
  4. No training data provided.
  5. PredictionTo or predictionFrom time unit is shorter than the sampling rate.
  6. PredictionTo or predictionFrom time unit of data sampled in months can only be given in samples (S).
  7. PredictionTo or predictionFrom time unit can not be given in months if data sampled differently than in months.
  8. You have chosen to forecast no timestamps. Check your predictionTo and predictionFrom.
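
Messages 5-7 above all concern the relationship between the time unit used in predictionTo/predictionFrom and the dataset's sampling period. The following is a hypothetical sketch of that validation logic, not TIM's actual implementation; the function name, unit codes, and the seconds-per-unit table are assumptions made for illustration only.

```python
from typing import Optional

# Assumed unit codes for illustration: "S" = samples, "H" = hours,
# "D" = days, "M" = months. Seconds per unit for the comparable units.
UNIT_SECONDS = {"H": 3600, "D": 86400}

def check_prediction_unit(unit: str, sampling_period_s: int,
                          sampled_in_months: bool) -> Optional[str]:
    """Return an error message mirroring messages 5-7, or None if valid."""
    if sampled_in_months:
        # Monthly data: only sample-based offsets ("S") are allowed.
        if unit != "S":
            return ("PredictionTo or predictionFrom time unit of data sampled "
                    "in months can only be given in samples (S).")
        return None
    if unit == "M":
        return ("PredictionTo or predictionFrom time unit can not be given in "
                "months if data sampled differently than in months.")
    if unit != "S" and UNIT_SECONDS.get(unit, 0) < sampling_period_s:
        return ("PredictionTo or predictionFrom time unit is shorter than "
                "the sampling rate.")
    return None

# Example: requesting an hourly horizon on daily-sampled data fails.
print(check_prediction_unit("H", 86400, False))
# Example: a daily horizon on daily-sampled data is fine.
print(check_prediction_unit("D", 86400, False))
```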

Job execution error messages

  1. Tabular result could not be saved to database.
  2. Model could not be saved to database.
  3. Root cause analysis could not be saved to database.
  4. Error measures could not be saved to database.
  5. Worker version could not be saved to database.
  6. Job info could not be saved to database.
  7. Getting job parameters for job ### failed.
  8. Getting model for job ### failed.
  9. Getting job ### failed.
  10. Getting job data failed.
  11. Getting dataset id failed.
  12. Getting latest version of dataset ### failed.
  13. Getting metadata of dataset version ### of dataset ### failed.
  14. Getting slice from dataset version ### of dataset ### failed.
  15. Getting license metadata failed.
  16. Parsing model failed.
  17. Verifying model signature failed. The model was modified.
  18. Internal error, please contact support (
  19. We can't process your request at the moment. Please contact

Model building and forecasting warning messages

  1. Could not evaluate some production forecasts. There might be a gap in the most recent records of some of the predictors. Try changing the imputationLength.
  2. Predictor ###1 has an outlier value that is ###2% times range (max-min) higher than the in-sample maximum.
  3. Predictor ###1 has an outlier value that is ###2% times range (max-min) lower than the in-sample minimum.
  4. Predictor ### contains an outlier or a structural change in its most recent records. Sometimes the data used for model building and prediction are faulty, but a simple comparison of minimum and maximum values is not enough to detect this. That is why we perform additional checks, such as a moving variance check on the last 10 percent of your dataset, to make sure that the nature of the data is not significantly different in the most recent records preceding the prediction you would like to make. This will not cause any changes to the model building / prediction process itself; it is only informative.
  5. Not returning in-sample forecasts, the response would be too big.
  6. Predictor ### has too many missing values and will be discarded.
  7. Your backtesting length is bigger than 90 percent of the training data. Models and predictions might be inaccurate or empty.
  8. Your dataset exceeds the memory limit by ###1%. Dropping ###2% of oldest observations. If you wish to keep all observations, try setting memoryPreprocessing to false. If you override this setting, it might make the worker run out of memory, and the process will fail. However, this does not have to be the case - especially if many of your predictors do not have high predictive power.
  9. Your rolling window is not divisible by 1 day, but your dataset and Model Zoo have a daily cycle. Some backtest forecasts might not be evaluated.
  10. Target not provided, no new model will be trained.
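
Warnings 2-4 above describe two kinds of outlier detection: comparing a value against the in-sample range, and a moving variance check on the most recent records. The following is a hypothetical sketch of both ideas, not TIM's actual implementation; the function names, the 10 percent tail fraction, and the variance-ratio threshold of 4 are assumptions chosen for illustration.

```python
import statistics

def range_outlier(in_sample, value):
    """Return the signed distance of `value` outside the in-sample range,
    expressed as a multiple of that range (max - min): positive above the
    maximum, negative below the minimum, 0.0 when inside the range."""
    lo, hi = min(in_sample), max(in_sample)
    rng = hi - lo
    if value > hi:
        return (value - hi) / rng
    if value < lo:
        return -(lo - value) / rng
    return 0.0

def recent_variance_shift(series, tail_fraction=0.10, threshold=4.0):
    """Flag a possible structural change when the variance of the last
    `tail_fraction` of the series differs from the variance of the rest
    by more than a factor of `threshold` (assumed values, illustrative)."""
    split = int(len(series) * (1 - tail_fraction))
    head, tail = series[:split], series[split:]
    v_head, v_tail = statistics.pvariance(head), statistics.pvariance(tail)
    if v_head == 0:
        return v_tail > 0
    ratio = v_tail / v_head
    return ratio > threshold or ratio < 1 / threshold

# A value 50% of the range above the in-sample maximum:
print(range_outlier([0, 10], 15))
# A calm series whose tail suddenly becomes volatile:
print(recent_variance_shift([1, 2] * 50 + [1, 50] * 5))
```

Note that a check like `recent_variance_shift` is purely informative, in line with warning 4: it flags suspicious recent records without altering model building or prediction.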