FAQ

Is TIM a model? If so, what kind of model?

No, TIM is a model-building strategy: it generates a unique linear-in-parameters model for the data of interest, optimizing the model's structure and its parameters at the same time. Consequently, models for two different datasets may be substantially different.

Is TIM a machine learning library?

No, TIM is an end-to-end automated model building, forecasting and anomaly detection solution for time series. TIM automates feature engineering, model building and model selection adapted to the desired prediction horizon and data availability.

Is TIM pure ML, or do you use statistical models as well?

TIM is a single modelling strategy, not a collection of different models. It does not impose any statistical assumptions on the data, but several parts of the algorithm draw on both classical statistical models and ML models.

How do you explain the ML methods to practitioners? I found it difficult to even explain ARIMA, and ML is typically even more complicated.

The best part about TIM is that even though the core of the algorithm may be complicated, the resulting model is a simple linear regression that uses the original predictors and their transformations. A linear model is as simple as modelling gets. The transformations are usually very intuitive as well (for example, periodic waves with a daily frequency, or an indicator of whether it was Friday two hours ago).
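
To illustrate what such a model can look like, here is a hypothetical sketch with made-up feature names and data, not TIM's actual feature set:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly data: a target plus a temperature predictor.
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
rng = np.random.default_rng(0)
df = pd.DataFrame({"temperature": 10 + 5 * rng.standard_normal(len(idx))}, index=idx)
df["target"] = 2.0 * df["temperature"] + rng.standard_normal(len(idx))

# Intuitive transformations of the kind mentioned above.
features = pd.DataFrame(index=idx)
features["daily_sin"] = np.sin(2 * np.pi * idx.hour / 24)   # periodic wave with daily frequency
features["daily_cos"] = np.cos(2 * np.pi * idx.hour / 24)
features["was_friday_2h_ago"] = (idx - pd.Timedelta(hours=2)).dayofweek == 4
features["temperature_lag12"] = df["temperature"].shift(12)  # temperature(t-12)

# The resulting model is an ordinary linear regression on these features.
X = features.dropna()
y = df.loc[X.index, "target"].to_numpy()
X_mat = np.column_stack([np.ones(len(X)), X.to_numpy(dtype=float)])
beta, *_ = np.linalg.lstsq(X_mat, y, rcond=None)
print(dict(zip(["intercept", *X.columns], beta.round(3))))
```

Every term enters the model linearly, so each coefficient can be read off and interpreted directly.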

How does TIM assess and present forecast uncertainty?

TIM forecasts come with prediction intervals. You might know these as "confidence intervals", but since TIM does not impose any statistical assumptions on the data or on the distribution of the errors, "prediction intervals" is the mathematically correct term here.
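
TIM's exact interval construction is not spelled out here, but a common distribution-free approach, shown purely as an illustrative sketch, is to take empirical quantiles of out-of-sample residuals:

```python
import numpy as np

def empirical_prediction_interval(residuals, point_forecast, coverage=0.9):
    """Distribution-free interval built from past out-of-sample residuals."""
    lo_q, hi_q = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    lower = point_forecast + np.quantile(residuals, lo_q)
    upper = point_forecast + np.quantile(residuals, hi_q)
    return lower, upper

# Example: residuals collected from earlier forecasts, new point forecast of 100.
rng = np.random.default_rng(1)
past_residuals = rng.normal(0, 5, size=500)
print(empirical_prediction_interval(past_residuals, point_forecast=100.0))
```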

I would be very interested in finding out how decisions are made to re-estimate models using "recent data". How are "recent" and "relevant" data determined for each model? Is there an interaction between this decision and the type of forecasting/ML model being estimated?

TIM is very sensitive to new data points. You don't have to be afraid: the model as a whole is robust, meaning that slightly changing an observation won't change the forecast produced by the model much either. But the structure of the model itself is loose and might change even with a single new observation. If model A uses temperature(t-12) and rain(t-2), retraining it with one more observation might change the structure to use temperature(t-11) and rain(t-1) instead.

Regarding the relative importance plot of factors, how exactly is it computed? Are those model parameters? Or is some other interpretability algorithm used?

No additional algorithm is used. The importances are computed from the partial variance of the target that each predictor explains when included in the model.
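
As a simplified illustration of the idea (a hypothetical computation, not TIM's internal code), the variance of the target that each predictor newly explains as it enters the model can be measured like this:

```python
import numpy as np

def variance_explained_importances(X, y):
    """Illustrative importances: variance of the target that each column of X
    newly explains when it is added to the model (incremental sum of squares)."""
    design = np.ones((len(y), 1))              # start from an intercept-only model
    prev_rss = np.sum((y - y.mean()) ** 2)
    gains = []
    for j in range(X.shape[1]):
        design = np.hstack([design, X[:, [j]]])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = np.sum((y - design @ beta) ** 2)
        gains.append(prev_rss - rss)           # variance newly explained by predictor j
        prev_rss = rss
    gains = np.array(gains)
    return gains / gains.sum()                 # normalise to relative importances

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
y = 3 * X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(200)
print(variance_explained_importances(X, y).round(3))
```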

Who are your main customers? Or rather, what industries typically employ your software? Do these companies normally have analytical forecasting methods/systems in place, or do you normally replace human planners?

TIM is industry agnostic; we have rollouts in energy, retail, manufacturing, and more. TIM usually does not replace human planners: it enhances their capabilities and the scale at which they can operate. Either the company already has a model implemented and is looking for a solution that provides higher accuracy, speed and flexibility, or it has data scientists with many varied tasks and TIM serves to make their life easier and their work more scalable and transferable across the company.

Is TIM benchmarked against ML methods used in practice?

Yes. As part of TIM's development, we gather a large number of datasets across different industries and constantly benchmark the algorithm against the latest time series solutions and algorithms.

Do you compare the forecasts of your algorithm to a naïve benchmark?

No, this is not an explicit part of the model building process. TIM relies on information criteria to make sure it does not overfit the data. If overfitting does occur anyway, the user can easily control the model complexity. TIM typically outperforms naïve benchmarks; the more complex the dataset, the bigger the difference.
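
If you want to run such a check yourself, outside of TIM, a simple comparison against a naïve last-value forecast could look like this (illustrative data only):

```python
import numpy as np

def mae(actual, forecast):
    return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast)))

# Actuals, a model forecast, and a naive benchmark that carries the last value forward.
actual = np.array([102.0, 98.0, 105.0, 110.0])
model_forecast = np.array([101.0, 99.0, 104.0, 108.0])
naive_forecast = np.array([100.0, 102.0, 98.0, 105.0])  # previous observation repeated

skill = 1 - mae(actual, model_forecast) / mae(actual, naive_forecast)
print(f"model MAE: {mae(actual, model_forecast):.2f}, "
      f"naive MAE: {mae(actual, naive_forecast):.2f}, skill: {skill:.2%}")
```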

When a data input changes, obviously some notification to the end user needs to take place and then a decision of whether the change is temporary or permanent needs to be made before a decision to alter the model is made. How does your solution help this situation, other than just the speed to recalculate a model?

TIM returns warnings about suspicious changes in the input data compared to the behavior observed historically. In addition, as part of our ML Ops functionality, accuracy drift is monitored and the user is alerted whenever a drift in accuracy occurs.
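
A minimal sketch of this kind of drift check, assuming a simple rolling-error rule rather than TIM's actual implementation:

```python
import numpy as np

def accuracy_drift_alert(errors, window=24, factor=1.5):
    """Flag a drift when the mean absolute error over the most recent window
    exceeds the historical mean absolute error by a given factor."""
    abs_errors = np.abs(np.asarray(errors))
    historical = abs_errors[:-window].mean()
    recent = abs_errors[-window:].mean()
    return recent > factor * historical, recent, historical

rng = np.random.default_rng(3)
errors = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 3, 24)])  # accuracy degrades at the end
print(accuracy_drift_alert(errors))
```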

How much data do you need for your methods to perform well?

This question is difficult to answer. With time series, it is more often providing too much data that causes problems, not the other way around, because the dynamics change a lot. Also, TIM works with what it gets: if there are few data points, TIM will not build complex models, and if you start providing more data points, the models produced by TIM may change as well. Retraining with TIM is not something one should see as an obstacle, but rather as an advantage over other time series algorithms.

How are "structural changes" detected?

There are several checks implemented, based on changes in the moving variance and moving mean of the signal.
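
As a rough illustration of the idea (a hypothetical sketch, not the exact checks TIM runs), one can flag points where the moving mean or moving variance changes abruptly compared with the window immediately before it:

```python
import numpy as np
import pandas as pd

def structural_change_flags(signal, window=48, mean_threshold=4.0, var_ratio=3.0):
    """Flag points where the moving mean or moving variance of the signal
    changes abruptly compared with the preceding window."""
    s = pd.Series(signal)
    roll_mean = s.rolling(window).mean()
    roll_var = s.rolling(window).var()
    prev_mean, prev_var = roll_mean.shift(window), roll_var.shift(window)
    mean_shift = (roll_mean - prev_mean).abs() / np.sqrt(prev_var)
    var_change = np.maximum(roll_var / prev_var, prev_var / roll_var)
    return (mean_shift > mean_threshold) | (var_change > var_ratio)

rng = np.random.default_rng(4)
signal = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 100)])  # level shift at t=300
flags = structural_change_flags(signal)
print("first flagged index:", int(flags.idxmax()) if flags.any() else None)
```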