- Problem description
- Data Recommendation Template
- TIM Setup
- Demo example
The so-called M-competitions are among the most popular forecasting competitions in the world, bringing together the best forecasters from both industry and academia. Their popularity stems from the fair comparison of many different state-of-the-art algorithms, be it neural networks or classical statistical methods like ARIMA, and from the fact that there are usually thousands of different datasets, which prevents contestants from fine-tuning their algorithms for a specific case.
These competitions also tend to cause a lot of controversy in the forecasting world because, most of the time, the simpler statistical methods outperform the more sophisticated ones that are popular in academia. As you can read in this paper, "After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods".
TIM is built on deep statistical foundations, focusing more on the fundamentals than on the choice of algorithm. You can learn about the mathematics behind TIM here. This template will guide you through the third M-competition (M3), with its 3003 different datasets, and will show you how to use TIM to outperform competing methods in a fully automatic mode.
Data Recommendation Template¶
The dataset can be downloaded here. It consists of many different time series spanning various sampling rates and industries.
The task was to use all the available data except the last n observations and then forecast those; n differs across sampling rates.
We will run TIM in a fully automatic mode.
No predictors included.
The timestamp is the first column, and each value of the timestamp marks the beginning of the period it corresponds to, i.e. the row with timestamp 2011-01 corresponds to the whole period between 2011-01 and 2011-02.
In this example, we will simulate an n-samples-ahead scenario.
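The holdout logic above can be sketched in a few lines of Python. This is a minimal sketch, assuming the standard M3 horizons per sampling rate (18 for monthly, 8 for quarterly and "other", 6 for yearly); adjust the `HORIZONS` mapping if your copy of the dataset defines them differently.

```python
# Hold out the last n samples of each series for validation.
# HORIZONS follows the usual M3 convention; this is an assumption,
# not something prescribed by TIM itself.
HORIZONS = {"monthly": 18, "quarterly": 8, "yearly": 6, "other": 8}

def split_series(values, sampling_rate):
    """Return (model_building, holdout), where holdout is the last n samples."""
    n = HORIZONS[sampling_rate]
    if len(values) <= n:
        raise ValueError(f"series too short to hold out {n} samples")
    return values[:-n], values[-n:]
```

For a monthly series of 100 observations, `split_series` returns 82 samples for model building and an 18-sample holdout for validation.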
Model building and validation¶
We will use all the data except the last n samples for model building and the rest for model validation.
The best tool to use here is the TIM Connector because it is capable of performing many forecasting tasks in parallel. You will get your results in a couple of minutes.
1. Download TIM Connector¶
You can find download links in TIM Connector's section.
2. Create folder with dataset¶
    version: "1.0"
    type: Forecasting
    modelBuilding:
      data:
        rows:
          - from: '1990-01'
            to: '1994-02'
      configuration:
        usage:
          usageType: Repeating
          usageTime:
            - type: Day
              value: "*"
            - type: Hour
              value: "0"
            - type: Minute
              value: "0"
        predictionFrom:
          baseUnit: Sample
          offset: 1
        predictionTo:
          baseUnit: Sample
          offset: 18
    forecasting:
      configuration:
        predictionScope:
          type: Ranges
          ranges:
            - from: '1994-03'
              to: '1995-08'
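Since the competition has 3003 series, you will want to generate one such configuration file per series folder rather than writing them by hand. A minimal sketch, assuming the schema shown in the example above and keeping only the fields that vary per series (the date ranges); the `render_config` helper and its parameter names are illustrative, not part of TIM Connector.

```python
from string import Template

# Trimmed-down configuration template; only the date ranges vary per series.
# The fixed fields (usage, predictionFrom/To) from the full example above
# could be added here the same way.
CONFIG_TEMPLATE = Template('''\
version: "1.0"
type: Forecasting
modelBuilding:
  data:
    rows:
      - from: '$train_from'
        to: '$train_to'
forecasting:
  configuration:
    predictionScope:
      type: Ranges
      ranges:
        - from: '$test_from'
          to: '$test_to'
''')

def render_config(train_from, train_to, test_from, test_to):
    """Render a per-series configuration as a YAML string."""
    return CONFIG_TEMPLATE.substitute(
        train_from=train_from, train_to=train_to,
        test_from=test_from, test_to=test_to)
```

Writing the returned string to a `configuration` file inside each series folder then lets the connector pick all of them up in one run.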
3. Call connector from the command line (terminal)¶
First, change the directory to TIM Connector's builddir with the command:
> cd pathToConnector\builddir
Then, call the connector with the following command:
> pathToConnector\timconnect.exe path\to\M3
4. Fill in user credentials¶
Following the previous command, the user will be prompted to fill in their user credentials. Fill in the correct information and click "OK" to continue.
Output in console:
Output in folder: After a couple of minutes, every forecast should be ready. Using a script, you can now take the JSON files and the forecasts in them and evaluate whatever accuracy metric you wish against the methods in the original paper with the competition's results. Here is the top 10 of the whole competition when symmetric MAPE is used and all forecasting horizons are averaged. Bear in mind that TIM required no human supervision or hyperparameter tuning to achieve this. TIM can also deal with predictors and many different sampling rates, which most of these algorithms cannot.
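As a starting point for such a script, here is a minimal sketch of the symmetric MAPE used in the M3 evaluation (the 200 · |F − A| / (|A| + |F|) form, averaged over the horizon). How you extract the forecast values from the connector's JSON output depends on its file layout, so that part is left out; only the metric itself is shown.

```python
def smape(actual, forecast):
    """Symmetric MAPE in percent, averaged over the forecast horizon.

    Uses the 200 * |F - A| / (|A| + |F|) form from the M3 evaluation;
    pairs where both values are zero are skipped to avoid division by zero.
    """
    terms = [200.0 * abs(f - a) / (abs(a) + abs(f))
             for a, f in zip(actual, forecast)
             if abs(a) + abs(f) > 0]
    return sum(terms) / len(terms)
```

Averaging this value over all 3003 series and all horizons gives a number directly comparable to the published competition table.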