
Experiments

Experiments.png

An experiment is the central working area in the TIM platform, where users find the core of the analytics. Each experiment focuses on a single type of analytics: either forecasting or anomaly detection.

The experiments overview

Since experiments sit a single level deep inside use cases, the experiments overview page is the same page as the use case detail page. More information on this page can be found in the section on the use case in detail.

The experiment in detail

On an experiment's detail page (the jobs overview page), all of the information regarding the experiment can be found: the name, description and type (forecasting or anomaly detection), the dataset and dataset version it revolves around, and information on all of the iterations (jobs) it contains.

ExperimentForecasting.png

The Iterations table provides an overview of all the jobs contained in the experiment. For each job (each row in the table), important configuration settings (the in-sample and out-of-sample ranges, the forecasting horizon for forecasting jobs, and the job type for anomaly detection jobs) are displayed, as well as the job's status. If the job has been executed successfully (status Finished or Finished With Warning), its aggregated MAPE (Mean Absolute Percentage Error) is shown as well. Clicking a job that has been executed successfully opens up that job's results.

IterationsTableF.png

IterationsTableAD.png
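To make the shape of this table concrete, the minimal sketch below models one row as a simple Python data structure. The field names are hypothetical illustrations of the information listed above, not the platform's actual data model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Iteration:
    """One row of the Iterations table (field names are hypothetical)."""
    name: str
    status: str                                # e.g. "Finished" or "Finished With Warning"
    in_sample_range: Tuple[str, str]           # start/end of the in-sample period
    out_of_sample_range: Tuple[str, str]       # start/end of the out-of-sample period
    forecasting_horizon: Optional[int] = None  # forecasting jobs only
    job_type: Optional[str] = None             # anomaly detection jobs only
    aggregated_mape: Optional[float] = None    # only available after a successful run

def has_results(job: Iteration) -> bool:
    """Only successfully executed jobs open up their results when clicked."""
    return job.status in ("Finished", "Finished With Warning")
```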

That is not all; more on the specifics of this page follows below.

From this page, a user can edit or delete the experiment. From the Iterations table, a user can delete a job/an iteration.

  • Editing: Editing an experiment allows the user to update its name and description.

  • Deleting: Be careful when deleting an experiment: this will also permanently delete any data contained in the experiment, including the ML jobs.

  • Deleting a job/an iteration: Deleting a job/an iteration will permanently delete the data it contains, such as its results.

Opening an experiment: a clean slate

Within an experiment, users can once again explore the data and browse through the configuration options. An overview of the configuration options can be found in the dedicated sections on the configuration for forecasting and the configuration for anomaly detection.

Once the desired settings are selected, an ML request can be triggered and TIM Studio will show how the calculation progresses. This request involves the creation of a new job, which will be added to the Iterations table.
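As a rough illustration of this flow, the sketch below triggers a job and polls its status until it reaches a terminal state. The trigger_job and get_job_status callables are hypothetical stand-ins for whatever client or API is used to talk to the platform, and any status name other than Finished and Finished With Warning is an assumption.

```python
import time

# Assumed terminal statuses; only the first two are documented above.
TERMINAL_STATUSES = ("Finished", "Finished With Warning", "Failed")

def run_ml_request(trigger_job, get_job_status, poll_seconds: float = 2.0):
    """Trigger a new job and wait until it reaches a terminal status.

    trigger_job and get_job_status are hypothetical callables, not part of
    TIM Studio itself.
    """
    job_id = trigger_job()               # creates the new job (a new row in the Iterations table)
    while True:
        status = get_job_status(job_id)  # e.g. "Running", "Finished", ...
        print(f"Job {job_id}: {status}")
        if status in TERMINAL_STATUSES:
            return status                # results can be examined for successfully finished jobs
        time.sleep(poll_seconds)         # keep reporting progress while the calculation runs
```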

The results of a job

After the model has been built and applied (or when browsing to a job that has already been executed), it’s time to examine the results. Users can get insights into the models that were used and review the performance.

The results in the line chart(s)

The line chart will be updated with the results of the job.

For forecasting, the production forecast is one of those results. The production forecast is the actual forecast the user requested, and it is accompanied by prediction intervals. For each bin (more information on bins can be found in the section on forecasting outputs), the in-sample and out-of-sample backtesting results are shown.

ForecastingLineChart.png
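For readers who prefer a concrete picture of backtesting ranges, the minimal sketch below splits a toy series into an in-sample part (used to build the model) and an out-of-sample part (held back for evaluation). The toy data and the 80/20 split are arbitrary choices for illustration, not TIM defaults.

```python
# Illustrative only: split a toy series of (timestamp, value) observations into
# the in-sample range (used to build the model) and the out-of-sample range
# (held back to evaluate it).
observations = [(t, 100.0 + 0.5 * t) for t in range(100)]

split_point = int(len(observations) * 0.8)
in_sample = observations[:split_point]      # the model is built on this part
out_of_sample = observations[split_point:]  # backtesting compares forecasts against this part

print(f"{len(in_sample)} in-sample points, {len(out_of_sample)} out-of-sample points")
```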

For anomaly detection, the anomalies are part of those results. The anomalies are linked to the anomaly indicators in the Anomaly indicators line chart: a user can select the perspectives they would like to see in that chart, and both the indicators and the anomalies will update accordingly. The results also include the normal behavior: how the target would have been expected to behave, had it not had any anomalies. These results are applicable for both model building and detection jobs.

ADLineChart.png

AnomalyIndicators.png

The accuracy metrics

The accuracy metrics cards show the accuracy metrics (error measures) for both in-sample and out-of-sample backtesting, where applicable. Metrics include the MAE (Mean Absolute Error), RMSE (Root Mean Squared Error) and MAPE (Mean Absolute Percentage Error). The metrics are shown for each of the bins.

inSampleMeasures.png

outOfSampleMeasures.png
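As a reference for how these three error measures are defined, the sketch below implements them in plain Python; the toy actual and predicted values are made up for illustration.

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent (undefined when an actual value is 0)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Example: compare out-of-sample actuals with the backtesting forecast.
actual    = [10.0, 12.0, 11.0, 13.0]
predicted = [ 9.5, 12.5, 10.0, 13.5]
print(mae(actual, predicted), rmse(actual, predicted), mape(actual, predicted))
```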

The models

By default, the predictor importances are shown, visualized in a treemap. This visualization represents the extent to which each predictor (each input variable) contributes to the models and thus to the forecast or detection. The color scheme of the treemap matches the colors of the predictors in the line chart; hovering over the treemap gives the user more information about the relevant predictors and their importance.

TreemapPredictors.png

Flipping the switch above the treemap changes the visualization to the feature importances. This visualization represents the extent to which each feature (a generated transformation of input variables and other data) contributes to the model and thus to the forecast or detection. If applicable, a slider bar appears, allowing users to browse through the different models used in calculating the requested forecast or detection. Again, the color scheme of the treemap matches the colors of the predictors in the line chart; this time a single feature may have multiple colors, meaning it represents an interaction between multiple (input or artificial) variables. Again, hovering over one of the treemaps gives the user more information about the relevant features and their importance.

Treemap.png
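To illustrate the relationship between feature importances and predictor importances, the sketch below rolls a few made-up feature importances up to predictor level by splitting each interaction's importance evenly across the predictors it involves. This is only an illustrative aggregation, not necessarily how TIM computes the treemap values, and the predictor names are invented.

```python
from collections import defaultdict

# Hypothetical feature importances: each feature is a transformation of one or
# more predictors; interaction features involve several predictors at once.
feature_importances = {
    ("Temperature",):               0.40,  # transformation of a single predictor
    ("Temperature", "Hour of day"): 0.35,  # interaction between two variables
    ("Load lag 24h",):              0.25,
}

# Illustrative roll-up: split an interaction's importance evenly across its predictors.
predictor_importances = defaultdict(float)
for predictors, importance in feature_importances.items():
    for predictor in predictors:
        predictor_importances[predictor] += importance / len(predictors)

for predictor, importance in sorted(predictor_importances.items(), key=lambda kv: -kv[1]):
    print(f"{predictor}: {importance:.2f}")
```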

The job's configuration

Users can adjust the configuration where needed by iterating over a job, so it is useful to be able to review the configuration of a job that has already been successfully executed. The same configuration cards that are used to set a job's configuration when creating it can be used to review an existing job's configuration. In that case the settings are disabled, to avoid confusion as to what the user is looking at (the configuration used for an existing job versus the configuration to be used for a new job).

Forecasting Configuration

FConfiguration.png

Anomaly Detection Configuration

ADConfiguration.png

The process of iterating

After the results of a job have been examined, a user may want to continue experimenting and iterate over that job to find ways to improve the results. The Iterate button allows the user to do just that. When clicked, it brings the user back to the clean-slate state of the experiment, with one exception: the configuration remains as it was in the previously viewed job. Therefore, the user does not have to start over from the default configuration; they can adjust the previous configuration as desired.
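As a minimal sketch of this iterate-and-adjust workflow, the snippet below copies a previous configuration and changes only one setting before the next run. The configuration keys and values shown are assumptions for illustration, not TIM Studio's actual settings schema.

```python
import copy

# Hypothetical configuration of the previously viewed job.
previous_config = {
    "forecasting_horizon": 24,
    "in_sample_range": ("2023-01-01", "2023-06-30"),
    "out_of_sample_range": ("2023-07-01", "2023-07-31"),
}

new_config = copy.deepcopy(previous_config)  # keep all other settings as they were
new_config["forecasting_horizon"] = 48       # adjust only what the next iteration should change

print(new_config)
```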