The DevOps discipline transformed the way applications are developed (dev), deployed, and run (ops). With the arrival of ML, it is no longer an application that is deployed and used, but a model, the product of the ML phase. Nevertheless, to make the model operational, the Dev part still needs to build the required functionality that enables the use of models. Thus we arrive at the MLDevOps acronym.


MLDevOps in TIM Studio

TIM Studio is built on top of TIM Engine. This means you can rely on automation for most of the work while using a user-friendly interface to manage things, gain insights, and be far more productive.

The ML

Model development and building

The model development and building process is fully automated. TIM Engine derives the best possible features from your data and spares you the hyperparameter optimization phase as well. Nevertheless, you can always adjust TIM Engine settings if you would like to alter certain areas.

In fact, it is not only one model that is built. To achieve the best possible accuracy, TIM builds a whole range of models, the so-called Model Zoo.
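To illustrate the idea behind a model zoo (a generic sketch, not TIM's internals): fit several candidate models and keep the one with the lowest back-test error. The candidate models and series below are purely illustrative.

```python
# Generic "model zoo" sketch: evaluate several simple forecasters
# on a holdout window and select the one with the lowest error.

def naive_last(history):
    """Forecast the last observed value."""
    return history[-1]

def moving_average(history, window=3):
    """Forecast the mean of the last `window` values."""
    return sum(history[-window:]) / window

def drift(history):
    """Extrapolate the average step between observations."""
    step = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + step

def backtest(model, series, holdout=4):
    """Mean absolute error of one-step-ahead forecasts on the holdout."""
    errors = []
    for i in range(len(series) - holdout, len(series)):
        errors.append(abs(model(series[:i]) - series[i]))
    return sum(errors) / len(errors)

series = [10, 12, 13, 15, 16, 18, 19, 21, 22, 24]
zoo = {"naive": naive_last, "moving_average": moving_average, "drift": drift}

scores = {name: backtest(model, series) for name, model in zoo.items()}
best = min(scores, key=scores.get)
print(best)  # the trending series favors the drift model
```

TIM automates this kind of comparison for you, over a far richer set of models and features.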


Another part of the ML phase you can influence is the data itself. For instance, during the exploration of your Dataset you may spot suspicious values, too many missing values, etc.; you can then fix and upload the correct values, or completely exclude a column with faulty data from model building.

Experiment Workbench

The place where you can adjust both areas is the Experiment Workbench:

  • adjust settings of TIM Engine,
  • influence which data (columns) are used for model building.

For each iteration of an experiment (back-test), you are given insights into the models built, the features used, accuracy metrics, etc.

When finished, you can decide which combination of settings will work best in production (ops) by activating a particular iteration as the Production setup.

The Dev

Developing code to enable the use of models in production is not required. Traditionally, you would need to develop, build, test, release, and deploy an application to run the models, e.g. in a container with a dedicated API end-point. None of that is required here. Once you create a Production setup, you can start using it in production, as the end-point is already prepared.

The Ops

Release and Deployment

Operationalizing models (the Model Zoo) with TIM is simple. You just need to activate a particular iteration as the Production setup and use it right away. What is more, there is more than one way to use it: either re-use the pre-built Model Zoo (InstantML), or build a new one on the spot from the current data in a fraction of the time (RTInstantML).

To start forecasting, just navigate to the Production tab of the respective Use Case and create your first forecast. All the bits and pieces are already "deployed". If you would like to automate the forecasting process, you will be able to call the API end-point with the respective parameters from the platform of your choice; this feature is planned on our roadmap.
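Automating a forecast call could look roughly like the sketch below. Since the automation end-point is still on the roadmap, the URL, path, parameter names, and authentication scheme here are all placeholders, not the actual TIM API.

```python
# Hypothetical sketch of assembling a forecast request over HTTP.
# Every name below (URL, "horizon" parameter, bearer token) is a
# placeholder; the real TIM API may look entirely different.
import json

def build_forecast_request(use_case_id, horizon, token):
    """Assemble a (hypothetical) forecast request for a Use Case."""
    return {
        "url": f"https://tim.example.com/api/use-cases/{use_case_id}/forecast",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"horizon": horizon}),
    }

request = build_forecast_request("uc-123", horizon=24, token="secret")
print(request["url"])
# With a real end-point you would then send it, e.g.:
# requests.post(request["url"], headers=request["headers"], data=request["body"])
```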

Operate, monitor, scale

A model that performs well during back-testing in the dev phase can degrade once put into production to forecast regularly: over time, the characteristics of the data can change, or, in the worst case, data quality can suffer from technical errors in the data capture/preparation process. Consequently, the accuracy of predictions would drop.

TIM addresses this by providing valuable feedback and warnings, or, even better, by giving you the ability to merge model building and forecasting into a single operation with RTInstantML.

TIM's architecture supports auto-scaling by default, so you do not need to worry about adding more resources.

For each calculated forecast you are given Warnings should an unexpected situation occur in the data (for example, when relying on pre-built models). Accuracy drift monitoring is another feature that will bring peace to your operations; it is planned on our roadmap.
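The essence of accuracy drift monitoring can be sketched generically (this is not a TIM feature, just an illustration of the technique): compare recent forecast errors against the error level observed during back-testing and raise a warning when they drift apart beyond a tolerance.

```python
# Generic accuracy-drift check: warn when recent forecast error
# exceeds the back-test error by a tolerance factor.

def mape(actuals, forecasts):
    """Mean absolute percentage error over paired observations."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

def drift_warning(backtest_mape, actuals, forecasts, tolerance=1.5):
    """Return (warn, recent_error); warn when recent error > tolerance x back-test error."""
    recent = mape(actuals, forecasts)
    return recent > tolerance * backtest_mape, recent

# Back-test error was 5% MAPE; recent forecasts are off by roughly 12%.
warn, recent = drift_warning(0.05, [100, 110, 120], [112, 97, 134])
print(warn)  # True: recent error well above 1.5 x 5%
```

A check like this can run after every forecast; when it fires, rebuilding the Model Zoo from current data (as RTInstantML does) is the natural response.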