Model building pipeline
This part of the series of orchestration solution templates guides the user through automating model building in TIM Studio using Azure Data Factory. Below is a step-by-step guide on how to build a pipeline that orchestrates this process.
- in the Creating Data Factory pipeline subsection below, we provide a step-by-step guide for building the orchestration pipeline
- to download and import the pipeline into your Azure Data Factory subscription, go to the Download the pipeline subsection
Creating Data Factory pipeline¶
To follow the steps below, there are three requirements:
- create a forecasting workspace
- load a dataset into TIM Studio
- create a model building definition and build a model for this dataset
Steps for creating the pipeline:
1. Log in to Azure Data Factory. You should see a welcome screen similar to the one in the image below. In the left pane, go to the “Author” tab.
2. Create a new pipeline by following the image below.
After that, you should see an empty pipeline. To simplify navigation on the screen, we split it visually into the “Factory Resources pane”, “Activity pane”, “Lower pane”, “Upper pane 1”, “Upper pane 2”, and “Workspace”; see the image below.
3. In the “Lower pane”, go to the “General” tab and type the name of the pipeline, e.g. "build_model".
4. Then, go to the “Parameters” tab and add four parameters:
- username – TIM Studio username
- password – TIM Studio password
- url – URL of TIM Studio (https://timstudio.tangent.works)
- mbd_id – ID of the model building definition under which we want to build models; set the default to 0
Set the default values as shown in the image below; we will populate them when running the pipeline.
5. Similarly, go to the “Variables” tab and create a variable named token.
6. Now, go to the “Activity pane” -> “General” and find the “Web” activity. Drag and drop this activity into the “Workspace”. Select the activity by clicking on it. In the “Lower pane”, go to the “General” tab and fill in the name of the activity, e.g. "get_auth_token". This activity will be responsible for user authentication.
7. By replicating the previous step, add two more activities: "Set variable" and "Web". These activities will be responsible for storing the token variable and for sending the build-model request, respectively. Join the activities with green arrows as illustrated in the image below. An arrow indicates the order in which the activities of the pipeline run, and the green color indicates that an activity executes only if the previous activity succeeded.
8. Select the first "Web" activity. In the “Lower pane”, go to the “Settings” tab. Here we have to specify the API request for user authentication, using the username and password parameters of the pipeline defined in step 4:
URL: @{pipeline().parameters.url}/api/auth/
Method: POST
Body: {"username":"@{pipeline().parameters.username}","password":"@{pipeline().parameters.password}"}
Enter each of the values by clicking on the corresponding input field and selecting "Add dynamic content".
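For a quick sanity check outside Data Factory, the same authentication request can be reproduced with a short script. Below is a minimal Python sketch using the requests library; the credential values are placeholders, and the token field name follows from the expression used in step 9.

```python
import requests

# Values that the pipeline receives as parameters (placeholders)
url = "https://timstudio.tangent.works"
username = "my_user"
password = "my_password"

# Equivalent of the "get_auth_token" Web activity: POST {url}/api/auth/
response = requests.post(
    f"{url}/api/auth/",
    json={"username": username, "password": password},
)
response.raise_for_status()

# The "Set variable" activity in step 9 reads output.token,
# i.e. the "token" field of the JSON response
token = response.json()["token"]
print(token)
```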
9. Select the "Set variable" activity. In the “Lower pane”, go to the “Settings” tab. Select the token variable defined in step 5 and set the value to:
value: @{activity('get_auth_token').output.token}
This takes the token returned by the "get_auth_token" activity and stores it in the token variable we created in step 5.
10. Select the second "Web" activity. In the “Lower pane”, go to the “Settings” tab. Here we have to specify the API request for building the model. Fill in the input fields as follows:
URL: @{pipeline().parameters.url}/api/prediction/mbds/@{pipeline().parameters.mbd_id}/build/
Method: GET
Headers: Authorization: Bearer @{variables('token')}
The URL of the request contains the ID of the model building definition under which we want to build models.
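This request can likewise be sketched in Python, mirroring the fields above; the mbd_id value is a placeholder, and the token is the one obtained by the authentication request.

```python
import requests

url = "https://timstudio.tangent.works"  # pipeline parameter "url"
mbd_id = 0                               # pipeline parameter "mbd_id" (placeholder)
token = "..."                            # token from the authentication request

# Equivalent of the second Web activity:
# GET {url}/api/prediction/mbds/{mbd_id}/build/ with a Bearer token
response = requests.get(
    f"{url}/api/prediction/mbds/{mbd_id}/build/",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
```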
11. To test the pipeline, click on “Debug” in the “Upper pane 1”. In the popup, we have to specify the input parameters of the pipeline. Change the default values of the “username”, “password”, and “url” parameters if needed.
The “mbd_id” can be found in the "Model building definition" screen of TIM Studio, in the "ID" column.
After filling in the parameters, click on “Finish”. You should see the output of the debug run in the “Lower pane” -> “Output”. All three activities should finish with the status "Succeeded".
The result can be verified in TIM Studio. Go to the "Model building definition" screen, expand the model building definition, and you should see the generated model.
12. Now we can run the pipeline and simulate production mode. Navigate to the "Upper pane 1" and select "Add trigger" -> "Trigger now". Fill in the parameters as when debugging the pipeline in the previous step.
In the left pane, go to the "Monitor" tab to see the result of the pipeline run. After a while, you should see that the execution finished with the status "Succeeded".
13. Go back to the "Author" tab in the left pane and save the pipeline by clicking on the “Publish all” button in the “Upper pane 2”.
Download the pipeline¶
The pipeline can be downloaded here. See Importing pipelines for instructions on importing it into your Azure Data Factory subscription.
Next steps¶
- to create a trigger that runs the pipeline on a schedule, see Triggering pipelines
- to automate the data update, see Data Update pipeline
- to automate the creation of forecasts, see Forecasting pipeline or RTinstantML pipeline