Datasets
Users can upload, preview, manage and explore datasets in TIM Studio. Users can leverage TIM's time-series database, and TIM Studio gives users an overview of metadata and statistics that may be relevant in the data exploration and preparation phase.
The repository also includes version management, so you can keep track of your version history and update your datasets as new data becomes available.
The datasets overview
In the datasets overview, a user will find a list of all the datasets they have access to. Relevant metadata - such as the workspace a dataset belongs to, the number of observations in the latest version and the variables it contains, the (estimated) sampling period, the last timestamp of the latest version and when it was last updated - can be seen directly from this overview.
On this page, new datasets can be uploaded, and existing datasets can be downloaded, updated, edited and deleted.
Uploading: Uploading a new dataset allows the user to select the data source and set the dataset's name and description. During the upload process, users see a preview of the dataset, allowing them to visually check whether they selected the right dataset and set the correct properties.
- CSV: If the user opts to upload a CSV file, they will be able to set its properties: the column separator, the decimal separator, the timestamp format and the timestamp column. TIM Studio aids the user by automatically recognizing and suggesting a range of timestamp formats, CSV separators and decimal delimiters; a sketch of how these properties map onto parsing follows this list.
- SQL: If the user opts to connect to an SQL table, they will be able to set the connection properties: the database name, the database type (supported types include PostgreSQL, MySQL, MariaDB and SQL_Server), the host, the user name, the password, the port and the table name.
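As an illustration, the upload properties correspond roughly to the parameters one would pass when parsing the same data manually. The sketch below uses pandas; the file name, column names, timestamp format and connection details are hypothetical examples, not values TIM Studio prescribes.

```python
import pandas as pd

# Hypothetical CSV with ";" as column separator and "," as decimal separator
df = pd.read_csv(
    "consumption.csv",   # hypothetical file name
    sep=";",             # column separator
    decimal=",",         # decimal separator
)

# Timestamp column and timestamp format, as set during upload
df["Timestamp"] = pd.to_datetime(df["Timestamp"], format="%Y-%m-%d %H:%M:%S")
df = df.set_index("Timestamp").sort_index()

# The SQL connection properties map onto a typical connection string
# (illustrative only; TIM Studio manages the connection itself):
#   postgresql://user:password@host:5432/database  +  table name
```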
Downloading: Downloading a dataset extracts a CSV file containing the dataset, which can then be uploaded to another workspace or used for any purpose that does not involve TIM Studio.
Updating: Updating a dataset allows the user to upload a new version of an existing dataset, adding new observations or overwriting existing ones, as sketched below. During the update process, users see a preview of the dataset, allowing them to visually check whether they selected the right dataset and set the correct properties. When updating by uploading a CSV file, TIM Studio again aids the user by automatically recognizing and suggesting a range of timestamp formats, CSV separators and decimal delimiters.
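As a rough illustration of what a new version contributes, the following is a minimal sketch of the described behavior (not TIM's implementation), assuming both versions are DataFrames indexed by timestamp with hypothetical values:

```python
import pandas as pd

idx = pd.date_range("2021-01-01", periods=4, freq="h")
existing = pd.DataFrame({"Cnt": [10, 12, 11, 13]}, index=idx)
new = pd.DataFrame({"Cnt": [14, 15, 16]},
                   index=pd.date_range("2021-01-01 03:00", periods=3, freq="h"))

# Observations in the new version overwrite overlapping timestamps and
# append the rest, yielding the updated dataset version.
updated = pd.concat([existing, new])
updated = updated[~updated.index.duplicated(keep="last")].sort_index()
```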
Editing: Editing a dataset allows the user to update its name and its description.
Deleting: Deleting a dataset will also permanently delete all of its versions.
Viewing dataset logs: A dataset's logs contain information about TIM's actions on that dataset (during uploading and updating), as well as any findings TIM makes (informative messages as well as warnings) and any errors that may have occurred.
The dataset in detail
Detail view mode
On a dataset's detail page, all of the information regarding the dataset can be found. This includes the name, the description, when it was created and when it was last updated, the estimated sampling period, the number of observations and variables it contains, and how many (and which) versions of this dataset exist. The page also contains a detailed graph and table, enabling users to explore the data itself in detail. Users can find an overview of the dataset's statistics, including relevant information for each of the variables (minimum, maximum and average values, as well as the number of missing observations). To quickly navigate to where the dataset is used in the structure of the TIM repository, the page also shows the use cases that center around it.
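The per-variable statistics described above are essentially column summaries. A minimal sketch, assuming the dataset is loaded as a pandas DataFrame `df` of numeric variables indexed by timestamp (as in the upload sketch earlier):

```python
import pandas as pd

# Minimum, maximum and average per variable, plus missing-observation counts
stats = pd.DataFrame({
    "min": df.min(),
    "max": df.max(),
    "mean": df.mean(numeric_only=True),
    "missing": df.isna().sum(),
})
print(stats)
```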
The version pills in the title card give an overview of the available versions of the dataset and their status. These pills are also clickable (when the status is finished or finished with warning), and clicking them will update the page with the data and statistics of the selected version.
From this page, a user can easily navigate to a specific use case of interest. They can also browse through the different versions of the dataset. Additionally, a user can update the dataset (i.e. upload a new version), as well as edit and delete it. To start building models on this dataset, the user can also link an existing use case to it or add a new use case linked to it.
Updating: Updating a dataset allows the user to upload a new version of an existing dataset, adding new observations or overwriting existing ones. During the update process, users see a preview of the dataset, allowing them to visually check whether they selected the right dataset and set the correct properties. When updating by uploading a CSV file, TIM Studio again aids the user by automatically recognizing and suggesting a range of timestamp formats, CSV separators and decimal delimiters.
Editing: Editing a dataset allows the user to update its name and its description.
Deleting: Deleting a dataset will also permanently delete all of its versions.
Inspecting variables' availabilities: See the section on the availability component below.
Scaling and aggregating the data: See the section on the timescale component below.
Linking to a use case: Linking a dataset to an existing use case is possible if the relevant workspace contains at least one use case that does not have a linked dataset. By creating this link, the user can start creating experiments and executing jobs in the linked use case.
Adding a linked use case: Adding a use case allows the user to create a new use case that already contains a linked dataset, namely the one that the process was initiated from. This empowers the user to start creating experiments and executing jobs in this newly created use case right away.
Viewing dataset logs: A dataset's logs contain information about TIM's actions on that dataset (during uploading and updating), as well as any findings TIM makes (informative messages as well as warnings) and any errors that may have occurred.
Quick forecasting: Quick forecasting allows a user to quickly start forecasting with the current dataset; TIM Studio takes care of all actions leading up to it (creating a use case, linking the dataset to it, creating an experiment, opening the experiment...).
Data availability
The card titled Target & Availability displays the availability of all variables in the dataset relative to that of the target. To view the overview of availabilities that's relevant to a specific use case, the user can select the desired version of the dataset and the intended target variable in this card. Following this, TIM Studio automatically determines an appropriate scale for looking at the availabilities (in the example image below, the scale is set to Days even though the dataset is sampled hourly); it is however possible to manually adjust this to a user's specific needs.
Below this scale, each variable present in the dataset is displayed together with a time axis indicating relative availabilities. The availability of the target variable (always 0) is indicated by the blue vertical mark. This way, each variable's relative availability can easily be read: for example, the variable Windspeed is available until one day before the end of the target variable, while the variable Hum_p is available for two days after the end of the target variable. The exact relative availability of each variable is also displayed. This makes it easy to check that Windspeed's availability is indeed exactly 24 hours less than that of the target, and that Holiday, which seemingly goes on into the future based on the time axis, is available for 264 samples or hours (11 days) past the end of the target variable Cnt.
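Conceptually, these relative availabilities boil down to comparing each variable's last non-missing timestamp with the target's, expressed in samples. A minimal sketch under that assumption, with the variable names taken from the example and `df` the dataset as a pandas DataFrame indexed by timestamp:

```python
import pandas as pd

sampling_period = pd.Timedelta(hours=1)  # estimated sampling period
target = "Cnt"                           # target variable from the example

# Last timestamp with a non-missing value, per variable
last_valid = {col: df[col].last_valid_index() for col in df.columns}

# Offset from the target's last timestamp, in samples (target itself is 0)
relative_availability = {
    col: int((ts - last_valid[target]) / sampling_period)
    for col, ts in last_valid.items()
}
# e.g. {"Cnt": 0, "Windspeed": -24, "Hum_p": 48, "Holiday": 264}
```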
Data transformation
In the header section of the line chart card, a collapsible button labeled Timescale can be found. This button provides a summary of the sampling period (in the example below, 1 hour) and aggregation (in the example below, mean) of the data.
TIM makes it possible to adjust these settings and to scale and aggregate the data to the specific needs of the challenge a user is focusing on. A typical example is sales data: sales may be measured hourly, but a daily forecast is desired; scaling the data to a daily frequency while aggregating by summation before starting the forecast achieves this.
Timescaling can be applied in set amounts of base units, with the available base units being day, hour, minute and second. Aggregation is available by mean, sum, minimum and maximum. The aggregation that is set relates to the target variable. By default, numerical variables are aggregated by mean and boolean variables are aggregated by maximum.
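For intuition, the sales example above corresponds to a resampling step like the following sketch (pandas semantics; the column names are hypothetical, and the defaults mirror those just described):

```python
# Hourly data scaled to a daily sampling period; the chosen aggregation (sum)
# applies to the target, while the other variables fall back to the defaults.
daily = df.resample("1D").agg({
    "Sales": "sum",         # target, aggregated by sum for a daily forecast
    "Temperature": "mean",  # numerical variable: default mean
    "Holiday": "max",       # boolean variable: default maximum
})
```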
Apart from timescale and aggregation, this menu also allows setting the imputation: i.e. whether missing observations are filled in and, if so, how. The imputation type can be set to Linear, LOCF (last observation carried forward) or None; the length can be set by specifying the maximum number of successive samples that should be imputed (not applicable for type None). By default, a linear imputation with a maximum length of 6 samples is applied.
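A minimal sketch of what these imputation types amount to, using pandas equivalents (an assumption about the semantics, not TIM's exact implementation):

```python
# Linear: interpolate gaps of up to 6 successive samples (the default)
linear = df.interpolate(method="linear", limit=6)

# LOCF: carry the last observation forward, for up to 6 successive samples
locf = df.ffill(limit=6)

# None: leave missing observations as they are
untouched = df
```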
Operations view mode
Using the view mode menu allows a user to switch view modes, e.g. to navigate to the Operations view mode. Doing so brings up an overview of the history of this dataset, in other words all operations related to the dataset.
These operations include any dataset uploads and updates, as well as any jobs created with this dataset. Each operation is visualized in its own component. If the related entity (dataset version or job) has since been deleted, the operation is shown with reduced opacity to reflect this. If the related entity still exists, the arrow at the bottom right of the operation allows navigating to it. The icon at the left of the operation shows the operation subtype (dataset; job: model building, model application, RCA) and allows the user to expand and collapse the operation to see more (or less) of its metadata. This metadata includes the origin, indicating the interface from which the operation was performed, and the datetime of the operation. At the top right of each operation, the operation's action type (dataset: create, update, delete; job: build, rebuild, predict, detect, RCA) is displayed.
The operations card also contains a set of action buttons that can be used to manipulate the way in which the operations are shown. The sections below explain the possibilities of each of these actions.
Sorting
The sorting menu enables setting the order in which the operations are shown. By default, the most recent operations are shown first (i.e. reverse chronological order). It is possible to revert this and show the oldest operations first by clicking the arrow button next to "Chronological".
Filters
The filter menu enables defining which operations to include in the list and which operations to exclude from the list. By default, all operations are included. The "Forecasting" and "Anomaly Detection" options filter forecast and anomaly detection jobs, respectively; the "Existing" and "Deleted" options filter on the state of the related entity (dataset version or job).
Hover relations
The hover relations menu makes it easy to visualize relations between operations of interest. The "Parents" option shows an operation's parents (for a dataset operation, the parent dataset version; for a job operation, the used dataset version and, if applicable, the parent job), while the "Children" option shows an operation's children (for a dataset operation, the descendant dataset version; for a job operation, the descendant jobs).
When one or both of these options are selected, hovering an operation will bring forward that operation's relation(s) of choice, as shown in the image below.