Thank you MuddyCrab47 !
Regarding model versioning:
All models are logged automatically by trains (no need to specify anything, as long as you are using one of the automagically connected frameworks: PyTorch/Keras/TF/scikit-learn)
You can see how it looks on the demo app:
https://demoapp.trains.allegro.ai/projects/5371015f43f043b1b4ad7203c1ff4a95/models
Regarding dataset management: we have a simple workflow, demonstrated below, that basically uses artifacts as dataset storage, with a very easy interface for retrieving them (including caching).
The actual dataset ID is the ID of the experiment that uploaded/created it.
See here:
https://github.com/allegroai/events/blob/master/odsc20-east/generic/dataset_artifact.py
https://github.com/allegroai/events/blob/master/odsc20-east/generic/process_dataset.py
https://github.com/allegroai/events/blob/master/odsc20-east/scikit-learn/sklearn_jupyter.ipynb
A more robust dataset management is available on the enterprise edition (including searchability, debiasing etc.)
is the model overridden or its version is automatically increased?
You will get another model with the same name (assuming the second Task has the same name) but a new ID. So if I understand you correctly, we do have auto-versioning :)
understood trains does not have auto versioning
What do you mean auto versioning ?
Task name is not unique, but task ID is. You can have multiple tasks with the same name, and you can edit the name post-execution.
Let's assume I've created a model with a task. When I execute a second task with same model name, is the model overridden or its version is automatically increased?
Thank you for your quick reply, AgitatedDove14.
I've looked at the link you provided. As I understand it, trains does not have auto versioning; it just infers the version from the task name, right?