@<1523704157695905792:profile|VivaciousBadger56> , ClearML's model repository is exactly for that purpose. You can basically use InputModel and OutputModel for handling models in relation to tasks
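For example, a minimal sketch of the two sides (project/task names, file paths and the framework value below are made up for illustration):
```python
from clearml import Task, OutputModel, InputModel

# register a trained weights file against the current task
task = Task.init(project_name="PoC", task_name="train-demo")
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="model.pt")  # uploads the file and links it to the task

# later (possibly from another script): fetch the same model back by id
input_model = InputModel(model_id=output_model.id)
local_weights_path = input_model.get_local_copy()  # downloads the weights for reuse
```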
@<1523701070390366208:profile|CostlyOstrich36> : After more playing around, it seems that ClearML Server does not store the models or artifacts itself. These are stored somewhere else (e.g., an AWS S3 bucket) or on my local machine, and ClearML Server only stores configuration parameters and previews (e.g., when the artifact is a pandas dataframe). Is that right? Is there a way to save the models completely on the ClearML server?
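For context, this is roughly how I initialize things at the moment (just a sketch; the bucket name is made up), which I assume is why the files end up in S3 or locally rather than on the server:
```python
from clearml import Task

# output_uri decides where model snapshots / artifacts are uploaded;
# here they go to an S3 bucket, so only metadata ends up on the ClearML Server
task = Task.init(
    project_name="PoC",
    task_name="storage-demo",
    output_uri="s3://my-bucket/models",  # made-up bucket; without output_uri the files stay local
)
```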
Also, I could not find any larger examples on github about Model, InputModel, or OutputModel. It's kind of difficult to build a PoC this way... 😅
@<1523701070390366208:profile|CostlyOstrich36> : Thanks, where can I find more information on ClearML's model repository? I can hardly find any in the documentation.
Also, that leaves open the question of what Model is for. I described how I understand the workflow should look, but my question remains open...
@<1523701070390366208:profile|CostlyOstrich36> , I am building a PoC, evaluating whether we should use ClearML for our entire ML team and go for Scale or Enterprise pricing. For that I need to know all/most of ClearML's capabilities and concepts to see if ClearML is future-proof.
TL;DR: it's difficult to narrow down, but amongst other things we need a model store
I think that Model is used for general actions allowed by the SDK, while InputModel offers an easier interface when working with the Task object directly.
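Roughly, something like this (just a sketch, the model id is a placeholder):
```python
from clearml import Model, InputModel, Task

# Model: a general, task-independent handle on an entry in the model repository
model = Model(model_id="<some-model-id>")   # placeholder id
weights_path = model.get_local_copy()       # e.g. download its weights

# InputModel: the same kind of entry, but wired into a task as its input,
# so it can be overridden from the UI when the task runs remotely
task = Task.init(project_name="PoC", task_name="uses-a-model")
input_model = InputModel(model_id="<some-model-id>")
input_model.connect(task)
```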
What is your use case?
@<1523701070390366208:profile|CostlyOstrich36> I read those, but did not understand.
As far as I understand, the workflow is like this: I define some model. Then I register it as an OutputModel. Then I train it. During training I save snapshots (no idea how, though) and then I save the final model when training is finished. This way the Model is a) connected to the task and b) available in the model store of ClearML.
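In code, I imagine the training side roughly like this (just a sketch of my understanding; file names are made up and the snapshot part is a guess):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="PoC", task_name="train")
output_model = OutputModel(task=task, framework="PyTorch")

for epoch in range(10):
    # ... actual training step, writing weights to checkpoint.pt ...
    # snapshot: re-uploading the weights file updates the model entry
    output_model.update_weights(weights_filename="checkpoint.pt", iteration=epoch)

# final weights once training is finished
output_model.update_weights(weights_filename="final.pt")
```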
Later, in a different task, I can load an already trained model with InputModel. This InputModel is read-only (regarding the ClearML model store), but I can make a copy to work on and, e.g., train further. So, as I would think of it: I get the model from the model store with InputModel, make a copy, register the copy with my new task as an OutputModel, and then go on as in the last paragraph.
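And the second task I picture roughly like this (again only a sketch; the model id and file names are placeholders):
```python
from clearml import Task, InputModel, OutputModel

task = Task.init(project_name="PoC", task_name="fine-tune")

# read-only handle on the previously trained model
input_model = InputModel(model_id="<previous-model-id>")  # placeholder id
input_model.connect(task)                    # record it as this task's input model
weights_path = input_model.get_local_copy()  # local copy to work on

# ... load weights_path and continue training ...

# register the further-trained copy as this task's own output model
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="fine_tuned.pt")
```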
No idea how the Model comes into play here. Let me compare InputModel with Model:
class Model()
Represent an existing model in the system, search by model id. The Model will be read-only and can be used to pre initialize a network
class InputModel()
Load an existing model in the system, search by model ID. The Model will be read-only and can be used to pre initialize a network. We can connect the model to a task as input model, then when running remotely override it with the UI.
Load a model from the Model artifactory, based on model_id (uuid) or a model name/projects/tags combination.
Sounds just the same to me. What is the difference, @<1523701070390366208:profile|CostlyOstrich36> ?