I tried clearml.model.InputModel
and successfully downloaded a model.
Is this the expected way to consume a trained model for inference?
Hi SoggyFrog26 welcome to ClearML!
The way you found definitely works well, especially since it lets you change the input model from the UI when you use the task as a template for orchestrated inference.
Note that you can also attach metadata to the model, such as the labels it was trained on and the network structure. You can also use a model package to ... well, package whatever you need with the model. If you want a concrete example, I think we need a little more detail on the framework, use case, etc.
See e.g. the model upload example:
https://allegro.ai/clearml/docs/docs/examples/frameworks/pytorch/manual_model_upload.html
GrumpyPenguin23 Hi, thanks for the guidance!
Putting some metadata into the model sounds nice.
That is exactly what I was wondering about: how to take care of labels without having to handle them as a separate dataset at inference time.
Yeah, here is what I have uploaded:
https://github.com/kayhide/PyTorch-YOLOv3/tree/clearml
This is a fork of the well-known PyTorch YOLOv3 sample, adapted to ClearML.