
For those using ClearML for model storage - do you use it just for storing checkpoints during training, or do you also use it as a canonical storage location for fully trained models? Like for services using these models that are deployed to production, do your services use ClearML's storage to access those models?

As a separate Q, for those using PyTorch: is it possible to serialize the models using ClearML plugins and then store them in the same storage ClearML uses for training artifacts?

  
  
Posted 3 years ago

Answers 5


For my own clarification: if I wanted to write a plugin that listens for events, notices when a model is set to is_ready and is a PyTorch model, runs some code to attempt to serialize it, and then stores the new, serialized model in the model repository, would that be a Model Registry Store plugin?

  
  
Posted 3 years ago

Really stoked to start using it and introduce a more sane ML ops workflow at my workplace lol.

Totally with you 🙂

... would that be a Model Registry Store plugin?

YES please ❤
So we actually just introduced "Applications" into the ClearML free tier: https://app.community.clear.ml/applications
This lets you take any Task in the system and make it an "application" (a python script running on one of the service agents), configure it with a wizard (the wizard definition is a JSON defining the different steps and mapping them into the Task parameters/configuration), and add reports as well (with the same Task logger interface).
This would allow you to write a Task that listens (polls) on the model repository state; when it detects a new model, it could launch (enqueue) another Task.
wdyt? Is this what you had in mind?
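The polling pattern described here could be sketched roughly as below. This is a hedged, minimal sketch, not an official ClearML application: it assumes the `clearml` package is installed and configured, and the project name, template task id, and queue name are placeholders. `Model.query_models`, `Task.clone`, `Task.enqueue`, and `set_parameter` are real SDK calls, but the exact wiring is an assumption.

```python
# Hedged sketch: a service Task that polls the model repository and
# enqueues a follow-up Task for every unseen model.
import time


def new_model_ids(current_ids, seen_ids):
    """Pure helper: return the ids in current_ids not yet in seen_ids,
    preserving order."""
    return [mid for mid in current_ids if mid not in seen_ids]


def poll_model_repository(project_name, template_task_id,
                          queue="services", interval_sec=60):
    # Imported lazily so new_model_ids() works without clearml installed.
    from clearml import Model, Task
    seen = set()
    while True:
        models = Model.query_models(project_name=project_name)
        for model_id in new_model_ids([m.id for m in models], seen):
            seen.add(model_id)
            # Clone a template Task, point it at the new model, and
            # enqueue it for an agent to pick up.
            task = Task.clone(source_task=template_task_id)
            task.set_parameter("Args/model_id", model_id)
            Task.enqueue(task, queue_name=queue)
        time.sleep(interval_sec)
```

The "Args/model_id" parameter name is just a placeholder convention for passing the detected model into the cloned Task.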

  
  
Posted 3 years ago

Yeah! I just wanted to make sure that it made sense to tag the models for production use and then have them loaded right out of the model repository and into the production service. As I've looked around at the API it definitely seems to support that use case. Really stoked to start using it and introduce a more sane ML ops workflow at my workplace lol.
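The "tag for production, load in the service" flow mentioned here might look roughly like this. A hedged sketch: it assumes `clearml` is installed and configured, and that "production" is a tag convention you define yourself (not a ClearML built-in). `Model.query_models` and `get_local_copy()` are real SDK calls.

```python
# Hedged sketch: pick a "production"-tagged model from the repository
# and download its weights for a serving process.


def choose_model(models, prefer_tag="production"):
    """Pure helper: given [(model_id, tags), ...] pairs, return the first
    id carrying prefer_tag, else the first id, else None."""
    for model_id, tags in models:
        if prefer_tag in tags:
            return model_id
    return models[0][0] if models else None


def load_production_model(project_name):
    # Imported lazily so choose_model() works without clearml installed.
    from clearml import Model
    candidates = Model.query_models(project_name=project_name,
                                    tags=["production"])
    model_id = choose_model([(m.id, m.tags or []) for m in candidates])
    if model_id is None:
        return None
    # Downloads the weights file and returns its local path; actually
    # loading it (e.g. torch.load) is up to the caller.
    return Model(model_id=model_id).get_local_copy()
```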

  
  
Posted 3 years ago

Hi ShallowArcticwolf27
First of all:

If the answer to number 2 is no, I'd loveee to write a plugin.

Always appreciated ❤

Now actually answering the Q:
Any torch.save (or any other framework save) will either register or automatically upload the file (or folder) in the system. If it's a folder it will be zipped and uploaded; if it's a file, it's uploaded to the assigned storage output (the clearml-server, any object storage service, or a shared folder).
I'm not actually sure I followed the second Q, but I'll first try to give some background. All models are stored in the model repository (a per-project repository); you can then access / query the model repository and fetch (download) your model. Actually using the models (i.e. loading) is up to the user.
Make sense?
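The auto-capture behaviour described here could be sketched as below. A minimal, hedged example: it assumes `clearml` and `torch` are installed and configured, and the project/task names are placeholders. With a Task initialized with `output_uri=True`, a plain `torch.save()` is intercepted by ClearML's framework hooks and the file is uploaded to the Task's assigned storage output.

```python
# Hedged sketch: torch.save() under an active ClearML Task is captured
# as an output model in the repository.


def checkpoint_name(epoch):
    """Pure helper: conventional checkpoint filename for a given epoch."""
    return "checkpoint_epoch_%d.pt" % epoch


def train_and_save():
    # Imported lazily so checkpoint_name() works without clearml/torch.
    import torch
    from clearml import Task
    task = Task.init(project_name="examples", task_name="save-demo",
                     output_uri=True)  # True -> upload to the files server
    model = torch.nn.Linear(4, 2)
    # This save is picked up by ClearML's hooks; the resulting file is
    # registered in the model repository and uploaded.
    torch.save(model.state_dict(), checkpoint_name(1))
```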

  
  
Posted 3 years ago

If the answer to number 2 is no, I'd loveee to write a plugin.

  
  
Posted 3 years ago