Answered
When dumping a model via ClearML Serving, what does ClearML look at to populate the input_size and output_size? I tried to dump an sklearn model, and the input_size and output_size are null. I prefer not to update them separately using the CLI

When dumping a model via ClearML Serving, what will ClearML look at to populate the input_size and output_size? I tried to dump an sklearn model, and the input_size and output_size are null. I prefer not to update them separately using the CLI.

  
  
Posted one year ago

Answers 8


As I understand it, if I'm running an sklearn experiment locally, I can also save the model artifact using joblib.dump. How do I set the artifact's metadata within the source code of the experiment as well, or am I meant to add the metadata separately?
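A minimal sketch of what that could look like: the joblib dump runs locally, and the `upload_with_metadata` helper (names here are hypothetical) shows where a metadata dict can be attached via ClearML's `Task.upload_artifact()`, which accepts a `metadata` argument:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train and dump a small model locally, exactly as in a plain sklearn workflow
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)
joblib.dump(model, "model.joblib")


def upload_with_metadata(path: str):
    """Sketch only: requires a configured ClearML setup to actually run.

    Project/task/artifact names below are made up for illustration.
    """
    from clearml import Task

    task = Task.init(project_name="examples", task_name="sklearn dump")
    task.upload_artifact(
        name="model",
        artifact_object=path,
        # metadata is stored alongside the artifact and shown in the UI
        metadata={"input_size": [4], "output_size": [1]},
    )
```

Call `upload_with_metadata("model.joblib")` from an environment where ClearML is installed and configured; the dump itself works without it.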

  
  
Posted one year ago

No, inputs and outputs are never set automatically 🙂 For Keras, for example, you'll have to specify them using the CLI when creating the endpoint, so Triton knows how to optimise, and also set them correctly in your preprocessing so Triton receives the format it expects.
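For reference, a CLI invocation along these lines is where the shapes get specified (adapted from the clearml-serving Keras/Triton example; the service ID, endpoint, model name, and tensor names are all placeholders):

```shell
clearml-serving --id <service-id> model add \
    --engine triton --endpoint "keras_mnist" \
    --preprocess "preprocess.py" \
    --name "train keras model" --project "serving examples" \
    --input-size 1 784 --input-name "dense_input" --input-type float32 \
    --output-size -1 10 --output-name "activation_2" --output-type float32
```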

  
  
Posted one year ago

Ahhh, I see. Now I know what I was missing. I thought I could skip the preprocessing part. Does this mean that for other engines/frameworks, especially TF/Keras, the serving also sets the input/output based on the preprocessing?

  
  
Posted one year ago

Here is an example of deploying an sklearn model using ClearML serving.

However, please note that sklearn-like models don't have input and output shapes in the same sense as deep-learning models do. Setting the I/O shapes using the CLI is mainly meant for GPU-based deep-learning models, which need to know the sizes for better GPU allocation. For sklearn on CPU, all you have to do is set up your preprocessing.py script so that the incoming data is what model.predict() expects.
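A preprocessing.py for an sklearn endpoint can be as small as the sketch below. The class/method interface follows the `Preprocess` convention used by the clearml-serving examples; the request field names (`x0`, `x1`) are made up:

```python
from typing import Any, List


class Preprocess(object):
    """Hypothetical preprocessing module for an sklearn endpoint."""

    def preprocess(self, body: dict, state: dict,
                   collect_custom_statistics_fn=None) -> List[List[float]]:
        # Turn the incoming JSON body into the 2-D feature array
        # that model.predict() expects.
        return [[body["x0"], body["x1"]]]

    def postprocess(self, data: Any, state: dict,
                    collect_custom_statistics_fn=None) -> dict:
        # Wrap the raw prediction in a JSON-serializable dict
        return {"y": data.tolist() if hasattr(data, "tolist") else list(data)}
```

Since the model only ever sees what `preprocess()` returns, this is where the "shape contract" for an sklearn model effectively lives.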

  
  
Posted one year ago

Alright. Can you at least point me to an example of setting the input-size and output-size via the clearml-serving CLI? I can't find it in the main docs.

  
  
Posted one year ago

Unfortunately, no: as of today, ClearML Serving does not infer input or output shapes from saved models. Maybe you could open an issue on the ClearML Serving GitHub to request it? Preferably with a clear, minimal example; that would be awesome! We'd take it into account for upcoming releases.

  
  
Posted one year ago

Yeah, because I thought it would be able to figure that out from the model file, and I was just missing some code/configuration.

  
  
Posted one year ago

Just to be sure I understand you correctly: you're saving/dumping an sklearn model in the ClearML experiment manager, then want to serve it using ClearML Serving, but you do not wish to specify the model input and output shapes in the CLI?

  
  
Posted one year ago
1K Views