Answered
Hi, I've been trying to get familiar with ClearML and to that end I've been working on getting the CatBoost example (

Hi,

I've been trying to get familiar with ClearML and to that end I've been working on getting the CatBoost example ( clearml/examples/frameworks/catboost/catboost_example.py ) working on clearml-serving. There isn't a serving engine option for CatBoost, so I've added a preprocess.py file with a load function that loads the model with CatBoost. That seems to require adding catboost to CLEARML_EXTRA_PYTHON_PACKAGES for the clearml-serving-inference container.
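For reference, the load-function approach described above can be sketched roughly like this. This is a minimal sketch following the layout of the clearml-serving custom-engine example: the `Preprocess` class name and the `load`/`preprocess`/`process`/`postprocess` hooks come from that example, but exact method signatures may differ between clearml-serving versions, and the `"features"` request key is a made-up assumption for illustration.

```python
# Sketch of a preprocess.py for a clearml-serving custom-engine endpoint.
# Hook names follow the custom-engine example; signatures and the
# "features" request key are assumptions - check your clearml-serving
# version's example before relying on them.
from typing import Any


class Preprocess(object):
    def __init__(self):
        self._model = None  # populated by load()

    def load(self, local_file_name: str) -> None:
        # Import inside load() so this file parses even before catboost
        # is installed in the inference container.
        from catboost import CatBoost
        self._model = CatBoost()
        self._model.load_model(local_file_name)

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # Pull the feature vector out of the request body (assumed key).
        return body.get("features", [])

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # Custom engine: we run the prediction ourselves.
        return self._model.predict([data]).tolist()

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        return {"prediction": data}
```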

Is this the right track for serving this example? And if so, does the inference container need to have all the dependencies for every endpoint it serves together, or is there a way to have separate environments referencing the requirements from the training task?

  
  
Posted one month ago

Answers 5


Sorry IntriguedGoldfish14, just noticed your reply.
Yes, two inference containers, running simultaneously on the cluster. As you said, each one with its own environment (assuming here that the requirements of the models collide).
Makes sense?

  
  
Posted one month ago

I see, but to actually serve both models/sessions at the same time, it would require two inference containers, since each inference container can only serve one session at a time?

  
  
Posted one month ago

Hi IntriguedGoldfish14
Yes, the way to do that is to use the custom engine example as you did; you are also correct about the env var for adding catboost to the container.
You can of course create your own custom container from the base one and pre-install any required packages, to speed up the container spin-up time.
One of the design decisions was to support multiple models from a single container, which means there needs to be one environment for all of them. The main issue is if some packages collide, but I think this is relatively rare - is this an issue for you?
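The custom-container route mentioned above can be as small as a two-line Dockerfile. This is a hypothetical sketch: the base image name follows the clearml-serving docker-compose setup, but you should pin the exact image and tag you actually deploy.

```dockerfile
# Hypothetical custom inference image: extend the default
# clearml-serving inference image and pre-install the endpoint's
# dependencies so the container spins up faster.
# The base image tag is an assumption - pin the version you deploy.
FROM allegroai/clearml-serving-inference:latest
RUN pip install --no-cache-dir catboost
```

With the dependency baked in, the container no longer needs catboost listed in CLEARML_EXTRA_PYTHON_PACKAGES at startup.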

  
  
Posted one month ago

Yes, that is an issue for me. Even if we could centralize an environment today, it leaves a concern that whenever we add a model, possible package changes will cause issues with older models.

yeah, changing the environment on the fly is tricky; it basically means spinning up an internal HTTP service per model...

Notice you can have many clearml-serving sessions - they are not limited - so you can always spin up a new serving session with a new environment. The limitation is changing an environment on the fly.

Would the recommendation be to spin up multiple inference containers?

kind of, yes: spin up multiple clearml-serving sessions. Essentially each session has its own environment, and in that environment you can add/remove models on the fly.
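The multi-session setup described above could look roughly like this. A hedged sketch only: the service name is made up, and you should check `clearml-serving --help` in your version for the exact flags; the env var names follow the clearml-serving docker-compose files.

```shell
# Create a second serving session (control-plane Task);
# the CLI prints a new service ID. The name is hypothetical.
clearml-serving create --name "catboost-serving"

# Then launch a separate inference container pointed at that service ID,
# with its own environment, e.g. in that container's environment:
#   CLEARML_SERVING_TASK_ID=<new service id>
#   CLEARML_EXTRA_PYTHON_PACKAGES=catboost
```

Each inference container then resolves its own package set independently, so colliding requirements never share an environment.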

  
  
Posted one month ago

Hi,
Yes, that is an issue for me. Even if we could centralize an environment today, it leaves a concern that whenever we add a model, possible package changes will cause issues with older models. Also, it would be nice to have a more direct link between the saved model objects and their serving environment.

Would the recommendation be to spin up multiple inference containers? Also, is there a built-in way to separate the preprocessing and model inference into separate containers? Part of the package issue is on the preprocessing side.

  
  
Posted one month ago