I'm trying to understand how ClearML Serving works and trying to set it up. I have an agent listening to the serving queue and I'm trying to set up ClearML Serving to launch on the serving queue. Do I need to have clearml-serving installed on the machine on which I'm running the agent?

I'm trying to understand how ClearML Serving works and trying to set it up. I have an agent listening to the serving queue, and I'm trying to set up clearml-serving to launch on the serving queue. Do I need to have clearml-serving installed on the machine on which I'm running the agent? Also, I'm getting this error.
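(For context, the agent was attached to the queue roughly like this; a minimal sketch of the command, assuming the "serving" queue name above:)

# pull tasks from the "serving" queue; --detached keeps the agent running in the background
clearml-agent daemon --queue serving --detached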

  
  
Posted 2 years ago

5 Answers


Can you tell me what the serving example is, in terms of the explanation above, and what the Triton serving engine is?

Great idea!

This line actually creates the control Task (2)
clearml-serving triton --project "serving" --name "serving example"
This line configures the control Task (the idea is that you can do that even when the control Task is already running, but in this case it is still in draft mode).
Notice that the actual model serving configuration is already stored on the creating Task/Model. Otherwise you have to explicitly provide the model serving configuration, i.e. input matrix size, type, etc. This is the config.pbtxt file; see this example: https://github.com/allegroai/clearml-serving/blob/7c1c02c9ea49c9ee6ffbdd5b59f5fd8a6f78b4e0/examples/keras/keras_mnist.py#L51
clearml-serving triton --endpoint "keras_mnist" --model-project "examples" --model-name "Keras MNIST serve example - serving_model"
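(If you do need to write it yourself, a config.pbtxt for a model like this looks roughly as follows; the tensor names and shapes here are illustrative assumptions, not the values the example actually uses:)

# illustrative config.pbtxt sketch -- tensor names and shapes are assumptions
name: "keras_mnist"
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"     # assumed input tensor name
    data_type: TYPE_FP32
    dims: [ -1, 784 ]       # flattened 28x28 MNIST image
  }
]
output [
  {
    name: "activation_2"    # assumed output tensor name
    data_type: TYPE_FP32
    dims: [ -1, 10 ]        # ten digit classes
  }
]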
Then you launch the control Task (2) (this is the one we just configured; by default it will launch on the services queue, and you can also spin up an additional agent to listen to the services queue).
The control Task is actually the Task that creates the serving Task, and enqueues it.
(The idea is that it will do auto load balancing, based on serving performance, right now it is still static).
To control the way the serving Task is created and enqueued, check the full help:

clearml-serving --help
clearml-serving triton --help
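For example, if you want the serving process on a custom queue instead of the default, I believe the launch step takes a queue argument, something like the sketch below (double-check against the help output above):

# enqueue the serving process on a custom "serving" queue instead of the default
clearml-serving launch --queue serving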

  
  
Posted 2 years ago

I'm assuming the Triton serving engine is running on the serving queue in my case. Is the serving example also running on the serving queue, or is it running on the services queue? And lastly, I don't have a ClearML agent listening to the services queue; does ClearML do this on its own?
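(For reference, my understanding is that the usual way to get an agent on the services queue would be something like the following; --services-mode is the flag I believe is intended for that queue, so treat this as a sketch:)

# an agent dedicated to the services queue, running each task in its own container
clearml-agent daemon --queue services --services-mode --docker --detached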

  
  
Posted 2 years ago

Also, the steps say that I should run the serving process on the default queue, but I've run it on a queue I created called "serving", and I have an agent listening to it.

  
  
Posted 2 years ago

Hi VexedCat68
Yes, the serving setup is a bit complicated. Let me try to explain the underlying setup before going into more details.

clearml-serving CLI -> a tool to launch / set up. It does the configuration and enqueuing, not the actual serving.
Control plane Task -> stores the state of the serving (i.e. which endpoints need to be served, what models are used, collects stats). This Task has no actual communication with the serving requests/replies. (Running on the services queue.)
Serving Task -> the actual Task doing the serving (supports multiple instances). This is where the requests are routed to, and where the inference happens. It pulls the configuration from the control plane Task and configures itself based on it. It also reports stats back to the control plane on its performance. This is where the Triton engine is running: inside the Triton container, with clearml running inside the same container, pulling the actual models and feeding them to the Triton server. (Running on a GPU/CPU queue.)
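To make it concrete, once the serving Task is up, requests go directly to the Triton HTTP endpoint; a sketch, assuming Triton's default port 8000 and a "keras_mnist" endpoint with a 784-float input named "dense_input" (those names/shapes are illustrative):

# build a KServe-v2-style payload with 784 zero pixels (illustrative input name/shape)
python -c 'import json; print(json.dumps({"inputs": [{"name": "dense_input", "shape": [1, 784], "datatype": "FP32", "data": [0.0]*784}]}))' > payload.json
# POST it to the Triton inference endpoint for the "keras_mnist" model
curl -X POST http://localhost:8000/v2/models/keras_mnist/infer -H "Content-Type: application/json" -d @payload.json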
Does that make sense?

  
  
Posted 2 years ago

As I wrap my head around that: in terms of the example given in the repo, can you tell me what the serving example is and what the Triton serving engine is, in the context of the explanation above?

  
  
Posted 2 years ago