
Hi Community!
I'm currently trying to serve my AI model using clearml-serving so I can access and try my model through the model endpoint. Currently, the dataflow of clearml-serving as I understand it looks like diagram 1 (Model as a REST Service). However, I want to change the dataflow implementation from diagram 1 (Model as a REST Service) to diagram 2 (Model alongside the pipeline). Is this possible, and if so, how?

Posted 2 years ago

7 Answers


Yes, that makes sense, thank you for your help, AgitatedDove14

Posted 2 years ago

Oh I see, this seems like a Triton configuration issue; usually dim -1 means flexible. I can also mention that clearml-serving 1.1 should be released later this week with better multiple-input support for Triton. Does that make sense?
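For reference, a minimal sketch of what a Triton config.pbtxt with a flexible first dimension could look like; the model name, backend, and tensor names/types here are assumptions, not taken from this thread:

```
name: "my_model"                   # hypothetical model name
platform: "tensorflow_savedmodel"  # assumption: use your actual backend
max_batch_size: 0                  # 0 => dims below describe the full input shape
input [
  {
    name: "input"                  # hypothetical tensor name
    data_type: TYPE_FP32
    dims: [ -1, 60, 1 ]            # -1 lets the first dimension vary per request
  }
]
output [
  {
    name: "output"                 # hypothetical tensor name
    data_type: TYPE_FP32
    dims: [ -1, 1 ]                # assumed output shape
  }
]
```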

Posted 2 years ago

If this is the case, why not have the stream process call the REST API and then move forward with the result? This way it scales out of the box. The main "conceptual" difference is that the REST API is used internally, and the upside is that the event-stream processing becomes part of the application layer, not tied to the compute cost of the model. Wdyt?
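Roughly what I mean, as a Python sketch; the endpoint URL, payload layout, and batch size are placeholders, not the actual clearml-serving schema:

```python
# Sketch of the idea: the stream processor lives in the application layer
# and calls the model's REST endpoint per micro-batch. URL and payload
# layout below are placeholders, not the real clearml-serving schema.
import requests

SERVING_URL = "http://serving-host:8080/serve/my_model"  # hypothetical endpoint

def score(batch):
    # The serving instance scales independently of this stream process
    resp = requests.post(SERVING_URL, json={"inputs": batch}, timeout=30)
    resp.raise_for_status()
    return resp.json()

def process_stream(events, batch_size=32):
    """Consume events, micro-batch them, and score each batch over REST."""
    batch = []
    for event in events:
        batch.append(event["features"])  # assumed event layout
        if len(batch) >= batch_size:
            yield score(batch)
            batch = []
    if batch:  # flush the remaining partial batch
        yield score(batch)
```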

Posted 2 years ago

MoodyCentipede68, is diagram 2 a batch-processing workflow?

Posted 2 years ago

Hello AgitatedDove14, based on the picture below, I think it's stream processing, not batch.

And the executor does the preprocessing and creates the data to fit the model.

Posted 2 years ago

And does the Executor actually run something, or is it just IO?

Posted 2 years ago

Actually AgitatedDove14, let me try to explain my problem more clearly.

When I'm trying to serve my model with clearml-serving, the expected input size for my AI model is always [1, 60, 1]. What I need is for the model served by clearml-serving to accept the input size dynamically. Is there any solution for the model to receive the input size dynamically (especially the first dimension), like [10, 60, 1] or [23000, 60, 1], etc.?
Here are some diagrams to help me explain this.
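For example, this is what I would like to work once the first dimension is flexible; the URL and payload keys here are placeholders, not the actual serving schema:

```python
# Same endpoint, different first dimensions: [10, 60, 1] and [23000, 60, 1].
# URL and payload keys are placeholders.
import numpy as np
import requests

SERVING_URL = "http://serving-host:8080/serve/my_model"  # hypothetical endpoint

for n in (10, 23000):
    payload = np.random.rand(n, 60, 1).tolist()  # batch of shape [n, 60, 1]
    resp = requests.post(SERVING_URL, json={"inputs": payload}, timeout=60)
    print(n, resp.status_code)
```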

Posted 2 years ago