
I'd been following the clearml-serving example on its GitHub repo here. It basically deploys a Keras MNIST model. However, the tutorial ends once the model is deployed, and I've tried going through resources on how to do inference but have had trouble understanding them. Is there a simple way to send inference requests to the server? Or do I just read through the NVIDIA Triton docs on gRPC etc.?

Posted 3 years ago

Answers 7


For anyone who's struggling with this, here is how I solved it. I'd personally never worked with gRPC, so I looked at the HTTP docs instead, and that interface was much simpler to use.


Done it.


Thank you!!!


This is the simplest I could get the inference request. The model name and the input and output names are the ones the server expects.
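The snippet from this answer didn't survive, so here is a minimal sketch of what such a request can look like, assuming the server exposes Triton's HTTP/REST v2 inference API on port 8000. The model, input, and output names (`keras_mnist`, `input_1`, `dense_1`) are placeholders, not taken from the tutorial; check what your server actually reports via `GET /v2/models/<model_name>` and substitute accordingly.

```python
import json
import urllib.request

# Hypothetical names -- replace with the ones your server reports
# via GET /v2/models/<model_name> (these are assumptions).
MODEL_NAME = "keras_mnist"
INPUT_NAME = "input_1"
OUTPUT_NAME = "dense_1"


def build_infer_request(pixels):
    """Build a Triton v2 HTTP inference payload for one 28x28 MNIST image.

    `pixels` is a flat list of 784 floats in [0, 1].
    """
    body = {
        "inputs": [{
            "name": INPUT_NAME,
            "shape": [1, 28, 28, 1],   # batch of 1, HxWxC as Keras expects
            "datatype": "FP32",
            "data": pixels,
        }],
        "outputs": [{"name": OUTPUT_NAME}],
    }
    return json.dumps(body).encode("utf-8")


def send_infer_request(host="localhost", port=8000):
    """POST the payload to Triton's v2 infer endpoint and return the JSON reply."""
    url = f"http://{host}:{port}/v2/models/{MODEL_NAME}/infer"
    payload = build_infer_request([0.0] * 784)  # dummy all-black image
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Reply shape: {"outputs": [{"name": ..., "data": [10 class scores]}]}
        return json.loads(resp.read())


if __name__ == "__main__":
    print(send_infer_request())
```

The payload format follows the KServe/Triton v2 REST protocol, so the same structure works for any model once you swap in the right names, shape, and datatype.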


Thanks VexedCat68 !
This is a great example, maybe PR it to the clearml-serving repo? wdyt?


I'll take a look.
