Is the only available resource to learn and use ClearML-Serving the GitHub repo, or are there other resources as well? Also, once the model is served, it says I can curl the endpoint and mentions <serving-engine-ip>, but I have no idea what that is

Is the only available resource to learn and use ClearML-Serving the GitHub repo, or are there other resources as well?

Also, in the repo, once the model is served, it says I can curl the endpoint, and it mentions <serving-engine-ip>, but I have no idea what the serving engine IP is.
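For context, <serving-engine-ip> is just a placeholder for whichever host is running the serving-engine container(s), not the ClearML Server. A minimal sketch, assuming the docker-compose stack from the repo runs on your own machine (so the IP is 127.0.0.1) and using the endpoint name and port from the repo's current sklearn example; all values here are assumptions to adapt to your deployment:

```shell
SERVING_IP=127.0.0.1            # <serving-engine-ip>: host running the serving containers
ENDPOINT=test_model_sklearn     # endpoint name from the repo's sklearn example
URL="http://${SERVING_IP}:8080/serve/${ENDPOINT}"
echo "$URL"
# With the engine up, the request from the README looks like:
#   curl -X POST "$URL" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'
```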

Posted 2 years ago

Answers 11


I just followed the instructions here at https://github.com/allegroai/clearml-serving

In the end it says I can curl the endpoint and mentions the serving-engine-ip, but I can't find the IP anywhere.

Posted 2 years ago

Is it the IP of the agent?

Posted 2 years ago

It was working fine for a while but then it just failed.

Posted 2 years ago

I've tried the IP of the ClearML Server and the IP of my local machine (on which the agent is also running), and neither works.

Posted 2 years ago

Basically I want to be able to serve a model and also send requests to it for inference.

Posted 2 years ago

Anyway, I restarted the Triton serving engine.

Posted 2 years ago

Did you raise a serving engine?

Posted 2 years ago

Yeah I think I did. I followed the tutorial on the repo.

Posted 2 years ago

[image attachment]

Posted 2 years ago

I think the serving engine IP depends on how you set it up.

Posted 2 years ago

I've never done something like this before, and I'm unsure about the whole process, from successfully serving the model to sending requests to it for inference. Is there any tutorial or example for it?
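The flow the repo describes boils down to: serve the model, then POST a JSON feature payload to the endpoint and read back the prediction. A hedged sketch of the request side, using the endpoint name, port, and feature keys from the repo's sklearn example; the IP and the exact payload shape depend on your deployment and model:

```shell
# Hypothetical inference request; requires the serving engine to be running.
# -m 5 caps the wait at 5 seconds so the command fails fast if nothing listens.
PAYLOAD='{"x0": 1, "x1": 2}'
curl -m 5 -sS -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" \
     -H "Content-Type: application/json" \
     -d "$PAYLOAD" || echo "request failed: is the serving engine running?"
```

The response is a JSON object with the model's prediction; if the connection is refused, the serving containers either aren't up or are listening on a different host or port.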

Posted 2 years ago