Answered
Hi there, I’ve been trying to play around with the model inference pipeline following

Hi there, I’ve been trying to play around with the model inference pipeline following this guide. I am able to do all of the steps (register the models), but when trying to get an inference using curl (step 5), I don’t really get an inference, and the command is just stuck waiting for something.
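For reference, the step-5 request looks something like this (endpoint name as registered by the pipeline example; adjust the host and port to your setup):

curl -X POST "http://127.0.0.1:8080/serve/test_model_pipeline" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'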

I tried adding logs to see where it’s stuck, and it seems like the results of model_a and model_b (which are futures) never get resolved. I wanted to see what self.send_request looks like after it’s overridden by the inference engine, but could not figure it out.

Tried with & without Triton as well. Any ideas?

The two send_request calls (lines 26-27 of the example’s preprocess code) never get resolved.
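For context, a rough sketch of the relevant part of the pipeline example’s preprocess code (simplified; self.send_request is injected by the serving engine at runtime, and the exact post-processing of the two results is omitted):

from concurrent.futures import ThreadPoolExecutor

class Preprocess(object):
    def __init__(self):
        # the example dispatches the two sub-model requests on a thread pool
        self.executor = ThreadPoolExecutor(max_workers=32)

    def process(self, data, state, collect_custom_statistics_fn=None):
        # the two submit calls: each returns a concurrent.futures.Future
        predict_a = self.executor.submit(self.send_request, endpoint="/test_model_sklearn_a/", version=None, data=data)
        predict_b = self.executor.submit(self.send_request, endpoint="/test_model_sklearn_b/", version=None, data=data)
        # .result() blocks until the future resolves; this is where the request hangs
        return [predict_a.result(), predict_b.result()]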

  
  
Posted one year ago

10 Answers


This is odd, how are you spinning up clearml-serving?
You can also do it synchronously, calling send_request directly instead of submitting it to the executor:

# blocking calls: each returns the sub-request's result directly, no future to resolve
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
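Note that in this synchronous variant the two sub-requests run sequentially, each call blocking until its result comes back, instead of in parallel on the executor.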
  
  
Posted one year ago

"I did change the host port to 8081 instead of 8080?"

So this is the issue: the serving service is still trying to reach itself on the default port.

  
  
Posted one year ago

I’m running it through docker-compose. Tried both with and without Triton.

Hmm, still facing the same issue actually…

print("This runs!")
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)

print("Doesn't get here", predict_a, predict_b)

And still, hitting the endpoints independently using curl works. Are you able to replicate this?
  
  
Posted one year ago

What do you have here in your docker-compose (the environment configuration of the serving containers)?

  
  
Posted one year ago

You need to set

CLEARML_DEFAULT_BASE_SERVE_URL

so it knows how to access itself.
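For example, a minimal sketch assuming the default docker-compose layout (the exact value depends on your port mapping; here the host port was remapped to 8081):

  clearml-serving-inference:
    environment:
      # must match the externally reachable serve URL, including the remapped port
      CLEARML_DEFAULT_BASE_SERVE_URL: http://127.0.0.1:8081/serve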

  
  
Posted one year ago

Also what do you have in the "Configuration" section of the serving inference Task?

  
  
Posted one year ago

Hi @<1523701205467926528:profile|AgitatedDove14> , I already did the scikit-learn examples and they work.

Also, both endpoint_a and endpoint_b work when hitting them directly within the pipeline example, but the pipeline itself does not.

  
  
Posted one year ago

Hi @<1547028116780617728:profile|TimelyRabbit96>
Start with the simple scikit learn example
https://github.com/allegroai/clearml-serving/tree/main/examples/sklearn
The pipeline example is more complicated; it needs the base endpoints up first, so start simple 😃
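For reference, registering and querying the sklearn example goes roughly like this (a sketch per the example’s readme; <service_id> stands for your serving service’s Task ID):

clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --project "serving examples"
curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'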

  
  
Posted one year ago

  • Haven’t changed it, although I did change the host port to 8081 instead of 8080? Everything else seems to work fine, though.
  • Sorry, what do you mean? I basically just followed the tutorial.
  
  
Posted one year ago

ahh yepp, that makes sense! Thank you so much!

  
  
Posted one year ago