This is odd, how are you spinning up clearml-serving?
You can also do it synchronously:
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
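Put together, the two calls above live in the pipeline's preprocess class. Here is a minimal, self-contained sketch (the class layout and return shape are assumptions for illustration; at serving time `send_request` is provided by clearml-serving's preprocess base class, so it is stubbed here only so the sketch runs standalone):

```python
class Preprocess:
    """Sketch of a clearml-serving pipeline preprocess class.

    Endpoint names are taken from the example above. In a real
    deployment, send_request is injected by clearml-serving and
    forwards the payload to the named endpoint; the stub below
    only echoes its arguments so the sketch is runnable.
    """

    def send_request(self, endpoint, version, data):
        # Stub standing in for clearml-serving's injected method.
        return {"endpoint": endpoint, "prediction": [0.0]}

    def postprocess(self, data, state=None, collect_custom_statistics_fn=None):
        # Call both model endpoints synchronously, one after the other.
        predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
        predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
        # Combine the two child predictions however the pipeline needs.
        return {"a": predict_a, "b": predict_b}
```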
So this is the issue: you changed the host port to 8081 instead of 8080.
I’m running it through docker-compose. Tried both with and without Triton.
Hmm, still facing the same issue actually…
print("This runs!")
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
print("Doesn't get here", predict_a, predict_b)
And still, hitting the endpoints independently using curl works. Are you able to replicate this?
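For reference, hitting one model endpoint directly looks roughly like this (host, port, and payload are assumptions based on the sklearn example, and the command obviously needs the serving service running):

curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn_a" \
  -H "Content-Type: application/json" \
  -d '{"x0": 1, "x1": 2}'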
What do you have here in your docker-compose? You need to set
CLEARML_DEFAULT_BASE_SERVE_URL
so the serving container knows how to access itself.
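For example, the relevant compose section would look roughly like this (a sketch; the exact service name and default value are assumptions, so verify against your own file):

services:
  clearml-serving-inference:
    environment:
      # URL the serving container uses to call its own endpoints,
      # e.g. when a pipeline endpoint forwards requests to model endpoints
      CLEARML_DEFAULT_BASE_SERVE_URL: http://127.0.0.1:8080/serve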
Also what do you have in the "Configuration" section of the serving inference Task?
Hi @<1523701205467926528:profile|AgitatedDove14> , I already did the scikit-learn example, and it works.
Also, both endpoint_a
and endpoint_b
work when hitting them directly within the pipeline example, but the pipeline endpoint itself does not.
Hi @<1547028116780617728:profile|TimelyRabbit96>
Start with the simple scikit-learn example
https://github.com/allegroai/clearml-serving/tree/main/examples/sklearn
The pipeline example is more complicated, it needs the base endpoints, start simple 😃
- Haven’t changed it, although I did change the host port to 8081 instead of 8080? Everything else seems to work fine though.
- Sorry what do you mean? I basically just followed the tutorial
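The port change described above is the likely culprit: mapping host port 8081 to the container's 8080 still leaves the container listening on 8080 internally, so the URL the service uses to call itself must keep the container-internal port. A sketch of the relevant compose fragment (values assumed for illustration):

services:
  clearml-serving-inference:
    ports:
      - "8081:8080"   # host 8081 -> container 8080
    environment:
      # Must point at the container-internal address and port (8080),
      # not the remapped host port 8081.
      CLEARML_DEFAULT_BASE_SERVE_URL: http://127.0.0.1:8080/serve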
ahh yepp, that makes sense! Thank you so much!