Hi there, I've been trying to play around with the model inference pipeline following
This is odd, how are you spinning up clearml-serving?
You can also do it synchronously :
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
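The two `send_request` calls above would typically live inside a clearml-serving custom `Preprocess` class, where `send_request` is injected by the serving framework at runtime. A minimal sketch of that pattern follows; the stubbed `send_request`, the endpoint names, and the way the two predictions are combined are illustrative assumptions, not the actual clearml-serving internals:

```python
from typing import Any, Optional

class Preprocess:
    """Sketch of a clearml-serving custom preprocessing class that
    queries two model endpoints synchronously and merges the results."""

    def send_request(self, endpoint: str, version: Optional[str], data: Any):
        # Stub for illustration only: in a real deployment clearml-serving
        # patches this method with an actual call to the target endpoint.
        return {"endpoint": endpoint, "prediction": [0.5]}

    def process(self, data: Any, state: dict) -> Any:
        # Call both model endpoints one after the other (synchronously).
        predict_a = self.send_request(
            endpoint="/test_model_sklearn_a/", version=None, data=data)
        predict_b = self.send_request(
            endpoint="/test_model_sklearn_b/", version=None, data=data)
        # Combine the two predictions however the use case requires.
        return {"a": predict_a, "b": predict_b}


result = Preprocess().process(data={"x": [1, 2, 3]}, state={})
```

Because each call blocks until its endpoint responds, `predict_b` is only requested after `predict_a` has returned, which keeps the control flow simple at the cost of added latency.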
one year ago