Hi there, I've been trying to play around with the model inference pipeline following…
I'm running it through docker-compose. Tried both with and without Triton.
Hmm, still facing the same issue actually…
print("This runs!")
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
print("Doesn't get here", predict_a, predict_b)
And still, hitting the endpoints independently with curl works. Are you able to replicate this?
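
In case it helps, here's a minimal sketch of what I assume send_request boils down to on my side (the base URL, class name, and payload shape are placeholders, not the pipeline's actual code); adding a timeout at least turns the hang into an exception instead of blocking forever:

import requests

class InferenceClient:
    # Placeholder for whatever host/port docker-compose exposes
    base_url = "http://localhost:8080"

    def send_request(self, endpoint, version=None, data=None, timeout=10):
        # POST the payload to the model endpoint; the timeout makes a hang
        # fail fast with requests.exceptions.Timeout instead of blocking.
        url = self.base_url + endpoint
        params = {"version": version} if version is not None else None
        response = requests.post(url, json=data, params=params, timeout=timeout)
        response.raise_for_status()
        return response.json()

If the timeout fires here as well, that would confirm the requests from inside the pipeline never complete, rather than the models just being slow.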