I just followed the instructions here at https://github.com/allegroai/clearml-serving
In the end it says I can curl the endpoint, and it mentions the serving-engine-ip, but I can't find that IP anywhere.
It was working fine for a while but then it just failed.
I've tried the IP of the ClearML Server and the IP of my local machine, which the agent is also running on, and neither of them works.
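If the serving engine is running as a Docker container on the agent machine, one way to check its address is to inspect the container directly. A minimal sketch, assuming a docker-compose style setup — the container name `clearml-serving-triton` is a placeholder; the actual name depends on how it was launched:

```shell
# List running containers with their published ports,
# to identify which one is the serving engine
docker ps --format '{{.Names}}\t{{.Ports}}'

# Print the container's internal IP on its Docker network
# (replace clearml-serving-triton with your actual container name)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' clearml-serving-triton
```

Note that the internal Docker-network IP is only reachable from the host (or other containers on the same network); from another machine you would use the host's IP plus the published port shown by `docker ps`.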
Basically I want to be able to serve a model and also send it requests for inference.
Anyway, I restarted the Triton serving engine.
Yeah I think I did. I followed the tutorial on the repo.
I think the serving engine IP depends on how you set it up.
I've never done anything like this before, and I'm unsure about the whole process, from successfully serving the model to sending it requests for inference. Is there a tutorial or example for it?
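For what it's worth, the clearml-serving README shows sending inference requests with curl. A minimal sketch along those lines, assuming the default docker-compose setup exposes the inference service on localhost port 8080 — the endpoint name `test_model` and the feature names `x0`/`x1` are placeholders, not values from this conversation:

```shell
# Send a JSON inference request to a served model endpoint
# (endpoint name and payload fields are assumptions; adjust to your model)
curl -X POST "http://127.0.0.1:8080/serve/test_model" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d '{"x0": 1, "x1": 2}'
```

If this returns a connection error, the serving container is likely not reachable at that address, which points back to the serving-engine-ip question above.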