Hello! I'm currently using clearml-server as an artifact manager and clearml-serving for model inference, with each running on separate hosts using Docker Compose. I've successfully deployed a real-time inference model in clearml-serving, configured with…
Hi @<1697056701116583936:profile|JealousArcticwolf24>
Awesome deployment 🤩
Yes, if you need another scalable model-serving setup you can just run another instance of the clearml-serving-inference container:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose.yml#L77
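For illustration only, here is a minimal sketch of what the extra entry under `services:` might look like. The service name `clearml-serving-inference-second`, the host port, and the `CLEARML_SERVING_TASK_ID_2` variable are made-up placeholders; copy the real settings from the linked docker-compose.yml.
```
  # hypothetical second inference service, added next to the existing
  # clearml-serving-inference entry in docker/docker-compose.yml
  clearml-serving-inference-second:
    image: allegroai/clearml-serving-inference:latest
    container_name: clearml-serving-inference-second
    restart: unless-stopped
    ports:
      - "8081:8080"                        # different host port than the first instance
    environment:
      CLEARML_WEB_HOST: ${CLEARML_WEB_HOST:-}
      CLEARML_API_HOST: ${CLEARML_API_HOST:-}
      CLEARML_FILES_HOST: ${CLEARML_FILES_HOST:-}
      CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY:-}
      CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY:-}
      CLEARML_SERVING_TASK_ID: ${CLEARML_SERVING_TASK_ID_2:-}   # unique session for this instance
    depends_on:
      - kafka
```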
So you end up with two of them, one per model environment. Note that each one should have its own unique clearml-serving session:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose.yml#L92
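To keep the sessions separate, you can register a second serving session first (e.g. `clearml-serving create --name "second serving"` prints the new serving service ID) and point each inference service at its own ID. A sketch, again with placeholder variable names:
```
  # each inference container reads a different serving session ID
  clearml-serving-inference:
    environment:
      CLEARML_SERVING_TASK_ID: ${CLEARML_SERVING_TASK_ID:-}     # session for model environment A
  clearml-serving-inference-second:
    environment:
      CLEARML_SERVING_TASK_ID: ${CLEARML_SERVING_TASK_ID_2:-}   # session for model environment B
```
Model endpoints added to one session are then only served by the instance configured with that session's ID.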