Hi, is there a way to leverage ClearML to run an ML inference container that does not terminate?
Can clearml-serving do a helm install or upgrade?
Not sure I follow, how would a Helm chart install be part of running the ML inference? clearml-serving is installed via a Helm chart, but that is a one-time setup: you install clearml-serving once, and from then on you send models to be served there via the CLI / Python. It's not a "deployment per model" scenario, but a single deployment serving multiple models, dynamically loaded.
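As a rough sketch of that flow (the repo URL, release name, flags, and IDs below are illustrative assumptions; check the clearml-helm-charts and clearml-serving documentation for the exact commands and values):

```shell
# One-time setup: install clearml-serving into the cluster via Helm.
# Repo URL, release name, and the servingTaskId value are assumptions.
helm repo add clearml https://clearml.github.io/clearml-helm-charts
helm repo update
helm install clearml-serving clearml/clearml-serving \
  --set clearml.servingTaskId="<your-serving-service-id>"

# Afterwards, models are added dynamically via the CLI,
# without redeploying the serving container:
clearml-serving --id "<your-serving-service-id>" model add \
  --engine sklearn \
  --endpoint "my_model" \
  --model-id "<clearml-model-id>"
```

The point being that the Helm step happens once, while `model add` can be run any number of times against the same long-running deployment.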
2 months ago