Hi, is there a way to leverage ClearML to run an ML inference container that does not terminate?
To clarify: there may be cases where we receive a Helm chart or Kubernetes manifests to deploy an inference service, which is a black box to us.
Users may need to deploy this service on demand to test it against other software components. It requires GPU resources, so a queue system would let them queue up and eventually get the service deployed, instead of hard-allocating resources for this purpose.
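For context, here is a minimal sketch of one way this might be approached, assuming the deployment is wrapped in a ClearML Task so the GPU queue governs when it runs, and the task's process is kept alive for as long as the service should stay up. The project name, queue name, and `helm install` command below are hypothetical placeholders, not a confirmed ClearML pattern:

```python
# Sketch: a ClearML task that deploys a black-box inference service
# via Helm, then blocks so the task (and its GPU queue slot) stays alive.
# Project, queue, and chart names are hypothetical placeholders.
import subprocess
import time

from clearml import Task

task = Task.init(
    project_name="inference-services",      # hypothetical project
    task_name="deploy-blackbox-service",
)

# Enqueue on a GPU queue instead of running locally; a clearml-agent
# serving that queue picks the task up when resources free up.
task.execute_remotely(queue_name="gpu-queue", exit_process=True)

# On the agent: deploy the vendor-supplied chart (a black box to us).
subprocess.run(
    ["helm", "install", "blackbox-inference", "./vendor-chart"],
    check=True,
)

# Keep the task running so it is not marked completed; the allocation
# is held until someone aborts the task, at which point the service
# could be torn down (e.g. via `helm uninstall`).
while True:
    time.sleep(60)
```

Whether a long-running task like this plays well with your agents' timeout and abort handling is the part I'm unsure about, hence the question.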