Hi, is there a way to leverage ClearML to run an ML inference container that does not terminate?
Thanks @<1523701205467926528:profile|AgitatedDove14>. What I could think of is to write a task that runs a Python subprocess to do "helm install". In that Python script, we could point to / download the Helm chart from somewhere (e.g. NFS, S3).
Does this sound right to you?
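A minimal sketch of that approach, assuming the chart is packaged as a .tgz in S3; the project/task names, bucket path, release name, and namespace below are hypothetical placeholders, not anything from this thread:

```python
import subprocess

from clearml import StorageManager, Task

# Register the work as a ClearML task so it shows up in the UI and can be scheduled by an agent.
task = Task.init(project_name="inference", task_name="helm-install-inference")

# Fetch the packaged Helm chart from remote storage (S3/NFS); StorageManager caches it locally.
chart_path = StorageManager.get_local_copy(
    remote_url="s3://my-bucket/charts/inference-0.1.0.tgz"  # hypothetical chart location
)

# Run "helm install" as a subprocess; check=True fails the task on a non-zero exit code.
result = subprocess.run(
    ["helm", "install", "inference-release", chart_path, "--namespace", "serving"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```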
One thing I was wondering is whether we could pass the Helm charts/files when we use the ClearML SDK, so we could skip the step of pushing them to NFS/S3.
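On that second point, one hypothetical sketch is to attach the chart to the task as an artifact via the ClearML SDK instead of pushing it to NFS/S3 first (the file path and artifact name are placeholders):

```python
from clearml import Task

task = Task.init(project_name="inference", task_name="helm-install-inference")

# Upload the packaged chart as a task artifact; the executing side could later retrieve it,
# e.g. via task.artifacts["helm-chart"].get_local_copy(), and point "helm install" at that path.
task.upload_artifact(name="helm-chart", artifact_object="charts/inference-0.1.0.tgz")
```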