To clarify, there might be cases where we get a Helm chart / k8s manifests to deploy an inference service. It's a black box to us.
Users may need to deploy this service wherever needed to test it against other software components. This needs GPU resources, and a queue system would let them queue up the deployment and eventually get it running, instead of hard-allocating resources for this purpose.
@<1523701205467926528:profile|AgitatedDove14> I'm looking at a queue system, which ClearML queues offer, that allows users to queue a job to deploy an app / inference service. This can be as simple as a pod or a more complete Helm chart.
To clarify, there might be cases where we get a Helm chart / k8s manifests to deploy an inference service. It's a black box to us.
I see, in that case, yes, you could use ClearML queues to do that; as long as you have the credentials, the "Task" is basically just a helm deployment task.
You could also have monitoring code there, so that the same Task is pure logic: spinning up the helm chart, monitoring the usage, and taking it down when it's done.
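For illustration, a minimal sketch of what such a Task script could look like, assuming helm is available inside the agent's container and the pod has credentials for the cluster; the chart location, release name and polling interval below are just placeholders:
```python
import subprocess
import time

from clearml import Task

# placeholder names - chart location, release and namespace are assumptions
CHART = "oci://registry.example.com/charts/inference-service"
RELEASE = "inference-service"
NAMESPACE = "inference-test"

task = Task.init(project_name="deployments", task_name="deploy inference service")

# spin up the helm release (helm + cluster credentials must exist in this container)
subprocess.run(
    ["helm", "upgrade", "--install", RELEASE, CHART,
     "--namespace", NAMESPACE, "--create-namespace"],
    check=True,
)

try:
    # "monitoring" stand-in: poll the release status and report it to the Task console
    # a real version would also need a stop condition (time limit, external signal, ...)
    while True:
        status = subprocess.run(
            ["helm", "status", RELEASE, "--namespace", NAMESPACE],
            capture_output=True, text=True, check=True,
        )
        task.get_logger().report_text(status.stdout)
        time.sleep(60)
finally:
    # tear the release down when the script finishes or fails
    subprocess.run(["helm", "uninstall", RELEASE, "--namespace", NAMESPACE], check=False)
```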
@<1523701205467926528:profile|AgitatedDove14> I'm still trying to figure out how to do this. Because when I add a task to a queue, the clearml-agent basically creates a pod with the container. How can I make a task that does a helm install or a kubectl create -f deployment.yaml?
Can clearml-serving do a helm install or upgrade?
Not sure I follow, how would a helm chart install be part of the ML run? I mean, clearml-serving is installed via a helm chart, but this is a "one time" setup, i.e. you install clearml-serving once and then via the CLI / Python you send models to be served there. This is not a "deployment per model" scenario, but one deployment serving multiple models, dynamically loaded.
How can I make a task that does a helm install or a kubectl create -f deployment.yaml?
The Task that it launches should contain your code that actually does the helm deployment and anything else. Think of the Task as a way to launch a script that does something; that script can then interact with the cluster directly. The queue itself (i.e. the clearml-agent) will not deploy helm charts directly, it will only deploy jobs (i.e. pods).
Yeah, that sounds about right. Also, you can store the helm chart file as a configuration object on the Task when creating it, see https://clear.ml/docs/latest/docs/references/sdk/task#set_configuration_object
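To illustrate the link above, a hedged sketch of attaching the chart values at Task-creation time; the project, repo, script and queue names are assumptions, not anything from this thread:
```python
from clearml import Task

# create the deployment Task; project, repo and script names are placeholders
task = Task.create(
    project_name="deployments",
    task_name="deploy inference service",
    repo="https://github.com/example/deploy-scripts.git",
    script="deploy.py",
)

# attach the helm values (or a rendered manifest) to the Task as a configuration object
with open("values.yaml") as f:
    task.set_configuration_object(name="helm_values", config_text=f.read())

# enqueue it so an agent with GPU resources eventually picks it up
Task.enqueue(task, queue_name="gpu-deployments")
```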
Thanks @<1523701205467926528:profile|AgitatedDove14> . What I can think of is to write a task that runs a Python subprocess to do the "helm install". In that Python script, we could point to / download the helm chart from somewhere (e.g. NFS, S3).
Does this sound right to you?
One thing I was wondering is whether we could pass the helm charts / files when we use the ClearML SDK, so we could skip the step of pushing them to NFS/S3.
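If that route works, the execution side could be roughly the following (again only a sketch, assuming the values were attached as a configuration object named "helm_values" and that the chart itself lives in a chart repo; helm must be available in the container):
```python
import subprocess
import tempfile

from clearml import Task

task = Task.init(project_name="deployments", task_name="deploy inference service")

# read back the values attached with set_configuration_object when the Task was created
values_yaml = task.get_configuration_object("helm_values")

with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(values_yaml)
    values_path = f.name

# the chart itself still has to come from a chart repo / S3 / NFS;
# only the values file travels with the Task
subprocess.run(
    ["helm", "upgrade", "--install", "inference-service",
     "oci://registry.example.com/charts/inference-service",
     "--values", values_path],
    check=True,
)
```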
Hi @<1523701304709353472:profile|OddShrimp85>
You mean something like clearml-serving ?
This can be as simple as a pod or a more complete Helm chart.
True, and this could be good for batch processing, but if you want a REST API service then clearml-serving is probably a better fit.
does that make sense ?
Can clearml-serving do a helm install or upgrade? We have cases where the ML models do not come from the ML experiments in ClearML, but we would still like to tap into ClearML queues to enable resource queuing.
A more advanced case would be to decide how long this job should run and terminate it after that, to improve GPU utilization.
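One way to sketch that (purely illustrative, with a placeholder duration exposed as a Task parameter) would be a time-bounded variant of the deploy script:
```python
import subprocess
import time

from clearml import Task

task = Task.init(project_name="deployments", task_name="deploy inference service (time limited)")

# expose the maximum run time as an editable Task parameter (the default is a placeholder)
params = task.connect({"max_runtime_minutes": 120})
deadline = time.time() + float(params["max_runtime_minutes"]) * 60

subprocess.run(["helm", "upgrade", "--install", "inference-service", "chart/"], check=True)
try:
    # keep the release up until the allotted time is over
    while time.time() < deadline:
        time.sleep(60)
finally:
    # free the GPU resources once the time budget is spent (or if the script fails)
    subprocess.run(["helm", "uninstall", "inference-service"], check=False)
```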