Answered
Is There a Way to Use ClearML to Run an ML Inference Container That Does Not Terminate?

Hi, is there a way to use ClearML to run an ML inference container that does not terminate?

  
  
Posted one month ago

Answers 10


How can I make a Task that does a `helm install` or `kubectl create -f deployment.yaml`?

The Task that it launches should contain your code that actually does the helm deployments and everything else. Think of the Task as a way to launch a script that does something; that script can then interact with the cluster directly. The queue itself (i.e. clearml-agent) will not deploy helm charts directly, it will only deploy jobs (i.e. pods).
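A minimal sketch of what such a queued script could look like. The release name, chart path, and function names here are hypothetical, and this assumes the agent's environment has the `helm` CLI and cluster credentials available:

```python
import shutil
import subprocess

def build_helm_install_cmd(release: str, chart: str, namespace: str = "default") -> list:
    # Assemble the `helm upgrade --install` command the queued Task will execute.
    return ["helm", "upgrade", "--install", release, chart, "--namespace", namespace]

def deploy(release: str, chart: str, namespace: str = "default") -> None:
    # The clearml-agent only starts this script as a job (pod); the script
    # itself talks to the cluster, so helm must be on its PATH.
    if shutil.which("helm") is None:
        raise RuntimeError("helm CLI not found on the agent's PATH")
    subprocess.run(build_helm_install_cmd(release, chart, namespace), check=True)

# Example (hypothetical release/chart):
# deploy("inference-svc", "./charts/inference")
```

The same pattern works for `kubectl apply -f` by swapping the command list; the point is that the queue schedules the script, and the script owns the deployment.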

  
  
Posted 28 days ago

AgitatedDove14 I'm looking at the queue system that the ClearML queue offers, which would allow a user to queue a job that deploys an app / inference service. This can be as simple as a pod or as complete as a helm chart.

  
  
Posted 29 days ago

Hi OddShrimp85
You mean something like clearml-serving ?

  
  
Posted 29 days ago

AgitatedDove14 I'm still trying to figure out how to do that. When I add a task to a queue, the clearml-agent basically creates a pod with the container. How can I make a Task that does a `helm install` or `kubectl create -f deployment.yaml`?

  
  
Posted 29 days ago

Can clearml-serving do a helm install or upgrade? We have cases where the ML models do not come from ML experiments in ClearML, but we would still like to use the ClearML queue for resource queuing.

  
  
Posted 29 days ago

A more advanced case would be to decide how long this job should run and terminate it after that, to improve GPU utilization.

  
  
Posted 29 days ago

To clarify, there might be cases where we get a helm chart / k8s manifests to deploy an inference service that is a black box to us.

I see. In that event, yes, you could use ClearML queues for that: as long as you have the credentials, the "Task" is basically just a helm deployment task.
You could also put monitoring code there, so that the same Task is pure logic: spinning up the helm chart, monitoring the usage, and taking it down when it's done.
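A hedged sketch of that "pure logic" Task: deploy the chart, watch it against a time budget, and always tear it down. The release name, chart path, and `MAX_RUNTIME_SEC` are made up for illustration; in practice the loop could check GPU usage instead of wall-clock time:

```python
import subprocess
import time

MAX_RUNTIME_SEC = 3600  # hypothetical time budget for the inference service

def should_terminate(elapsed_sec: float, max_runtime_sec: float = MAX_RUNTIME_SEC) -> bool:
    # Decide when the monitoring loop should take the release down.
    return elapsed_sec >= max_runtime_sec

def run_and_monitor(release: str, chart: str) -> None:
    # Spin up the helm chart, monitor it, and uninstall when the budget is spent.
    subprocess.run(["helm", "upgrade", "--install", release, chart], check=True)
    start = time.monotonic()
    try:
        while not should_terminate(time.monotonic() - start):
            time.sleep(30)  # poll interval; GPU-utilization checks could go here
    finally:
        # The finally block guarantees teardown even if monitoring fails,
        # which is what frees the GPU for the next queued job.
        subprocess.run(["helm", "uninstall", release], check=True)
```

Queuing this script as the Task gives you exactly the time-limited deployment described above.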

  
  
Posted 29 days ago

Can clearml-serving do a helm install or upgrade?

Not sure I follow; how would a helm chart install be part of the ML run? I mean, clearml-serving is itself installed via a helm chart, but that is a "one time" setup: you install clearml-serving and then, via CLI / Python, send models to be served there. This is not a "deployment per model" scenario, but one deployment serving multiple models, dynamically loaded.
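For that multi-model flow, registering a model against an already-installed serving service goes through the `clearml-serving` CLI. A sketch of wrapping that from Python; the service ID, endpoint, project, and engine choice are hypothetical, and the exact flags should be verified against the clearml-serving docs for your installed version:

```python
import subprocess

def build_model_add_cmd(service_id: str, endpoint: str, project: str, name: str) -> list:
    # Assemble a `clearml-serving model add` call; the flags follow the
    # clearml-serving README, but check them against your installed version.
    return [
        "clearml-serving", "--id", service_id,
        "model", "add",
        "--engine", "triton",          # or sklearn / xgboost / ..., per the docs
        "--endpoint", endpoint,
        "--project", project,
        "--name", name,
    ]

def add_model(service_id: str, endpoint: str, project: str, name: str) -> None:
    # Registers the model with the running serving service; no new deployment.
    subprocess.run(build_model_add_cmd(service_id, endpoint, project, name), check=True)
```

Each `model add` just tells the existing service to load another model, rather than spinning up a new pod per model.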

  
  
Posted 29 days ago

To clarify, there might be cases where we get a helm chart / k8s manifests to deploy an inference service that is a black box to us.

Users may need to deploy this service on demand to test it against other software components. This needs GPU resources, which a queue system lets them queue up for and eventually get deployed, instead of hard-allocating resources for this purpose.

  
  
Posted 29 days ago

This can be as simple as a pod or as complete as a helm chart.

True, and this could be a good fit for batch processing, but if you want a REST API service then clearml-serving is probably a better fit.
Does that make sense?

  
  
Posted 29 days ago