Hi, is there a way to leverage ClearML to run an ML inference container that does not terminate?
To clarify, there might be cases where we get Helm charts / k8s manifests to deploy an inference service. It's a black box to us.
I see. In that event, yes, you could use ClearML queues to do that; as long as you have the credentials, the "Task" is basically just a Helm deployment task.
You could also put monitoring code there, so that the same Task is pure logic: it spins up the Helm chart, monitors usage, and when it's done, takes the deployment down.
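A minimal sketch of what such a Task's logic could look like, assuming `helm` and `kubectl` are on the PATH. The release name, chart path, project/task names, and the pod-readiness check are all illustrative placeholders, not details from this thread:

```python
import json
import subprocess
import time

RELEASE = "inference-svc"    # hypothetical Helm release name
CHART = "./inference-chart"  # hypothetical chart path

def run(cmd):
    """Run a shell command, raise on non-zero exit, and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def all_pods_ready(status_json):
    """Decide from `kubectl get pods -o json` output whether every pod is Running."""
    pods = json.loads(status_json).get("items", [])
    return bool(pods) and all(p["status"].get("phase") == "Running" for p in pods)

def main():
    # The Task itself is pure orchestration logic: deploy, monitor, tear down.
    from clearml import Task
    task = Task.init(project_name="inference", task_name="helm-deploy")

    run(["helm", "install", RELEASE, CHART])
    try:
        while True:
            status = run([
                "kubectl", "get", "pods",
                "-l", f"app.kubernetes.io/instance={RELEASE}",
                "-o", "json",
            ])
            # Report health to the ClearML UI; real usage-based shutdown
            # criteria would go here.
            task.get_logger().report_text(
                "deployment ready" if all_pods_ready(status) else "waiting for pods"
            )
            time.sleep(60)
    finally:
        # Always take the deployment down when the Task ends or is aborted.
        run(["helm", "uninstall", RELEASE])

if __name__ == "__main__":
    main()
```

Because the deployment never terminates on its own, the loop (or an aborted Task) is what decides when to uninstall the release.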