Answered

Hi, I'd like to know if there is a way to include a process like the AWS Autoscaler and its configuration inside the ClearML Helm chart. My goal is to automatically run the AWS Autoscaler task on a clearml-agent pod when I deploy the ClearML services on the cluster, and also to re-run it if the clearml-agent pod is recreated. I manage the cluster deployment using Terraform and Helm on AWS EKS.

  
  
Posted one year ago

Answers 8


but I'd prefer to have a new instance deployed for each new experiment and that it also terminates when no new experiments are queued

I'm not objecting, just wondering about the rationale behind the decision 🙂
Back to the AWS autoscaler:
Basically, if you have the services-agent running on your cluster, it will just run the aws-autoscaler for you 🙂
The idea of the services-agent is to run logic/monitoring Tasks such as the AWS autoscaler. Notice that services mode means multiple jobs per agent, contrary to the default of one task per agent at any given time.
If you want, you can package the aws-autoscaler example inside a docker image and just spin it up; you can use the clearml-agent Dockerfile as a good starting point. wdyt?
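For illustration, a minimal Python sketch of that idea, assuming you have already created the AWS autoscaler task once (e.g. by running the clearml aws_autoscaler example); the project and task names below are placeholders:

from clearml import Task

# Look up the previously created autoscaler task (names are placeholders).
template = Task.get_task(project_name="DevOps", task_name="AWS Auto-Scaler")

# Clone it and push the clone into the "services" queue, where the
# services-mode agent will pick it up and keep it running.
clone = Task.clone(source_task=template, name="AWS Auto-Scaler (auto-started)")
Task.enqueue(task=clone, queue_name="services")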

  
  
Posted one year ago

Hi AgitatedDove14, do you mean the k8s glue autoscaler here https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py ? If yes, I understood that this service deploys pods on the nodes in the cluster, but I'd prefer to have a new instance deployed for each new experiment, and that it also terminates when no new experiments are queued.
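(For context, the gist of the linked k8s_glue_example.py is roughly the following; the constructor options are trimmed here and the queue name is a placeholder, so check the example itself for the real arguments:)

from clearml_agent.glue.k8s import K8sIntegration

# The glue pulls tasks from a ClearML queue and spawns a Kubernetes pod
# per task on the existing cluster nodes (no new EC2 instances).
k8s = K8sIntegration()  # real usage passes template/namespace options
k8s.k8s_daemon("default")  # placeholder queue name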

  
  
Posted one year ago

AgitatedDove14 that seems like the best option. Once the AWS autoscaler is inside a docker container, I can deploy it inside a kube pod or a job. This, however, requires that I slightly modify the clearml helm chart with the aws-autoscaler deployment, right?
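As a rough sketch of what the container's entrypoint could do to handle pod recreation (the project/task names, the placeholder task ID and the status check are my assumptions, not an official recipe):

from clearml import Task

# Hypothetical entrypoint for the autoscaler pod/job: re-enqueue the
# autoscaler only if no copy is already queued or running, so a recreated
# pod does not start duplicates.
already_active = Task.get_tasks(
    project_name="DevOps",
    task_name="AWS Auto-Scaler",
    task_filter={"status": ["queued", "in_progress"]},
)
if not already_active:
    template = Task.get_task(task_id="aws-autoscaler-template-id")  # placeholder ID
    clone = Task.clone(source_task=template, name="AWS Auto-Scaler")
    Task.enqueue(task=clone, queue_name="services")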

  
  
Posted one year ago

My goal is to automatically run the AWS Autoscaler task on a clearml-agent pod when I deploy

LovelyHamster1 this is very cool!
Quick question: if you are running on EKS, why not use the EKS autoscaling instead of the ClearML AWS EC2 autoscaling?

  
  
Posted one year ago

This, however, requires that I slightly modify the clearml helm chart with the aws-autoscaler deployment, right?

Correct 🙂

  
  
Posted one year ago

I use a custom Helm chart and the Terraform Helm provider for these things.

  
  
Posted one year ago

Nice! TrickySheep9, any chance you can share them?

  
  
Posted one year ago

I just run the k8s daemon with a simple helm chart and deploy it with Terraform using the Helm provider. Nothing much to share as it's just a basic chart 🙂

  
  
Posted one year ago