Answered
I Saw A Guide On Setting Up ClearML Server In Kubernetes

I saw a guide on setting up the ClearML server in Kubernetes: https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_kubernetes.html .
However, I was looking at setting up a Kubernetes cluster which has both the ClearML server and clearml-agents, so that when the workload increases, more deployments with clearml-agents are launched. Is there such a guide available, or has anyone tried it before?
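For a concrete picture of the non-dynamic baseline being asked about, a minimal sketch (all resource names here are hypothetical; this assumes the server is already deployed and the agents run as an ordinary Kubernetes Deployment):

```shell
# Sketch only -- "clearml-agent" is a hypothetical Deployment name.
# With agents as a plain Deployment, scaling is a manual kubectl call;
# dynamic scaling would need something watching the queue length and
# issuing this command (or an HPA driven by a custom metric).
kubectl get deployments                               # find the agent Deployment
kubectl scale deployment clearml-agent --replicas=5   # add workers when load grows
kubectl get pods -l app=clearml-agent                 # label selector is an assumption
```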

  
  
Posted 3 years ago

Answers 17


Hi SuccessfulKoala55 ,
1) Actually, I am now using AWS. I am trying to set up the ClearML server in K8s; however, the clearml-agents will just be another EC2 instance/Docker image.
2) For phase 2, I will try the ClearML AWS Auto-Scaler service.
3) At this point, I think I will have a crack at JuicyFox94 's solution as well.

  
  
Posted 3 years ago

Thanks JuicyFox94 .
I'm not really from a devops background, so let me try to digest this.. 🙏

  
  
Posted 3 years ago

Hi DeliciousBluewhale87 ,
As far as I know, JuicyFox94 ’s charts do not yet deal with dynamic scaling of ClearML Agents ( JuicyFox94 feel free to correct me 🙂 )
This is currently supported in the AWS Auto-Scaler (which is both a working implementation and an example template for how to accomplish such an auto-scaler, regardless of the platform used).
We do have plans to support this kind of scaling for K8s in the near future 🙂

  
  
Posted 3 years ago

Hi DeliciousBluewhale87 , I'm already using an on-premise config (with a GitOps paradigm) based on a custom Helm chart. Maybe this is interesting for you.

  
  
Posted 3 years ago

We have to do it on-premise.. Cloud providers are not allowed for the final implementation. Of course, for now we use the cloud to test out our ideas.

  
  
Posted 3 years ago

Hi DeliciousBluewhale87 .

Which cloud provider are you using? If AWS, you can use the ClearML AWS Auto-Scaler service: https://allegro.ai/clearml/docs/docs/examples/services/aws_autoscaler/aws_autoscaler.html#clearml-aws-autoscaler-service .

Can this do the trick?
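For reference, running that Auto-Scaler example typically boils down to something like the following (the paths and flags are from memory and may differ between ClearML versions -- check the linked page for the authoritative steps):

```shell
# Sketch, not verified against the current repo layout.
pip install clearml
git clone https://github.com/allegroai/clearml.git
cd clearml/examples/services/aws-autoscaler
# The script runs an interactive wizard asking for AWS credentials,
# instance types, budgets and the queues each instance type serves.
python aws_autoscaler.py --run
```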

  
  
Posted 3 years ago

In this case I apologize for the confusion. If you are going for the AWS autoscaler, it's better to follow the official route; the solution I proposed is for an on-premise cluster containing every component, without an autoscaler. Sorry for that.

  
  
Posted 3 years ago

This is the chart, with various configurable groups of agents: https://artifacthub.io/packages/helm/valeriano-manassero/clearml
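Using per-group agents from that chart might look roughly like this (the Helm repo URL and the value names are assumptions -- inspect the chart's values.yaml on ArtifactHub for the real keys):

```shell
# Sketch only: repo URL and --set keys below are assumptions, not verified.
helm repo add valeriano-manassero https://valeriano-manassero.github.io/helm-charts
helm repo update
helm install clearml valeriano-manassero/clearml \
  --set agentGroups.group1.replicaCount=2 \
  --set agentGroups.group1.queues=default
```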

  
  
Posted 3 years ago

SuccessfulKoala55 yes, there's no autoscaler in that chart. Maybe I'm missing the point, but the request was for an "on-premise" setup, so I guessed no AWS. If I missed the point, everything I posted is not useful 😄

  
  
Posted 3 years ago

Our main goal, which maybe I should have stated earlier: we are data scientists who need an MLOps environment to track and also run our experiments..

  
  
Posted 3 years ago

Moreover, if you are using minikube, you can try the official Helm chart: https://github.com/allegroai/clearml-server-helm
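On minikube, installing that official chart would look roughly like this (the Helm repo URL is an assumption based on the usual GitHub Pages convention -- the repo's README has the authoritative commands):

```shell
# Sketch only; follow the clearml-server-helm README for exact values.
minikube start --cpus=4 --memory=8g
helm repo add allegroai https://allegroai.github.io/clearml-server-helm
helm repo update
helm install clearml-server allegroai/clearml-server
kubectl get pods   # wait for the server pods to come up
```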

  
  
Posted 3 years ago

This is the state of the cluster: https://github.com/valeriano-manassero/mlops-k8s-infra

  
  
Posted 3 years ago

If you need a non-automated way to create the cluster, I suggest considering the Helm chart only.

  
  
Posted 3 years ago

Today is pretty busy for me, but I can try to help if needed. Please post any questions here and I will try to answer when possible.

  
  
Posted 3 years ago

Nice.. this looks a bit more friendly.. 🙂 Let me try it. Thanks!

  
  
Posted 3 years ago

Sure, I'll post some questions once I wrap my mind around it..

  
  
Posted 3 years ago

Just to add on, I am using minikube now.

  
  
Posted 3 years ago