Does ClearML support running experiments on any "serverless" environments (i.e. Vertex AI, SageMaker, etc.), such that GPU resources are allocated on demand? Alternatively, is there a story for auto-scaling GPU machines based on experiments waiting in the queue?


Hi IcyJellyfish61, while spinning EKS up and down is not supported (albeit very cool 😄), we have an autoscaler in the applications section that does exactly what you need: it spins EC2 instances up and down according to demand 🙂
If you're using http://app.clear.ml as your server, you can find it at https://app.clear.ml/applications.
Unfortunately, it is not available for the open-source server, only for the paid tiers.
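For context, a minimal sketch of how an experiment ends up in a queue that an autoscaler (or a regular clearml-agent) serves, so a GPU machine is only spun up when something is waiting. The project, task, and queue names here are placeholders; point them at whatever queue your autoscaler is configured to monitor.
```python
from clearml import Task

# Register the experiment with the ClearML server
task = Task.init(project_name="examples", task_name="remote training")

# Stop local execution and enqueue the task; an autoscaler or agent
# watching this queue will provision a machine and run it on demand.
# "gpu_queue" is an example name - use the queue your autoscaler monitors.
task.execute_remotely(queue_name="gpu_queue", exit_process=True)

# ... training code below this point runs only on the remote machine ...
```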
