IcyJellyfish61 (Moderator)
3 Questions, 5 Answers
Active since 10 January 2023; last activity one year ago
Reputation: 0
Badges: Eureka! × 5
0 Votes
4 Answers
1K Views
Q: In the managed solutions, do customers have direct access to the databases (ES, Mongo, Redis), or at least an ability to export once in a while?
2 years ago
0 Votes
6 Answers
1K Views
Hi! Does ClearML self-hosted support any managed solutions for its ES, Mongo and Redis dependencies?
2 years ago
0 Votes
3 Answers
1K Views
2 years ago
0 Hi! Does ClearML self-hosted support any managed solutions for its ES, Mongo and Redis dependencies?

Thanks, that sounds reasonable, but are there any known compatibility issues? Presumably there is already some experience running ClearML against managed versions of those three services.

2 years ago
0 Does ClearML support running the experiments on any "serverless" environments (i.e. VertexAI, SageMaker, etc.), such that GPU resources are allocated on demand? Alternatively, is there a story for auto-scaling GPU machines based on experiments waiting in

re. "serverless": I mean running a training task on cloud services such that machines with GPUs for those tasks are provisioned on demand.
That means we don't have to keep a pool of GPU machines standing by, and don't have to deal with autoscaling ourselves. The cloud provider, upon receipt of such a training task, provisions the machines and runs the training.
This is a common use case in VertexAI, for example.
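The on-demand model described above can be sketched as a toy loop (plain Python, not ClearML code; the `ToyAutoscaler` name and its methods are illustrative only): tasks are enqueued, one worker is "provisioned" per pending task, and workers are torn down as the queue drains.

```python
from collections import deque

class ToyAutoscaler:
    """Toy model of on-demand GPU provisioning: workers exist only
    while tasks are pending, instead of a standing pool."""

    def __init__(self):
        self.queue = deque()
        self.workers = 0

    def submit(self, task):
        # Enqueue a training task, then scale up to match demand.
        self.queue.append(task)
        self._scale()

    def _scale(self):
        # Provision one worker per pending task (a real provider
        # would cap this and launch actual GPU instances here).
        while self.workers < len(self.queue):
            self.workers += 1  # stands in for "launch a GPU instance"

    def drain(self):
        # Run everything pending; each worker is terminated once
        # its task finishes, so capacity returns to zero when idle.
        results = []
        while self.queue:
            results.append(f"ran {self.queue.popleft()}")
            self.workers -= 1  # instance terminated after the task
        return results
```

With two tasks submitted, two workers come up; after draining, the worker count is back to zero, which is the property being asked about.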

Regarding autoscaling: yes, autoscaling EC2 instances, for example, based on pending e...

2 years ago