ItchyJellyfish73
Moderator
6 Questions, 12 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 12 × Eureka!
0 Votes
2 Answers
939 Views
3 years ago
0 Votes
6 Answers
1K Views
Hello. I'm interested in the dynamic GPU feature, but I can't find any information on how it works. Can you help me with it? Is it possible to try it somewhere?
3 years ago
0 Votes
10 Answers
981 Views
3 years ago
0 Votes
3 Answers
1K Views
Hello! I get the idea of publishing a model/task, but there could be scenarios where it should still be archived/deleted, for instance the death of a project. Is it p...
3 years ago
0 Votes
2 Answers
852 Views
3 years ago
0 Votes
5 Answers
1K Views
3 years ago
0 Hello, periodically under high load we are facing too long (>1 sec) processing times for requests such as: workers.status_report, events.add_batch, queues.get_next_task. Also there are warnings "Connection pool is full, discarding connection: elasticsearch-s

As I discovered, this was an ES overload due to incorrect ClearML usage: report_scalar was called 100 times per second (the developer reported each sample from a WAV file). This didn't affect the apiserver, because events were batched. There should probably be some protection against overload at the clearml package or apiserver level, as developers can do all sorts of crazy stuff in their code 🙃
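For illustration, a minimal sketch of the kind of client-side throttling this would need, using the standard clearml Logger.report_scalar call; the project/task names, the fake sample list, and the every-100th-sample policy are assumptions, not something from this thread:

    from clearml import Task

    # Report one scalar per 100 samples instead of every sample, so the
    # apiserver/Elasticsearch receive ~1 event per second instead of ~100.
    task = Task.init(project_name="examples", task_name="downsampled-reporting")
    logger = task.get_logger()

    samples = [i * 0.001 for i in range(10_000)]  # stand-in for per-sample values from a WAV file
    REPORT_EVERY = 100                            # downsampling factor (assumption)

    for i, value in enumerate(samples):
        if i % REPORT_EVERY == 0:
            logger.report_scalar(title="waveform", series="amplitude", value=value, iteration=i)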

3 years ago
0 Hello, periodically under high load we are facing too long (>1 sec) processing times for requests such as: workers.status_report, events.add_batch, queues.get_next_task. Also there are warnings "Connection pool is full, discarding connection: elasticsearch-s

AgitatedDove14 are you sure? The API server has low CPU load (< 10%). Moreover, only the ES-related requests are affected; other requests (like tasks.get_all or queues.get_all) take < 10 ms.

3 years ago
0 Hello! Can you clarify how we can support the following scenario with ClearML. We have a single ClearML server with multiple workers in Docker mode. We also have multiple teams. They work on different projects stored in different repositories (public/private Gi

Thanks! This works for me except for one thing: it only works with keys that have standard names. If the keys have non-standard names, should I deal with starting ssh-agent and running ssh-add inside Docker, or is there a simpler way?
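For reference, one common way to handle non-standard key names without running ssh-agent in the container is an ~/.ssh/config entry mounted into the workers; the host and key path below are examples, not taken from this thread:

    Host github.com
        User git
        IdentityFile ~/.ssh/my_custom_key
        IdentitiesOnly yes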

3 years ago
0 Hello! Can you clarify how we can support the following scenario with ClearML. We have a single ClearML server with multiple workers in Docker mode. We also have multiple teams. They work on different projects stored in different repositories (public/private Gi

So, did I understand you correctly? I create a single SSH key and place it in the ~/.ssh dir of all workers. After that, anyone who wants to run a task on their repo should add this key to their user in their repo.

3 years ago