Answered

Hi devs 🙂
I am still having an issue that has been there for a long time: When doing a large batch of deletes (e.g. 100s of experiments with millions of steps each), the clearml-server API will become unresponsive for minutes to hours.
This is something I have experienced with every version so far. Do you have any idea why it happens (maybe something like index rebuilding?) and how to mitigate it?

Posted 12 days ago

Answers 3


Hi ReassuredTiger98, this might simply be an issue of too few handler processes in the ClearML apiserver (unless the pressure is on the databases). You can easily change that (the default is 8) using the CLEARML_GUNICORN_WORKERS environment variable passed to the apiserver service.
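For a docker-compose deployment, that environment variable would go on the apiserver service. A minimal sketch, assuming the standard ClearML Server docker-compose layout; the service name and the value 16 here are illustrative, not recommendations:

```yaml
# docker-compose.override.yml -- illustrative sketch
services:
  apiserver:
    environment:
      CLEARML_GUNICORN_WORKERS: "16"  # default is 8; raise if API handlers are saturated
```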

Posted 11 days ago

Hi Jake, thank you very much for the suggestion. I will try that!

Posted 9 days ago

Hi SuccessfulKoala55, I tested it and this does not seem to be the issue. When looking at my server, I can see that Elasticsearch utilizes a single core at 100%. This is the only observation I made. Could it be that Elasticsearch is the issue?
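One way to check whether Elasticsearch is the bottleneck during a large delete batch is to look at its write thread pool and any long-running delete tasks, via Elasticsearch's cat and task-management APIs. A minimal sketch, assuming ES is reachable on the default ClearML Server port 9200; adjust ES_URL for your deployment:

```shell
#!/bin/sh
# Sketch: inspect Elasticsearch load while a large delete batch runs.
# Assumption: ES listens at localhost:9200 (the default ClearML Server setup).
ES_URL="${ES_URL:-http://localhost:9200}"

# Write thread pool: a growing queue or rejected count suggests ES is saturated.
curl -s --max-time 5 "$ES_URL/_cat/thread_pool/write?v&h=name,active,queue,rejected" \
  || echo "Elasticsearch not reachable at $ES_URL"

# Long-running delete tasks currently executing on the cluster.
curl -s --max-time 5 "$ES_URL/_tasks?actions=*delete*&detailed" \
  || echo "Elasticsearch not reachable at $ES_URL"
```

If the write queue keeps growing or tasks sit in the task list for minutes, the pressure is on Elasticsearch rather than the apiserver handler processes.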

Posted 5 days ago