Problem: Excessive scalar storage from TensorBoard integration causing out-of-memory on ClearML server. Hi Team, we've run into a problem with ClearML ingesting extremely large numbers of scalars from TensorBoard (auto_connect_frameworks) (~800K samples p…


Hi @CostlyOstrich36, thank you for your quick response and for confirming the current behavior.
We've already tried the approaches you mentioned (disabling the TensorBoard auto-logging and reducing the scalar logging frequency), and they definitely help for new experiments going forward.
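For anyone else hitting this, a minimal sketch of that workaround for new experiments: disable TensorBoard auto-logging in `Task.init` and report scalars manually at a reduced rate. The project/task names, the reporting interval, and the dummy training loop are just placeholders.

```python
from clearml import Task

# Disable TensorBoard auto-logging so the SDK no longer ingests every TB scalar
task = Task.init(
    project_name="my-project",      # hypothetical project/task names
    task_name="training-run",
    auto_connect_frameworks={"tensorboard": False},
)
logger = task.get_logger()

REPORT_EVERY = 100  # arbitrary subsampling interval; tune to your needs

for step in range(10_000):
    loss = 1.0 / (step + 1)         # stand-in for a real training step
    if step % REPORT_EVERY == 0:
        # Report only every Nth value instead of every TB-logged step
        logger.report_scalar(title="loss", series="train", value=loss, iteration=step)
```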
However, our main challenge is with existing ("legacy") tasks that have already logged hundreds of thousands of scalars per experiment. We have temporarily alleviated the problem by adding more RAM, but that isn't a sustainable solution.
It would be extremely helpful to have a way to downsample or prune excessive scalars for past/existing tasks directly on the server. Is there any possibility that ClearML might implement such a feature or provide an admin tool/script for this purpose? This would be very valuable for maintenance, resource management, and scalability.
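For context, the kind of pruning we have in mind would be something like the rough sketch below, run directly against the server's Elasticsearch backend. To be clear, this is an untested assumption on our side: the host/port, the `events-training_stats_scalar-*` index pattern, and the `task`/`iter` field names are based on what a default ClearML server deployment (ES 7.x with the matching Python client) appears to use, and it should only ever be tried against a backup.

```python
from elasticsearch import Elasticsearch

# Assumed default docker-compose deployment of the ClearML server
es = Elasticsearch("http://localhost:9200")

TASK_ID = "<legacy-task-id>"   # task whose scalars should be thinned out
KEEP_EVERY = 10                # keep every 10th sample (arbitrary choice)

# Delete scalar events for the task except every KEEP_EVERY-th iteration.
# The index pattern and field names are assumptions; verify them against
# your own indices before running anything destructive.
es.delete_by_query(
    index="events-training_stats_scalar-*",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"task": TASK_ID}},
                    {"script": {"script": f"doc['iter'].value % {KEEP_EVERY} != 0"}},
                ]
            }
        }
    },
)
```

An official, supported version of something like this (ideally exposed as an admin tool or CLI) would obviously be much preferable to poking at the indices directly.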
Thank you again!

  
  