Hi! I'm currently considering switching to ClearML. In my current trials I am using up the API calls very quickly though. Is there some way to limit that? The documentation is a bit sparse on what uses how many API calls. Is it possible to batch them for


They are batched together, so at least in theory, if reporting is that fast you should not reach 10K calls so quickly. But that's a very good point.

That's only a back-of-the-napkin calculation; in the actual experiments I mostly had stream logging, hardware monitoring, etc. enabled as well, so maybe that limited the effectiveness of the batching. I just saw that I went through the first 200k API calls rather fast, so that is how I rationalized it.

Basically this is the "auto flush": it will flush (and batch) all the logs on a 30-second period, and yes, this applies to all the logs (scalars and console).
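
(Side note: a minimal sketch of adjusting that auto-flush period from code, assuming the current clearml Python SDK and that Logger exposes set_flush_period; the project/task names are placeholders.)

    # Minimal sketch: lengthen the auto-flush period so more scalar/console
    # events get sent together in one batched API call.
    from clearml import Task

    task = Task.init(project_name="demo", task_name="flush-period-sketch")  # placeholder names
    logger = task.get_logger()

    # Per the thread, the default flush is roughly every 30 seconds;
    # raising it to a few minutes should mean fewer, larger batches.
    logger.set_flush_period(180.0)  # period in seconds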

Perfect, sounds like that is exactly what I'm looking for 🙂

How often do you report scalars?
Could it be they are not being batched for some reason?

Once every 2000 steps, which is every few seconds. So in theory those ~20 scalars should be batched since they are reported more or less at the same time. It's a bit odd that the API calls added up so quickly anyway.
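
(For illustration, a hypothetical loop matching that setup, with placeholder metric names and values; because all ~20 scalars share the same iteration, the SDK's background reporter can queue and send them together.)

    # Hypothetical sketch of the reporting pattern described above:
    # ~20 scalars reported at the same iteration, once every 2000 steps.
    from clearml import Task

    task = Task.init(project_name="demo", task_name="scalar-batching-sketch")  # placeholder names
    logger = task.get_logger()

    for step in range(0, 100_000, 2000):
        metrics = {f"metric_{i}": float(i) for i in range(20)}  # placeholder values
        for name, value in metrics.items():
            # Each call queues an event in memory; events are flushed and
            # batched periodically rather than sent one API call per report.
            logger.report_scalar(title=name, series="train", value=value, iteration=step)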

I'll try to decrease the flush frequency (once a minute or even every few minutes is plenty for my use case) and see if it reduces the API calls. Thank you for your help!
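
(If you prefer changing the interval globally rather than in code, here is a sketch of a clearml.conf override; the report_period_sec key under sdk.development.worker is an assumption here, so verify it against the default conf shipped with your SDK version.)

    # ~/clearml.conf (sketch; key name assumed -- check your SDK's default clearml.conf)
    sdk {
        development {
            worker {
                # seconds between background flushes of queued reports
                report_period_sec: 60
            }
        }
    }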

  
  