A question regarding optimizing API usage for production monitoring
Hi, I'm currently using ClearML for monitoring a production environment where I report metrics during each inference.
Specifically, I monitor and report accuracy for around 5 features. My API usage has been very high, and I'm looking for ways to make it more efficient.

To reduce usage, I've already stopped reporting logs and machine performance metrics, but the API usage is still high. Here's a summary of my current setup:

  • Metrics reporting: I report accuracy for 5 features after each inference (a rough sketch of this is shown after the list).
  • Last-iteration check: I make API requests to get the last iteration number.
  • Persistent connection: the connection to ClearML stays open the whole time. I'm not sure whether it's better to keep it open or to close and reopen it several times during the process.
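
Roughly, my reporting code looks like the sketch below (a minimal sketch only; the project/task names and the `report_accuracy` helper are placeholders, and the `auto_*` flags reflect my understanding of how I disabled log and machine-metric reporting):

```python
from clearml import Task

# Disable machine-performance metrics and stdout/stderr capture to cut background traffic
# (exact flags may differ depending on the clearml version).
task = Task.init(
    project_name="production-monitoring",   # placeholder
    task_name="inference-accuracy",         # placeholder
    auto_resource_monitoring=False,
    auto_connect_streams=False,
)
logger = task.get_logger()

# Query the last reported iteration once, then keep counting locally.
iteration = task.get_last_iteration()

def report_accuracy(per_feature_accuracy: dict) -> None:
    """Report one accuracy value per feature for the current inference."""
    global iteration
    iteration += 1
    for feature_name, accuracy in per_feature_accuracy.items():
        # One scalar per feature, all tagged with the same iteration.
        logger.report_scalar(
            title="accuracy",
            series=feature_name,
            value=accuracy,
            iteration=iteration,
        )
```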

My questions are:

  • What are the best practices for optimizing API usage in this scenario?
  • Can I report multiple scalars/metrics in the same API call?
  • Is it more efficient to keep the connection open or to reopen it periodically?

Any advice or suggestions on how to further optimize my setup would be greatly appreciated. Thanks!
  
  
Posted 3 months ago

2 Answers


Hi @GloriousKoala29,

I don't think closing the connection will matter much. As for reporting multiple scalars in the same call, this already happens: when you use the logger to report scalars/metrics/logs, everything goes into a buffer, and the underlying reporter module batches it into periodic API calls, aggregating multiple events into a single call.
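
For instance, a loop like this sketch (the title/series names are just placeholders) only queues events in memory; the background reporter then ships them in batched API calls rather than one call per `report_scalar()`:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="batched-reporting")
logger = task.get_logger()

for iteration in range(100):
    # Each report_scalar() call only appends an event to an in-memory buffer.
    for feature in ("f1", "f2", "f3", "f4", "f5"):
        logger.report_scalar(title="accuracy", series=feature,
                             value=0.9, iteration=iteration)

# The background reporter periodically flushes the buffer, sending many events
# in a single API call; flush() can force it, e.g. before the process exits.
logger.flush()
```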

  
  
Posted 3 months ago

I see, thanks Jake!

  
  
Posted 3 months ago