Answered
I seem to be missing something ... I've only got one task running to train a segmentation model on my local machine, and in a few days it's hit over 1.15M API calls. It looks like it's sending every single console output ... are there settings to control

I seem to be missing something ... I've only got one task running to train a segmentation model on my local machine, and in a few days it's hit over 1.15M API calls. It looks like it's sending every single console output ... are there settings to control what gets logged? I only care about the results from each epoch. I don't need each line of the console posted up (that's 99% of the API usage right there). I can't find a way to prevent this, and I can see each line in the ClearML console that's already in my terminal window (each tick in the progress bar for each epoch seems to be an API call to post that local console output to ClearML). Any tips to stop the console from getting sent?
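For context, one way to keep the console stream from being captured at all is to opt out when the task is created. A minimal sketch, assuming a recent clearml SDK where Task.init accepts the auto_connect_streams argument; the project and task names below are placeholders:

    # Minimal sketch, assuming a recent clearml SDK where Task.init accepts
    # auto_connect_streams; project/task names are placeholders.
    from clearml import Task

    task = Task.init(
        project_name="segmentation",           # placeholder project name
        task_name="open-images-subset-train",  # placeholder task name
        # Keep Python logging, but stop capturing raw stdout/stderr, so
        # progress-bar ticks are not shipped to the server as console events.
        auto_connect_streams={"stdout": False, "stderr": False, "logging": True},
    )

With stdout/stderr capture off, per-epoch results can still be reported explicitly through the task's logger.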

  
  
Posted one year ago

Answers 51


My training is on roughly 50 classes as a subset of the Open Images Dataset for Segmentation

  
  
Posted one year ago

Came to ClearML since it had a slick dashboard and showed me the info that mattered. Loved that I could share the results of each epoch so we could make sure things were headed in the correct direction.

  
  
Posted one year ago

The math checks out: if I was generating around 140K a day and this had been running for 9 days, that lines up with the 1.2M it had when I caught it. So I think that the day after I shut it down, I was still seeing the previous day's numbers from before the shutdown being added. And after another 24 hours it barely changed, so ya, it was 100% the stdout logging.

  
  
Posted one year ago

It'd be great if it just posted to ClearML after each epoch is completed and the CSV with the results gets updated. I only care about using the dashboard to track completed progress. I can use my local computer's terminal window to monitor the current epoch's training. No need to send that to ClearML every second ;) Results once an hour or so, after each epoch completes, is fine :)

  
  
Posted one year ago

Hmm, if this is the case, you can add some prints in here:
None
The service/action will tell you what you are sending.
wdyt?
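A rough sketch of that "add some prints" idea, assuming the low-level Session.send_request(service, action, ...) method in clearml.backend_api.session is what ultimately issues each call; internals may vary between SDK versions, so treat this as illustrative only:

    # Rough sketch: wrap the low-level request method so every outgoing call
    # prints its service/action (e.g. "events.add_batch").
    # Assumes Session.send_request(service, action, ...) exists in this SDK version.
    from clearml.backend_api.session import Session

    _orig_send_request = Session.send_request

    def _logged_send_request(self, service, action, *args, **kwargs):
        print(f"clearml API call: {service}.{action}")
        return _orig_send_request(self, service, action, *args, **kwargs)

    Session.send_request = _logged_send_request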

  
  
Posted one year ago

Just wish I could actually see somewhere what is being sent over the API so I could know where to focus my efforts to refine this kind of stuff 😉

  
  
Posted one year ago

Since it's literally something we have to pay for (which I signed up to do), I would love to know what drives this cost.

  
  
Posted one year ago

In the case of scalars it is easy to see (the maximum number of iterations is a good starting point).

  
  
Posted one year ago

Welp, it's been a day with the new settings, and the stats went up 140K for API calls 😢 ... going to check again tomorrow to see if any of that was spillover from yesterday.

  
  
Posted one year ago

Would love to just cap it at a fixed amount for a month for API calls.

Try the timeout configuration; I think this should solve all your issues, and it will be fairly easy to set for everyone.

  
  
Posted one year ago

In future collab community videos and sample source for YoloV8, it might be worthwhile to call that out as something folks might want to turn off unless they need it :) Like I mentioned, I had no idea it was going to do that, and I sent your servers over 1.4M API hits unintentionally :(

  
  
Posted one year ago

is the number of calls performed, not what those calls were.

Oh, yes, this is just a measure of how many API calls are sent.
It does not really matter which ones.

  
  
Posted one year ago

Literally all there is, ha ha
[screenshot]

  
  
Posted one year ago

Ya, sorry, I meant that if you needed more info on what was being run, it was in that screenshot (it showed instances/epochs/batch size, etc.). But yes, it's since been disabled.

  
  
Posted one year ago

Scary to think how common that might be. It could be an interesting way to optimize your platform: detect excessive console logging and prompt the user to confirm continued usage (or link to docs on how to disable it if they want to stop it).

  
  
Posted one year ago

SuccessfulKoala55 You are my hero!!! This is EXACTLY what I needed!!!

  
  
Posted one year ago

Hi GleamingSeagull15
Try adjusting:
None
to 30 sec
It will reduce the number of log reports (i.e. API calls)
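Presumably this refers to the worker report period in ~/clearml.conf (confirmed later in the thread as report_period_sec); a sketch of the relevant section, assuming the standard sdk.development.worker layout:

    sdk {
      development {
        worker {
          # Raise the reporting period (default is a couple of seconds) so
          # console/log events are batched and far fewer API calls are made.
          report_period_sec: 30
        }
      }
    }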

  
  
Posted one year ago

(Not sure it actually has that information)

  
  
Posted one year ago

Glad I got that sorted. I was OK being a paying customer, but getting overage charges for that console stuff would have been a bummer if we had not figured it out. Next month things should be back to normal 😉

  
  
Posted one year ago

I did notice that in the last 24 hours it dropped quite a bit, so my theory that the 140K might have included some spillover from the previous day might have been correct. Over the last 24 hours it went from 1.24M to 1.32M, so about half as much as the day before, with the same training running.

  
  
Posted one year ago

This one, right? report_period_sec in ~/clearml.conf, correct?

  
  
Posted one year ago