Answered
I seem to be missing something ... I've only got one task running to train a segmentation model on my local machine, and in a few days it's hit over 1.15M API calls. It looks like it's sending every single console output ... are there settings to control what gets logged?

I seem to be missing something ... I've only got one task running to train a segmentation model on my local machine, and in a few days it's hit over 1.15M API calls. It looks like it's sending every single console output ... are there settings to control what gets logged? I only care about the results from each epoch. I don't need each line of the console posted up (that's 99% of the API usage right there). I can't find a way to prevent this, and I can see each line in the ClearML console that's already in my terminal window (each tick in the progress bar for each epoch seems to be an API call to post that local console output to ClearML). Any tips to stop the console output from getting sent?

  
  
Posted 2 years ago

Answers 51


FYI, I did not even know to look into this until I logged in and saw that I was being throttled because I had hit my monthly limit on API calls (on my very first use of your platform), and my last dozen or so epochs were just not even logged (also a bummer). I only had that one model in training and thought there was no way I had sent over a million API requests, so I had to figure out where those were coming from, and tracked it down to that STDOUT capture, and was like ... wait, what?! Found that Console tab, which I had not even used before, saw that screenshot I posted, and was like ... well, there's your problem, ha ha

  
  
Posted 2 years ago

Since it's literally something we have to pay for (which I signed up to do), I would love to know what drives this cost.

  
  
Posted 2 years ago

I guess one last follow-up question: is there a way to cap costs? If it's running at this scale, I am not sure I can use ClearML for my purpose if I am just going to get overage charges repeatedly (which it already looks like I will be).

  
  
Posted 2 years ago

Just wish I could actually see somewhere what is being sent over the API so I could know where to focus my efforts to refine this kind of stuff 😉

  
  
Posted 2 years ago

Is there a place in ClearML that shows Platform Usage? Like, what's actually taking up the API calls?

  
  
Posted 2 years ago

Welp, it's been a day with the new settings, and stats went up 140K for API calls 😢 ... going to check again tomorrow to see if any of that was spillover from yesterday

  
  
Posted 2 years ago

Ya, sorry, I meant that if you needed more info on what was being run, it was in that screenshot (it showed instances/epochs/batch size, etc.). But yes, it's since been disabled.

  
  
Posted 2 years ago

Scary to think how common that might be. Could be an interesting way to optimize your platform: detect excessive console logging and prompt the user to confirm continued usage (or link to docs on how to disable it if they want to stop it).

  
  
Posted 2 years ago

I had no idea it was going to do that and sent your servers over 1.4M API hits unintentionally

Yeah, that is way too much. I think it relates to the frequency at which it updates the console 😞

  
  
Posted 2 years ago

Well, in my case, if I am trying to make sure I do not go over the allotted usage, it matters, as I am already hitting the ceiling and I have no idea what is pushing this volume of data.

  
  
Posted 2 years ago

Might be a feature request then, as, ya, having transparency into something we are charged for would be nice. At this point I have zero idea what is driving this usage and just want to make sure the costs for training do not bloat too much. I personally am just using ClearML as a central dashboard for a few people. I don't need it to be live data; I just need a rough overview of progress. Even if it only posted updates to ClearML once an hour, that is honestly fine.

  
  
Posted 2 years ago

Hmmm, this is just a personal project; honestly, I was just hoping this would let me take the results of each epoch and put them in a central dashboard. Having this generate 1M+ API calls while only being about 1/4 of the way through training is a bit much. Current pricing is $1/100K API calls on the PRO tier, which I am on ... so it would be like another $50 just in API calls at this pace 😞 Would love to be able to cap API calls at a fixed amount per month.
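For reference, a back-of-the-envelope check of that estimate, using only the figures quoted in this thread (a naive linear extrapolation, not actual billing math):

    # Rough projection from the numbers above ($1 per 100K calls, PRO tier)
    calls_so_far = 1_150_000             # ~1.15M calls at ~1/4 of training
    projected_total = calls_so_far * 4   # naive linear extrapolation
    print(projected_total / 100_000 * 1.00)  # ~46.0 -> roughly the $50 figure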

  
  
Posted 2 years ago

Hmm, if this is the case, you can add some prints in here:
None
The service/action will tell you what you are sending.
wdyt?
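For anyone trying this, a rough sketch of what those prints could look like. This assumes Session.send_request in clearml.backend_api is the low-level entry point for outgoing API calls; the exact hook point and signature may differ between SDK versions, so verify against the source the link above pointed to:

    from collections import Counter
    from clearml.backend_api import Session

    # Hypothetical instrumentation: tally outgoing API calls per service/action.
    _counts = Counter()
    _orig_send_request = Session.send_request

    def _counting_send_request(self, service, action, *args, **kwargs):
        _counts[(service, action)] += 1
        print(f"API call #{sum(_counts.values())}: {service}.{action}")
        return _orig_send_request(self, service, action, *args, **kwargs)

    Session.send_request = _counting_send_request

Running a short training session with this patch applied should make it obvious which service/action pair dominates (in this thread's case, presumably the console/event reporting).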

  
  
Posted 2 years ago

Hi @<1572395184505753600:profile|GleamingSeagull15>
Try adjusting:
None
to 30 sec
It will reduce the number of log reports (i.e. API calls)
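The link above did not survive the export, but assuming it points at the worker reporting interval in clearml.conf (the "2 to 30 sec" reply below suggests a 2-second default, which matches report_period_sec), the change would look something like this; treat the exact key as an assumption and verify against the SDK reference for your version:

    # ~/clearml.conf (assumed setting name; verify for your SDK version)
    sdk {
        development {
            worker {
                report_period_sec: 30  # default ~2; larger values batch more data per API call
            }
        }
    }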

  
  
Posted 2 years ago

Thanks, will do. Heck, for my use case, I only need an update like once every 10 minutes.

  
  
Posted 2 years ago

Well, from 2 to 30 sec is a factor of 15; I think this is a good start 🙂

  
  
Posted 2 years ago

Came to ClearML since it had a slick dashboard and showed me the info that mattered. Loved that I could share the results of each epoch so we could make sure things were headed in the correct direction.

  
  
Posted 2 years ago

Correct

  
  
Posted 2 years ago

Welp, it's been a day with the new settings, and stats went up 140K for API calls ... going to check again tomorrow to see if any of that was spillover from yesterday

140K calls a day: how often are you sending scalars? How long is it running? How many experiments are running?
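For context, a quick sanity check on that rate (rough arithmetic only):

    calls_per_day = 140_000
    print(calls_per_day / 86_400)  # ~1.6 sustained API calls per second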

  
  
Posted 2 years ago

Glad I got that sorted. I was OK with being a paying customer, but getting overage charges for that console stuff would have been a bummer if we had not figured it out. Next month things should be back to normal 😉

  
  
Posted 2 years ago

In future collab/community videos and sample source for YoloV8, it might be worthwhile to call that out as something folks may want to turn off unless they need it :) Like I mentioned, I had no idea it was going to do that, and I sent your servers over 1.4M API hits unintentionally :(

  
  
Posted 2 years ago

@<1572395184505753600:profile|GleamingSeagull15> see "Can I control what ClearML automatically logs?" in None (specifically the auto_connect_frameworks argument to Task.init())
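A minimal sketch of what that looks like in code. auto_connect_frameworks is the argument named above; auto_connect_streams (which controls stdout/stderr/logging capture) is my assumption for the console side, and the project/task names are hypothetical; check the Task.init reference for your SDK version:

    from clearml import Task

    task = Task.init(
        project_name="segmentation",   # hypothetical names
        task_name="yolov8-train",
        # Named above: selectively disable framework auto-logging.
        auto_connect_frameworks={"matplotlib": False, "tensorboard": True},
        # Assumption: controls console capture (stdout/stderr/python logging).
        auto_connect_streams={"stdout": False, "stderr": True, "logging": True},
    )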

  
  
Posted 2 years ago

@<1523701087100473344:profile|SuccessfulKoala55> You are my hero !!! This is EXACTLY what I needed !!!

  
  
Posted 2 years ago

In the case of scalars it is easy to see (the maximum number of iterations is a good starting point).

  
  
Posted 2 years ago

I think we're good now :) Appreciate the help !!!

  
  
Posted 2 years ago

It'd be great if it just posted to ClearML after each epoch completes and the CSV with the results gets updated. I only care about using the dashboard to track completed progress. I can use my local computer's terminal window to monitor the current epoch's training; no need to send that to ClearML every second ;) Results once an hour or so, after each epoch completes, are fine :)
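One way to approximate that, sketched under the assumption that console capture is disabled as discussed above and that per-epoch results are reported manually; report_scalar is a documented Logger call, while the training helper here is a hypothetical stand-in:

    from clearml import Task

    def train_one_epoch(epoch):
        # Hypothetical stand-in for the real training step; returns epoch metrics.
        return {"loss": 1.0 / (epoch + 1), "mAP": 0.5 + 0.01 * epoch}

    task = Task.init(project_name="segmentation", task_name="yolov8-train",
                     auto_connect_streams=False)  # assumption: disables console capture
    logger = task.get_logger()

    for epoch in range(10):
        metrics = train_one_epoch(epoch)
        # A handful of API calls per epoch instead of one per console line.
        for name, value in metrics.items():
            logger.report_scalar(title="train", series=name, value=value, iteration=epoch)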

  
  
Posted 2 years ago

So, I might be in the minority here, but it seems like capturing stdout and sending it over to ClearML via API should be disabled by default. I get maybe capturing stderr, but stdout? In a training scenario, that's MILLIONS of API calls just in progress-bar updates, right? It might actually be better for the ClearML servers in general to make the user turn that on if they want it; otherwise we're just blasting your servers. In my case, I did not even know it was sending that over until I got into digging into where these API calls were coming from and saw the CONSOLE tab in ClearML that had every single line of stdout captured.
(screenshot: ClearML Console tab showing every line of stdout captured)

  
  
Posted 2 years ago

FYI, I found log_stdout in that same settings section; its default was true, so I set it to false so it would not log all stdout & stderr.
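For reference, a sketch of that setting, assuming it lives under the same worker section of clearml.conf:

    sdk {
        development {
            worker {
                log_stdout: false  # stop shipping stdout/stderr lines to the server
            }
        }
    }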

  
  
Posted 2 years ago

One single experiment, using the code above. I have no idea how many scalars I am sending, since as far as I can tell I am not setting anything specific to define what I send over to ClearML; this is literally my first time using YoloV8 or ClearML. Just using the super basic Python to run it.

  
  
Posted 2 years ago

(Not sure it actually has that information)

  
  
Posted 2 years ago