
I seem to be missing something ... I've only got one task running to train a segmentation model on my local machine, and in a few days it's hit over 1.15M API calls. It looks like it's sending every single console output ... are there settings to control what gets logged? I only care about the results from each epoch; I don't need each line of the console posted up (that's 99% of the API usage right there). I can't find a way to prevent this, and I can see each line in the ClearML console that's already in my terminal window (each tick in the progress bar for each epoch seems to be an API call to post that local console output to ClearML). Any tips to stop the console output from getting sent?

  
  
Posted 2 years ago

Answers 51


I did notice that in the last 24 hours it dropped quite a bit, so my theory that the 140K might have included some spillover from the previous day might have been correct. The last 24 hours went from 1.24M to 1.32M, so about 80K, roughly half as much as the day before, with the same training running.

  
  
Posted 2 years ago

Well, in my case it matters, since I am trying to make sure I do not go over the allotted usage; I am already hitting the ceiling and I have no idea what is pushing this volume of data.

  
  
Posted 2 years ago

FYI, I did not even know to look into this until I logged in and saw that I was being throttled because I had hit my monthly limit on API calls (on my very first use of your platform), and my last dozen or so epochs were just not logged (also a bummer). I only had that one model in training and figured there was no way I had sent over a million API requests, so I had to track down where those were coming from. It turned out to be the STDOUT logging, and I was like ... wait, what?! Found the Console tab, which I had not even used before, saw that screenshot I posted, and was like ... well, there's your problem, ha ha.

  
  
Posted 2 years ago

Welp, it's been a day with the new settings, and the stats went up by 140K API calls

... going to check again tomorrow to see if any of that was spillover from yesterday

140K calls a day? How often are you sending scalars? How long is it running? How many experiments are running?
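For reference, "sending scalars" means explicit Logger calls, something like the sketch below; reporting once per metric per epoch is only a handful of API requests. The project, task, and metric names here are placeholders, and the training step is a stub:

    from clearml import Task

    # Placeholder project/task names.
    task = Task.init(project_name="segmentation", task_name="train")
    logger = task.get_logger()

    def train_one_epoch(epoch):
        # Stub standing in for the real training loop; returns dummy metrics.
        return 0.5 / (epoch + 1), 0.8

    for epoch in range(10):
        train_loss, val_iou = train_one_epoch(epoch)
        # One report per metric per epoch, i.e. a few calls per hour,
        # nothing like the per-console-line traffic described above.
        logger.report_scalar(title="loss", series="train", value=train_loss, iteration=epoch)
        logger.report_scalar(title="IoU", series="val", value=val_iou, iteration=epoch)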

  
  
Posted 2 years ago

Literally all there is, ha ha
[screenshot]

  
  
Posted 2 years ago

Hmm, if this is the case, you can add some prints in here: [link]
The service/action will tell you what you are sending.
wdyt?

  
  
Posted 2 years ago

Just wish I could actually see somewhere what is being sent over the API, so I could know where to focus my efforts to refine this kind of stuff 😉

  
  
Posted 2 years ago

Maybe ClearML is using TensorBoard in ways that I can fine-tune? I saw there was a manual way to send data over if you were not using TensorBoard, but the videos I saw from your team used this solution when demoing YOLOv8 on YouTube (there were a few collab videos your team did with theirs, so I just followed their instructions). But my gut is telling me that might be the issue for the remaining data being sent over that I have no insight into.

  
  
Posted 2 years ago

@<1572395184505753600:profile|GleamingSeagull15> see "Can I control what ClearML automatically logs?" [link] (specifically the auto_connect_frameworks argument to Task.init())
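A minimal sketch of what that could look like if you want to turn off TensorBoard auto-logging; the auto_connect_streams part for console capture is my reading of the docs, so double-check it, and all names below are placeholders:

    from clearml import Task

    task = Task.init(
        project_name="segmentation",   # placeholder
        task_name="train",             # placeholder
        # Disable the framework auto-logging you don't want (per the FAQ entry above).
        auto_connect_frameworks={
            "tensorboard": False,
            "matplotlib": True,
            "pytorch": True,
        },
        # Assumption: this controls stdout/stderr capture, i.e. the console
        # lines that were driving the API call count in this thread.
        auto_connect_streams={
            "stdout": False,
            "stderr": True,
            "logging": True,
        },
    )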

  
  
Posted 2 years ago

Since it's literally something we have to pay for (which I signed up to do), I would love to know what drives this cost.

  
  
Posted 2 years ago

It was at 1.1M when I shut it down yesterday, and today it's at 1.24M

  
  
Posted 2 years ago

@<1523701087100473344:profile|SuccessfulKoala55> You are my hero !!! This is EXACTLY what I needed !!!

  
  
Posted 2 years ago

(under the [link] page)

  
  
Posted 2 years ago

Each epoch runs about 55 minutes, and that screenshot I posted earlier kind of shows the logs for the rest of the info being output, if you wanted to check that out: [link]

  
  
Posted 2 years ago

(Not sure it actually has that information)

  
  
Posted 2 years ago

Is there a place in ClearML that shows Platform Usage? Like, what's actually taking up the API calls?

  
  
Posted 2 years ago

Scary to think how common that might be. Could be an interesting way to optimize your platform: detect excessive console logging and prompt the user to confirm continued usage (or link to docs on how to disable it if they want to stop it).

  
  
Posted 2 years ago

If you do not have a lot of workers, then I would guess it's the console outputs.

  
  
Posted 2 years ago

But I will try to reduce the number of log reports first.

  
  
Posted 2 years ago

I appreciate your help @<1523701205467926528:profile|AgitatedDove14> 🙂

  
  
Posted 2 years ago

Well, from 2 to 30 sec is a factor of 15, I think this is a good start 🙂
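For anyone else hitting this: the 2 to 30 sec change discussed here refers to the worker reporting period in clearml.conf, roughly as sketched below. Treat the exact keys as an assumption and verify them against your own config file:

    sdk {
      development {
        worker {
          # Default is 2 seconds; raising it batches more console/metric
          # reports into each API call.
          report_period_sec: 30

          # Assumption: setting this to false should stop stdout capture
          # entirely; check the configuration reference before relying on it.
          # log_stdout: false
        }
      }
    }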

  
  
Posted 2 years ago