Glad I got that sorted. I was OK being a paying customer, but getting overage charges for that console stuff would have been a bummer if we had not figured it out. Next month things should be back to normal 😉
I had no idea it was going to do that and sent your servers over 1.4M API hits unintentionally
Yeah, that is way too much. I think it relates to the frequency at which it updates the console 😞
Scary to think how common that might be. Could be an interesting way to optimize your platform: detect excessive console logging and prompt the user to confirm continued usage (or link to docs on how to disable it if they want to stop it).
In future collab community videos and sample source for YOLOv8, it might be worthwhile to call that out as something folks might want to turn off unless they need it :) Like I mentioned, I had no idea it was going to do that and sent your servers over 1.4M API hits unintentionally :(
The math checks out: if I was generating around 140K a day and this had been running for 9 days, that's the ~1.2M it was at when I caught it. So I think the day after I shut it down I was still seeing the previous day's numbers being added, and after another 24 hours it barely changed. So yeah, it was 100% the stdout logging.
I think we're good now :) Appreciate the help!!!
Actually, looking at the counts today, they've barely changed. So I think this actually fixed it, and it was just that the counts are only updated daily, so I needed to get 48 hours out from when I made the change to see clean results and be sure there was no spillover from previous days.
Ya, sorry, I meant that if you needed more info on what was being run, it was in that screenshot (it showed instances/epochs/batch size, etc.). But yes, it's since been disabled.
Each epoch runs about 55 minutes, and that screenshot I posted earlier kind of shows the logs for the rest of the info being output, if you wanted to check that out.
I thought you disabled the stdout log, no?
Maybe ClearML is using tensorboard in ways that I can fine tune?
You can open your TB and see: every report there is logged into ClearML
Maybe ClearML is using tensorboard in ways that I can fine tune? I saw there was a manual way to send over data if you were not using tensorboard, but the videos I saw from your team used this solution when demoing YOLOv8 on YouTube (there were a few collab videos your team did with theirs, so I just followed their instructions). But my gut is telling me that might be the issue for the remaining data being sent over that I have no insight into.
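For anyone skimming this thread later, here is a minimal sketch of the two paths as I understand them (the project/task names and values below are just placeholders, not my actual setup): once Task.init() runs, ClearML hooks TensorBoard automatically, and Logger.report_scalar is the manual route if you are not using TB.

```python
from clearml import Task, Logger

# Automatic path: after Task.init(), ClearML hooks TensorBoard, so every
# scalar/plot/image YOLOv8 writes through TB is also reported to the server.
task = Task.init(project_name="yolov8-seg", task_name="open-images-subset")  # placeholder names

# Manual path (only needed when TensorBoard is not in the picture):
Logger.current_logger().report_scalar(
    title="train", series="loss", value=0.42, iteration=1  # placeholder values
)
```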
I am running this on a 3090 GPU locally, and have just been letting it run for about two weeks now, I think. Just have the one GPU, ha ha. It's at epoch 368 out of the 1,000 I have it set to cap out on (if it does not hit the default YOLO "patience" limit of 50 before then and self-terminate).
I did notice that over the last 24 hours the rate dropped quite a bit, so my theory that the 140K might have included some spillover from the previous day might have been correct. The last 24 hours went from 1.24M to 1.32M, so about half as much as the day before, with the same training running.
My training is on roughly 50 classes as a subset of the Open Images Dataset for Segmentation
One single experiment using the code above. I have no idea how many scalars I am sending since, as far as I can tell, I am not setting anything specific to define what I am sending over to ClearML. It's literally my first time using YOLOv8 or ClearML; I'm just using the super basic Python to run it.
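For context, this is roughly what I mean by "super basic" (a sketch from memory; the dataset yaml, model size, and batch are placeholders, but the epoch cap and patience are the real values I mentioned):

```python
from ultralytics import YOLO

# As far as I can tell, Ultralytics picks up ClearML automatically when the
# clearml package is installed, so there is no explicit ClearML code here.
model = YOLO("yolov8m-seg.pt")  # placeholder model size

model.train(
    data="open-images-50-class-seg.yaml",  # placeholder dataset yaml
    epochs=1000,    # the cap I mentioned
    patience=50,    # default YOLO early-stop patience
    batch=16,       # placeholder
    device=0,       # the single 3090
)
```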
Welp, it's been a day with the new settings, and stats went up 140K for API calls 😢 ... going to check again tomorrow to see if any of that was spillover from yesterday.
140K calls a day: how often are you sending scalars? How long is it running? How many experiments are running?
It was at 1.1M when I shut it down yesterday, and today it's at 1.24M
FYI, found log_stdout in that same config, and the default for that was true, so I set it to false so it would not log all stdout & stderr.
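For anyone else hitting this, the relevant block of my ~/clearml.conf now looks roughly like this (layout from memory, so double-check the section names against your own file; the report period value is just an example, not something I'm prescribing):

```
sdk {
    development {
        worker {
            # how often (in seconds) reports are flushed to the server
            # (bumped up from the default to reduce report frequency; example value)
            report_period_sec: 30
            # stop streaming all stdout/stderr to the server as console output
            log_stdout: false
        }
    }
}
```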
Came to ClearML since it had a slick dashboard and showed me the info that mattered. Loved that I could share the results of each epoch so we could make sure things were headed in the correct direction.
I appreciate your help @<1523701205467926528:profile|AgitatedDove14> 🙂
This one, right? report_period_sec in ~/clearml.conf, correct?
Would love to just cap it at a fixed amount for a month for API calls.
Try the timeout configuration; I think this should solve all your issues, and it will be fairly easy to set for everyone
Just wish I could actually see somewhere what is being sent over the API so I could know where to focus my efforts to refine this kind of stuff 😉
But I will try reducing the number of log reports first.
Hmmm, this is just a personal project; honestly I was just hoping this would let me take the results of each epoch and put them in a central dashboard. Having this generate 1M+ API calls while only being like 1/4 of the way through training is a bit much. Current pricing is $1/100K API calls at the Pro tier, which I am on ... so it would be like another $50 just in API calls at this pace 😞 Would love to just cap it at a fixed amount for a month for API calls.
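Back-of-the-envelope on that $50, just using my own rough figures from above (nothing new here):

```python
# rough projection from the numbers in this thread
calls_so_far = 1_250_000            # ~1.25M API calls at roughly 1/4 of the way through
projected_total = calls_so_far * 4  # ~5M calls for the whole run at this pace
pro_overage_per_call = 1 / 100_000  # $1 per 100K API calls on the Pro tier
print(f"~${projected_total * pro_overage_per_call:.0f} in API call charges")  # ~$50
```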
I guess my last follow-up question: is there a way to cap costs?
Scale tier? (I know it is not per usage, but it is probably more than $15 per user 🙂)
Hmm, if this is the case, you can add some prints in here:
the service/action will tell you what you are sending
wdyt?
I guess my last follow-up question: is there a way to cap costs? Like, if this is running at this scale, I am not sure I can use ClearML for my purposes if I am just going to get overage charges repeatedly (which it already looks like I will be doing).