might be a feature request then, as ya, having transparency into something we are charged for would be nice. At this point, I have zero idea what is driving this usage and just want to make sure the costs for training do not bloat too much. I personally am just using ClearML as a central dashboard for a few people. I don't need it to be live data, I just need a rough overview of progress. Even if it only posted updates to ClearML once an hour, that is honestly fine.
In the case of scalars it is easy to see (the maximum number of iterations is a good starting point)
If you do not have a lot of workers, then I would guess it's the console outputs
Since it's literally something we have to pay for ( which I signed up to do ) I would love to know what drives this cost
well, in my case, if I am trying to make sure I do not go over the allotted usage, it matters, as I am already hitting the ceiling and I have no idea what is pushing this volume of data
(Not sure it actually has that information)
is number of calls performed, not what those calls were.
oh, yes this is just a measure of how many API calls are sent.
It does not really matter which ones
I would love to be able to fine-tune this as needed, but in my profile I only see a Billing & Usage page, and it states at the top that "Usage data is updated once every day" ... and even then, all it shows under "Platform Usage" is the number of calls performed, not what those calls were.
I'm not sure on the frequency it updates though
Under your profile you should be able to see it
well, from 2 to 30 sec is a factor of 15, I think this is a good start 🙂
Is there a place in ClearML that shows Platform Usage? Like, what's actually taking up the API calls?
Thanks, will do. Heck, for my use case, I only need like once every 10 minutes.
Hi @<1572395184505753600:profile|GleamingSeagull15>
Try adjusting:
None
to 30 sec
It will reduce the number of log reports (i.e. API calls)
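Something like this in clearml.conf (just a sketch; assuming the setting behind that link is sdk.development.worker.report_period_sec, whose default I believe is 2 sec):
```
sdk {
    development {
        worker {
            # how often buffered console/metric reports are flushed to the server
            # (assumed default: 2 sec; raising it means fewer API calls)
            report_period_sec: 30
        }
    }
}
```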
FYI, I did not even know to look into this until I logged in and saw that I was being throttled because I had hit my monthly limit with API calls (on my very first use of your platform), and my last dozen or so epochs were just not even logged (also a bummer). I only had that one model in training, and thought there was no way I sent over a million API requests, so I had to figure out where those were coming from, and tracked it down to that STDOUT, and was like ... wait, what?!?! Found that console tab, which I did not even use before, and saw that screenshot I posted, and was like ... well, there's your problem, ha ha
So, I might be in the minority here, but it seems like capturing stdout and sending it over to clearml via API should be disabled by default. Like, I get maybe capturing stderr, but stdout? In a training scenario, that's MILLIONS of API calls just in progress bar indicators, right? It might actually be better for the ClearML servers in general to make the user turn that on if they want it, otherwise we're just blasting your servers. In my case, I did not even know it was sending that over until I got into digging where these API calls were coming from, and saw the CONSOLE tab in clearml that had every single line of stdout captured.
@<1523701087100473344:profile|SuccessfulKoala55> You are my hero !!! This is EXACTLY what I needed !!!
@<1572395184505753600:profile|GleamingSeagull15> see "Can I control what ClearML automatically logs?" in None (specifically the auto_connect_frameworks argument to Task.init())
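For example (a sketch only; the exact keys depend on which frameworks you use, and note that stdout/stderr capture is controlled by the separate auto_connect_streams argument, not auto_connect_frameworks):
```python
from clearml import Task

# sketch: cherry-pick what ClearML auto-logs when the task is initialized
task = Task.init(
    project_name="demo",          # hypothetical names, just for illustration
    task_name="training_run",
    auto_connect_frameworks={
        "pytorch": True,          # keep model checkpoint logging
        "tensorboard": True,      # keep TensorBoard scalars/plots
        "matplotlib": False,      # skip figure capture
    },
    # stdout/stderr/logging capture is a separate switch
    auto_connect_streams={"stdout": False, "stderr": True, "logging": True},
)
```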
It'd be great if it just posted to clearml after each epoch is completed and the CSV with the results gets updated. I only care about using the dashboard to track completed progress. I can use my local computer's terminal window to monitor the current epoch's training. No need to send that to clearml every second ;) Results once an hour or so, after each epoch completes, is fine :)
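One way to get close to that with the current SDK (a sketch under assumptions: manual per-epoch scalar reporting with stdout capture turned off; the project/task names and training function are placeholders):
```python
from clearml import Task

# sketch: report to ClearML only once per epoch, with stdout capture disabled
task = Task.init(
    project_name="demo",              # hypothetical project/task names
    task_name="per_epoch_reporting",
    auto_connect_streams=False,       # don't ship every stdout line to the server
)
logger = task.get_logger()

def train_one_epoch(epoch):
    # stand-in for the real training loop; returns a fake loss for illustration
    return 1.0 / (epoch + 1)

for epoch in range(10):
    loss = train_one_epoch(epoch)
    # one scalar report per epoch instead of a stream of console lines
    logger.report_scalar(title="loss", series="train", value=loss, iteration=epoch)
```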