I made sure to delete them from the archived tab
Hey @<1644147961996775424:profile|HurtStarfish47>, you can use S3 for debug images specifically, see here: https://clear.ml/docs/latest/docs/references/sdk/logger/#set_default_upload_destination but the metrics (everything you report, like scalars, single values, histograms, and other plots) are stored in the backend. The fact that you are almost running out of storage could be because of either too many experiments (in which case consider cleaning up your projects) or excessive reporting, in which case check that your code is not overly verbose.
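A minimal sketch of what that could look like (project, task, and bucket names here are just placeholders):
```python
from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="debug-images-to-s3")  # placeholder names

# From this point on, debug images are uploaded to the S3 bucket
# instead of the ClearML file server; only the links are kept in the backend
Logger.current_logger().set_default_upload_destination("s3://my-bucket/debug-images")

Logger.current_logger().report_image(
    title="debug",
    series="sample",
    iteration=0,
    local_path="sample.png",  # placeholder local image file
)
```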
And the quota is not cumulative, otherwise we’d run out of storage with the oldest accounts 😃
No! The way I delete those is like so:
Experiment view -> Reset (one or more) experiments -> experiment is now in draft
Archive experiment
Open archive -> Delete
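(For reference, I believe roughly the same cleanup could be scripted with the SDK instead of clicking through the UI; a rough sketch, with a placeholder project name and assuming the archived filter works like this:)
```python
from clearml import Task

# fetch archived experiments from a (placeholder) project and delete them,
# including their uploaded artifacts and models
for task in Task.get_tasks(
    project_name="my-project",
    task_filter={"system_tags": ["archived"]},  # assumption: filter archived tasks this way
):
    task.delete(delete_artifacts_and_models=True, raise_on_error=False)
```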
I get no feedback at all from the operation, but I can see the experiments are no longer available on clearml
Thanks, that is exactly the kind of info I was looking for! If debug images count toward the metrics quota, that would explain how we reached the limit so quickly.
@<1537605940121964544:profile|EnthusiasticShrimp49> A follow-up question about metrics - my PyTorch (Lightning) experiments are logging to TensorBoard, and ClearML is automatically picking this up and uploading scalars and debug images. If I use the set_default_upload_destination you mentioned, would that still properly use my URI even though I am not calling Logger.current_logger().report_image directly?
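Concretely, my setup is roughly this (names are placeholders), with no explicit report_image call anywhere:
```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
from clearml import Task, Logger

task = Task.init(project_name="my-project", task_name="lightning-run")  # placeholder names

# the only ClearML-specific line I would add; scalars and debug images
# are otherwise picked up automatically from the TensorBoard logs
Logger.current_logger().set_default_upload_destination("s3://my-bucket/debug-images")

trainer = pl.Trainer(max_epochs=10, logger=TensorBoardLogger("lightning_logs"))
# trainer.fit(model, datamodule)  # model and datamodule defined elsewhere
```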
Also, I reset then deleted ~80% of the experiments that I had 2 days ago, and my metrics quota hasn't gone down by a single MB yet. Is it supposed to take longer to update?
About the first question - yes, it will use the destination URI you set.
About the second point - did you archive or properly delete the experiments?
@<1644147961996775424:profile|HurtStarfish47> did you see any result from the delete operation? Did you get an error or anything similar?