DeliciousStarfish67 , are you running your ClearML server on the aws instance?
DeliciousStarfish67 the math is simple - if you want the experiments' outputs (in this case, specifically the debug images, uploaded artifacts, and models), they simply take up storage space (as png/jpg images and whatever files you uploaded as artifacts or models). If you only want the metrics for each experiment, those are stored in a different location and so will not be affected if you delete fileserver data
I see. I'm guessing you have pretty extensive use in the form of artifacts/debug samples. You can lower the storage usage by deleting some experiments/models through the UI. That should free up some space 🙂
You can always delete the data. Each folder in /opt/clearml/data/fileserver/ represents the stored outputs of an experiment. If you no longer need the files, you can delete them
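As a hedged sketch of that cleanup on the server host (the path comes from this thread; the project/experiment folder names below are placeholders - confirm against the experiment in the UI before removing anything):

```shell
# Show the largest stored-output folders under the fileserver data dir
# (path taken from this thread; adjust if your install differs).
DATA_DIR="/opt/clearml/data/fileserver"
du -sh "$DATA_DIR"/* 2>/dev/null | sort -rh | head -10

# Once you've confirmed in the UI that an experiment is no longer needed,
# remove its folder (placeholder names below).
rm -rf "$DATA_DIR/example-project/example-experiment-id"
```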
So you're saying it's expected, and if I can't delete this data the only option is to keep increasing the volume size?
especially /opt/clearml/data/fileserver, which is taking 102GB
Where is most of the data concentrated?
It is 112GB
What you're looking for is this:
Also configure your ~/clearml.conf to point to your s3 bucket as well 🙂
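A minimal sketch of that clearml.conf change, assuming an S3 bucket of your own (bucket name, region, and credentials below are placeholders):

```
# ~/clearml.conf - send experiment outputs (artifacts, models, debug
# samples) to S3 instead of the server's fileserver
sdk {
    development {
        # placeholder bucket/prefix - replace with your own
        default_output_uri: "s3://my-clearml-bucket/outputs"
    }
    aws {
        s3 {
            # placeholder region and credentials
            region: "us-east-1"
            key: ""
            secret: ""
        }
    }
}
```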
Certainly - you do that directly on the clients (SDK, agents)
Thank you guys.
SuccessfulKoala55 is there any way to configure clearml server to save debug images and artifacts to s3?
Any docs you can direct me to?
/opt/clearml/data is taking 112GB
Can you connect directly to the instance? If so, please check how large /opt/clearml is on the machine and then see the folder distribution
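For that size check, something like the following (standard `du` usage; paths from this thread) shows the total and the per-service breakdown:

```shell
# Total size of the ClearML data directory on the server host,
# then a per-folder breakdown sorted largest-first.
du -sh /opt/clearml/data
du -sh /opt/clearml/data/* | sort -rh
```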
Thank you guys!