In theory it should not; in practice you could run out of space while running the experiment itself...
You can always clean everything up from time to time (maybe worth a flag?)
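For reference, "cleaning everything up" can be as simple as emptying the local cache directory. A minimal sketch, assuming the default cache location `~/.clearml/cache` (check `sdk.storage.cache.default_base_dir` in your `clearml.conf` if you have changed it):

```python
import shutil
from pathlib import Path

def clear_clearml_cache(cache_dir: str = "~/.clearml/cache") -> None:
    """Delete every entry under the local cache directory.

    The default path is an assumption; verify it against the
    ``sdk.storage.cache.default_base_dir`` setting in your ``clearml.conf``.
    """
    root = Path(cache_dir).expanduser()
    if not root.is_dir():
        return
    for entry in root.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()
```

Running this between experiments (or from a cron job) sidesteps the overflow question entirely, at the cost of re-downloading datasets afterwards.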
Yea, is there a guarantee that the clearml-agent will not crash because it did not clean the cache in time?
Can clearml-agent currently detect this?
Hmm, you mean will the agent clean itself up?
I mean, could my hard drive not become full at some point? Can clearml-agent currently detect this?
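As a stopgap, you can guard against a full disk yourself by checking free space before kicking off heavy downloads. A minimal sketch using only the standard library (the 10 GB threshold is an arbitrary example, not a ClearML default):

```python
import shutil

def has_free_space(path: str, min_free_bytes: int) -> bool:
    """Return True if the filesystem holding ``path`` has at least
    ``min_free_bytes`` available."""
    return shutil.disk_usage(path).free >= min_free_bytes

# Example: refuse to start if less than 10 GB is free on the cache volume.
# if not has_free_space("/home", 10 * 1024**3):
#     raise RuntimeError("not enough free disk space for the experiment cache")
```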
This is done in the background while accessing the cache, so it should not have any slowdown effect
Thanks for the answer. So currently the cleanup is done based on the number of experiments that are cached? If I have a few big experiments, could this make my agent's cache overflow?
sdk.storage.cache.size.cleanup_margin_percent
Hi ReassuredTiger98
This is actually future-proofing the cache mechanism, allowing it to be "smarter", i.e. clean based on cache folder size instead of the number of cache folder entries; this is currently not available
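To make the size-based idea concrete: the sketch below prunes least-recently-accessed files until the folder drops below a byte budget minus a margin percentage, which is one plausible reading of what a `cleanup_margin_percent`-style setting could control (the margin keeps cleanup from retriggering on the very next write). This is an illustration only, not ClearML's implementation:

```python
from pathlib import Path

def prune_by_size(cache_dir: str, max_bytes: int, margin_percent: float = 5.0) -> None:
    """Delete least-recently-accessed files until the cache folder holds
    at most ``max_bytes`` reduced by ``margin_percent``."""
    target = max_bytes * (1 - margin_percent / 100.0)
    files = sorted(
        (p for p in Path(cache_dir).rglob("*") if p.is_file()),
        key=lambda p: p.stat().st_atime,  # oldest access time first
    )
    total = sum(p.stat().st_size for p in files)
    for p in files:
        if total <= target:
            break
        total -= p.stat().st_size
        p.unlink()
```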
Are the sdk.storage.cache parameters for the agent?
For both local execution and with an agent
When are datasets deleted if I run local execution?
When you hit the cache entry limit (100 if I recall). This can also be set in the config file or at runtime from code.
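For intuition, an entry-count limit like the one described above can be sketched as an LRU-style policy: once the cache holds more than the limit, the files with the oldest access times are dropped first. Illustrative only, not ClearML's actual code (the limit of 100 matches the default mentioned above):

```python
from pathlib import Path

def prune_by_entry_count(cache_dir: str, max_entries: int = 100) -> None:
    """Keep at most ``max_entries`` files in the cache, removing the ones
    with the oldest access time first."""
    files = sorted(
        (p for p in Path(cache_dir).iterdir() if p.is_file()),
        key=lambda p: p.stat().st_atime,  # oldest access time first
    )
    for p in files[: max(0, len(files) - max_entries)]:
        p.unlink()
```

Note that this counts entries, not bytes, which is exactly why a few very large cached datasets can fill the disk before the limit is ever reached.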
Related to this: How does the local cache/agent cache work? Are the sdk.storage.cache parameters for the agent? When are datasets deleted from cache? When are datasets deleted if I run local execution?