EnviousStarfish54 VivaciousPenguin66 So for random seeds, we have a way to save them, so this should be possible and reproducible.
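If it helps, here's a minimal sketch of one way to make the seed explicit and overridable from the UI (project/task names and the seed value are just placeholders); as far as I recall ClearML also fixes framework seeds on Task.init, this just surfaces the seed as a parameter:
```python
import random

import numpy as np
from clearml import Task

# Initialize the ClearML task (project/task names are placeholders)
task = Task.init(project_name="examples", task_name="seed logging sketch")

# Connect the seed as a hyperparameter so a cloned run can override it from the UI
params = {"random_seed": 42}
params = task.connect(params)

# Apply the (possibly overridden) seed to the libraries you use
random.seed(params["random_seed"])
np.random.seed(params["random_seed"])
```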
As for execution progress, I totally agree. We do have our own pipelining solution, but I see it's very common to use us only for experiment tracking and other tools for pipelining.
Not trying to convert anyone, but may I ask why you chose to use another tool and not the built-in pipelining feature in ClearML? Anything missing? Did you already build the infra and didn't want to convert? Or something else?
A bit of advertisement here (I don't feel bad, as it IS the ClearML slack 😄 ): we tried to design pipelines so that DS could write them themselves and then execute them with agents (which should abstract away the DevOps setup). I'd like to know if and where we failed in that mission 😮 .
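For reference, this is roughly what I mean by "DS write it themselves" with the decorator-style pipelines; just a rough sketch, the step logic, names, and URL below are made up:
```python
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["data"], cache=True)
def load_data(source_url: str):
    # Each component runs as its own ClearML task (locally or on an agent)
    import pandas as pd
    return pd.read_csv(source_url)


@PipelineDecorator.component(return_values=["accuracy"])
def train(data):
    # Placeholder training step; swap in real training code
    accuracy = 0.9
    return accuracy


@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="0.1")
def run_pipeline(source_url: str = "https://example.com/data.csv"):
    data = load_data(source_url)
    accuracy = train(data)
    print(f"accuracy={accuracy}")


if __name__ == "__main__":
    # Debug the whole pipeline locally; remove this call to enqueue steps to agents
    PipelineDecorator.run_locally()
    run_pipeline()
```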
As for cleanup, wouldn't a stage in the pipeline that removes unnecessary artifacts at the end of the run make sense? Or some service that runs once a week and removes the associated data from storage for anything older than X days?
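Something along the lines of the cleanup service example, sketched from memory (project name, retention window, and the filter fields are illustrative, so double-check against your setup):
```python
from datetime import datetime, timedelta

from clearml import Task

# Rough sketch of a weekly cleanup job: find completed tasks older than X days
# and delete them together with their artifacts/models from storage.
DAYS_TO_KEEP = 30
threshold = datetime.utcnow() - timedelta(days=DAYS_TO_KEEP)

old_tasks = Task.get_tasks(
    project_name="examples",
    task_filter={
        "status": ["completed"],
        # only tasks whose status last changed before the threshold
        "status_changed": ["<{}".format(threshold)],
    },
)

for task in old_tasks:
    # delete() can also remove the task's artifacts and output models from storage
    task.delete(
        delete_artifacts_and_models=True,
        skip_models_used_by_other_tasks=True,
        raise_on_error=False,
    )
```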