What version of ClearML server are you using?
I just checked the clearml.conf and I'm not specifying any Python version for the agents.
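For reference, this is the setting I checked (a fragment of the agent section in clearml.conf; an empty python_binary means the agent just uses its default interpreter):

```
agent {
    # Python interpreter used when the agent builds the task's virtualenv.
    # Left empty (the default), so no specific Python version is forced.
    python_binary: ""
}
```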
I think the PR is a good idea. I read the contribution guidelines, and they mention referencing an issue. Did you want me to open a duplicate of this issue on the repo, or is it enough to link to this thread?
Yeah, it's because it's just hooking into the save operation and capturing the output, regardless of the parent call.
Depending on the framework you're using, it'll just hook into the save-model operation, so it captures every model save, which will probably happen every epoch for some subset of the training. If you want to do it within the existing framework, you could change the checkpointing so that it only clones the best model in memory and saves the write operation for last, roughly like the sketch below. The risk with this is that if the training crashes, you'll lose your best model.
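A minimal PyTorch-style sketch of what I mean (the model and the evaluate metric below are toy placeholders):

```python
import copy
import torch
import torch.nn as nn

# Toy stand-ins so the sketch runs end to end; swap in your real
# model, training step, and validation metric.
model = nn.Linear(4, 1)

def evaluate(m):
    # hypothetical validation score (higher is better)
    return -sum(p.abs().sum().item() for p in m.parameters())

best_score = float("-inf")
best_state = None

for epoch in range(3):
    # ... your training step for this epoch goes here ...
    score = evaluate(model)
    if score > best_score:
        best_score = score
        # Clone the best weights in memory instead of writing a
        # checkpoint file each epoch, so the save hook never fires here.
        best_state = copy.deepcopy(model.state_dict())

# The single deferred write: the only save ClearML will capture.
# As noted above, a crash before this point loses the best model.
model.load_state_dict(best_state)
torch.save(model.state_dict(), "best_model.pt")
```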
Optionally, you could also disable the ClearML integration with...
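For example, the automatic PyTorch capture can be switched off per task via the auto_connect_frameworks argument of Task.init (project and task names below are placeholders):

```python
from clearml import Task

# Disable ClearML's automatic PyTorch save/load capture for this task.
task = Task.init(
    project_name="examples",              # placeholder
    task_name="no-auto-pytorch-capture",  # placeholder
    auto_connect_frameworks={"pytorch": False},
)
```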
@<1523701205467926528:profile|AgitatedDove14> Then it isn't working as intended. To test it, I started the scheduler and set a simple dead man's snitch process to run once a day. In the web app (on your site, app.clear.ml), when looking at the scheduler process in the DevOps section, I was able to see a configuration file under artifacts, but it was not at all obvious how you'd change that, because it wasn't part of the configuration section; it was just an artifact. So I thought maybe it was b...
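For context, the test looked roughly like this (a sketch; the task ID and queue names are placeholders):

```python
from clearml.automation import TaskScheduler

# Sketch of the daily "dead man's snitch" schedule described above.
scheduler = TaskScheduler()
scheduler.add_task(
    schedule_task_id="<snitch-task-id>",  # placeholder: task to clone and enqueue
    queue="default",                      # placeholder queue
    hour=0,
    minute=0,  # i.e. once a day, at midnight
)
scheduler.start_remotely(queue="services")
```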
Thanks for the reply @<1523701070390366208:profile|CostlyOstrich36> !
It says in the documentation that:
"Add a folder into the current dataset. calculate file hash, and compare against parent, mark files to be uploaded"
It seems to recognize the dataset as another version of the data but doesn't seem to be validating the hashes on a per-file basis. Also, if you look at the photo, it seems like some of the data does get recognized as the same as the prior data. It seems like it's the correct...
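For reference, the flow in question (a sketch; the names, parent ID, and folder path are placeholders):

```python
from clearml import Dataset

# Create a child version of an existing dataset.
child = Dataset.create(
    dataset_name="my-dataset",                # placeholder
    dataset_project="datasets",               # placeholder
    parent_datasets=["<parent-dataset-id>"],  # placeholder
)

# Per the docstring quoted above: hashes each file, compares against
# the parent, and marks only new/changed files for upload.
child.add_files(path="data/")

child.upload()
child.finalize()
```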
Hi again Eugen,
If I use the hyperparameter tool in ClearML, won't that create a different experiment for every step of the hyperparameter optimizer? So this would run across experiments. I could do something with pipelines, but since the metrics are already available in the ClearML hyperparameter/metric tables, I thought it would make sense to be able to plot against those values.
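For concreteness, the setup I'm referring to (a sketch; the base task ID, metric names, and queue are placeholders). Each trial the optimizer launches is a clone of the base task, i.e. its own experiment:

```python
from clearml.automation import (
    HyperParameterOptimizer,
    RandomSearch,
    UniformParameterRange,
)

# Sketch only: base task ID, metric names, and queue are placeholders.
optimizer = HyperParameterOptimizer(
    base_task_id="<base-task-id>",  # the experiment cloned for each trial
    hyper_parameters=[
        UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=RandomSearch,
    execution_queue="default",
    max_number_of_concurrent_tasks=2,
    total_max_jobs=10,
)

# Each trial is enqueued as a separate cloned task/experiment,
# which is why the metrics end up spread across experiments.
optimizer.start()
optimizer.wait()
optimizer.stop()
```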