If you could provide the specific task ID, it could fetch the training data and study from the previous task and continue with the specified number of training runs. Being able to continue from a past study would be useful because the study provides a base for pruning and optimization of the task. The task would typically be stopped by aborting it when the GPU rig it is running on is needed for something else, or when the study crashes.
My current use case is just research involving training PyTorch models.
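Something along these lines is roughly what I have in mind with Optuna (a minimal sketch, not an existing ClearML feature; the study name, storage URL, and trial count are placeholders): reload the study the earlier task created and keep optimizing for a fixed number of extra trials, so the sampler and pruner benefit from the existing history.
```python
import optuna

def objective(trial):
    # Placeholder objective: in practice this would build and train the
    # PyTorch model with the suggested hyperparameters and return a metric.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    return lr

# Reload the study produced by the previous task (names are assumed).
study = optuna.load_study(
    study_name="previous_task_study",
    storage="sqlite:///previous_study.db",
)

# Continue with the specified number of additional trials; earlier trials
# stay in the study, so pruning starts from the existing baseline.
study.optimize(objective, n_trials=20)
```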
As a follow-up to this: it seems the study data must be fetched from a remote SQL server via the "storage" arg. It would be amazing to be able to store the study as an artefact in the ClearML task instead. AgitatedDove14
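In the meantime, a hedged workaround sketch (all project/task names, artifact names, and the task ID below are placeholders): keep the study in a local SQLite file instead of a remote SQL server, upload that file as a task artifact, and have a later task fetch it back by task ID to resume.
```python
import optuna
from clearml import Task

# Current task: run the study against a local SQLite file, then upload
# that file as a task artifact so the study travels with the task.
task = Task.init(project_name="HPO", task_name="optuna-study")  # assumed names
study = optuna.create_study(
    study_name="my_study",
    storage="sqlite:///study.db",
    load_if_exists=True,
)
# ... study.optimize(objective, n_trials=...) ...
task.upload_artifact(name="optuna_study_db", artifact_object="study.db")

# Later task: pull the artifact back by task ID and resume the study.
prev = Task.get_task(task_id="<previous task id>")
local_db = prev.artifacts["optuna_study_db"].get_local_copy()
resumed = optuna.load_study(study_name="my_study", storage="sqlite:///" + local_db)
```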
Ok, that looks good. It would be good to have an easier restart functionality, as from the looks of things it's a couple of layers deep. I'll let you know if I manage it; it might be useful.
Also, I was wondering if there is a way to organise the tasks into folders, e.g. create a sub-folder for the tasks with adjusted hyperparameters, so that we can still access them but they don't take up quite so much space in the main project view.
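One possible cleanup sketch for this, assuming a recent clearml SDK where Task.move_to_project is available (the project names, the "parent" filter key, and the optimizer task ID are all assumptions): move the generated trial tasks into a sub-project so the main project stays small.
```python
from clearml import Task

OPTIMIZER_TASK_ID = "<optimizer task id>"  # placeholder

# Fetch the trial tasks spawned by the optimizer (the "parent" filter key is
# an assumption about the backend task filter) and move them into a
# sub-project such as "HPO/trials".
trial_tasks = Task.get_tasks(
    project_name="HPO",
    task_filter={"parent": [OPTIMIZER_TASK_ID]},
)
for t in trial_tasks:
    t.move_to_project(new_project_name="HPO/trials")
```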
Thanks, this was really helpful. It would be good to have this on the hyperparameter tuning page in the docs if you can add it.