
Yes, we love the HPO app, and are using it :)
@<1523701087100473344:profile|SuccessfulKoala55> I think you've been tagged in the PR.
This great tool is worth paying for!
Tagging my colleague @<1529271085315395584:profile|AmusedCat74> who needs this with me.
Tagging my colleague @<1529271085315395584:profile|AmusedCat74>, who ran into this issue with me.
No problem! Once you've merged it, what do we need to do to get the updated version, please?
(Apologies for the delay @<1523701087100473344:profile|SuccessfulKoala55>, we got called into meetings. Really appreciate your responsiveness!)
And yes, I was also referring to tasks run by the Autoscaler app (potentially via the HPO app).
It was a debugging session. We haven't yet tried a "standard" non-debugging ClearML session.
Hi! Does anyone have any idea on that one, please? Or could you point me to the right place or the right person to find out? Thanks for any help!
Oh? Worth trying!
Dang, so unlike screenshots, reports do not survive task deletion :/
Do Pipelines work with Hyperparameter search, and with single training jobs?
Tagging my colleague @<1529271085315395584:profile|AmusedCat74> who made that report.
Does that make sense?
Great, thanks both! I suspect this might need an extra option to be passed via the SDK, to save the iteration scaling at logging time, which the UI can then use at rendering time.
Logging scalars also leverages ClearML's automatic logging. One problem is that this automatic logging seems to keep its own internal "iteration" counter for each scalar, rather than tracking, say, the optimizer's number of steps.
That could easily be fixed in the ClearML Python library by allowing a per-scalar iteration multiplier to be set.
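In the meantime, here's a minimal sketch of the kind of workaround we have in mind, using ClearML's `Logger.report_scalar` with an explicit `iteration` argument instead of relying on the automatic counter. The accumulation factor and the dummy loss are placeholders for our actual training loop:
```python
from clearml import Task

# Hypothetical sketch (not the eventual SDK option): report scalars with an
# explicit iteration so the x-axis reflects optimizer steps, not the automatic
# logger's internal per-scalar counter.
task = Task.init(project_name="examples", task_name="manual scalar iterations")
logger = task.get_logger()

GRAD_ACCUM_STEPS = 4  # placeholder "iteration multiplier" (e.g. gradient accumulation)

for batch_idx in range(100):
    loss = 1.0 / (batch_idx + 1)  # dummy value standing in for the training loss
    optimizer_step = batch_idx // GRAD_ACCUM_STEPS
    # Explicit iteration = optimizer steps, not the auto-logger's per-scalar count
    logger.report_scalar(title="train", series="loss", value=loss, iteration=optimizer_step)
```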
Thanks @<1523701087100473344:profile|SuccessfulKoala55>! Any inkling on how soon? Is it days, weeks, or months, please?
@<1523701087100473344:profile|SuccessfulKoala55> yes I am! And thanks, looking forward to it!
cc my colleagues @<1529271085315395584:profile|AmusedCat74> and @<1548115177340145664:profile|HungryHorse70>
@<1523701070390366208:profile|CostlyOstrich36> Any idea, please? We could use our 8xA100 machine as 8 workers, for 8 single-GPU jobs that would each run faster than on a 1xV100.
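For context, a rough sketch of the workflow we're after, assuming each clearml-agent on the 8xA100 box is pinned to a single GPU and serves a shared queue; the queue name and task ID below are placeholders:
```python
from clearml import Task

TEMPLATE_TASK_ID = "<single-gpu-training-task-id>"  # placeholder
QUEUE_NAME = "single_gpu_a100"                      # placeholder queue served by 8 per-GPU agents

# Clone one single-GPU training task 8 times and enqueue the clones, so each
# of the 8 agents (one per A100) picks up one job.
template = Task.get_task(task_id=TEMPLATE_TASK_ID)
for i in range(8):
    clone = Task.clone(source_task=template, name=f"{template.name} [gpu worker {i}]")
    Task.enqueue(clone, queue_name=QUEUE_NAME)
```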