Tagging @<1529271085315395584:profile|AmusedCat74> my colleague with whom we ran into this issue.
This great tool is worth paying for!
Is the doc on GitHub so we can copy that into a PR?
Does that make sense?
Thanks @<1523701070390366208:profile|CostlyOstrich36> !
- I hadn't found the multiple resources within the same autoscaler. Could you point me to the right place please? Are they all used interchangeably based on availability, rather than based on job needs?
- We thought of using separate queues (we do that for CPU vs GPU queues), but having ClearML automatically dispatch to the right queue based on a job specification would be more flexible; see the rough sketch after this list. (for example, we could then think to dispatch dynami...
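For context, a minimal sketch of the manual routing we mean, assuming we enqueue by hand today (the queue names and the `spec` mapping are our own illustration, not an existing ClearML feature):

```python
from clearml import Task

# Illustrative mapping from a job's resource spec to an existing queue;
# these names are assumptions for the sketch.
QUEUE_BY_SPEC = {
    "cpu": "cpu_queue",
    "gpu_small": "gpu_small_queue",
    "gpu_large": "gpu_large_queue",
}

def enqueue_by_spec(task: Task, spec: str) -> None:
    # Route the task to the queue matching its declared resource needs.
    Task.enqueue(task, queue_name=QUEUE_BY_SPEC[spec])
```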
Tagging my colleague @<1529271085315395584:profile|AmusedCat74> who made that report.
@<1523701087100473344:profile|SuccessfulKoala55> I think you've been tagged in the PR :)
@<1523701087100473344:profile|SuccessfulKoala55> yes I am :) And thanks, looking forward to it!
(do you welcome PRs?)
Brilliant, thanks a lot for the answer Jake, much appreciated and clearer!
@<1529271085315395584:profile|AmusedCat74> @<1548115177340145664:profile|HungryHorse70> here we have the answer :)
Hi :) Does anyone have any idea on that one please? Or could you point me to the right place or the right person to find out? Thanks for any help!
Thanks. That would be very helpful. Some of our graphs are logged by optimization steps, whereas others are logged by epochs, so having them all called "Iterations" is not ideal.
Logging scalars also leverages ClearML's automatic logging. One problem is that this automatic logging seems to keep its own internal "iteration" counter for each scalar, as opposed to tracking, say, the optimizer's step count.
That could be fixed quite simply in the ClearML Python lib by allowing a per-scalar iteration multiplier to be set.
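To make the idea concrete, a sketch of the wrapper we have in mind (the `multiplier` argument is the proposed addition, not an existing ClearML parameter; `report_scalar` itself is the current API):

```python
from clearml import Logger

def report_scaled_scalar(title, series, value, iteration, multiplier=1):
    # Emulate the proposed per-scalar multiplier today by scaling the
    # iteration before handing it to the existing report_scalar API.
    Logger.current_logger().report_scalar(
        title=title, series=series, value=value, iteration=iteration * multiplier
    )
```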
Happy to jump on a call if easier to make sense of it :)
Dang, so unlike screenshots, reports do not survive task deletion :/
OK, so there is no way to have automatic dispatch to different, correctly-sized instances; it's only achievable by submitting to different queues?
Yes, exactly. Here is the logic: I have plots where iterations represent different units. For some of these plots (call them A), iterations are optimization steps, while for others (call them B) they are evaluation iterations, occurring every N optimization steps. I would like to either:
- Change the X label so these different plots do not have the same label when they represent different things.
- Or, even better, keep the unique "iterations" label but be able to change how I lo...
Yes, we love the HPO app, and are using it :)
Oh? Worth trying!
(actually, that might even be feasible without touching the UI, depending on how the plot is rendered, but I'll check)
Great, thanks both! I suspect this might need an extra option to be passed via the SDK, to save the iteration scaling at logging time, which the UI can then use at rendering time.
The problem with logging as a 2D plot is that we lose the streaming: if I understand the documentation correctly, Logger.current_logger().report_scatter2d logs a single, frozen 2D plot once you know the full X and Y data, and you would have to do that at each evaluation step.
Logging scalars, by contrast, lets you log a growing time series, i.e. append to the existing series/plot at every "iteration", so you can monitor progress over time in one single plot. It's a much more logical setting.
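To illustrate the difference, a minimal sketch with dummy loss values (both calls are from the ClearML Logger API as we understand the docs):

```python
import numpy as np
from clearml import Logger

logger = Logger.current_logger()
losses = [0.9, 0.7, 0.5, 0.4]  # dummy values for the sketch

# Streaming: one point per call, appended to the same growing series.
for step, loss in enumerate(losses):
    logger.report_scalar(title="loss", series="train", value=loss, iteration=step)

# Frozen 2D plot: the full X/Y data must be known up front, and each call
# reports a complete plot rather than extending an existing series.
xy = np.column_stack([np.arange(len(losses)), losses])
logger.report_scatter2d(
    title="loss (frozen)",
    series="train",
    scatter=xy,
    iteration=0,
    xaxis="optimization step",
    yaxis="loss",
)
```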
cc my colleagues @<1529271085315395584:profile|AmusedCat74> and @<1548115177340145664:profile|HungryHorse70>