I am logging directly via the Logger module using the report_scalar method, adding the iteration parameter in every call. In my recent runs I am only logging every 30s, and this does seem to remove the issue.
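For reference, a minimal sketch of that throttled-logging pattern (the project, task, and metric names here are assumptions, not the actual code; `report_scalar(title, series, value, iteration)` is the standard ClearML Logger call):

```python
import math
import random
import time

from clearml import Task

# Assumed project/task names, for illustration only.
task = Task.init(project_name="demo", task_name="throttled-logging")
logger = task.get_logger()

REPORT_INTERVAL_S = 30.0  # only report a scalar every 30 seconds
last_report = 0.0

for iteration in range(100_000):
    # Stand-in for a real training step that produces the metric value.
    value = math.exp(-iteration / 10_000) + random.random() * 0.01

    now = time.time()
    if now - last_report >= REPORT_INTERVAL_S:
        logger.report_scalar(title="loss", series="train", value=value, iteration=iteration)
        last_report = now
```

Reporting once a second for some 12 hours adds up to tens of thousands of points per series, so thinning the reports like this keeps the scalar plots responsive.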
Hi @<1523701070390366208:profile|CostlyOstrich36> - unfortunately not something that I can easily share, I am afraid. I am using the hosted solution btw, and I really believe I am just logging way too often: about once a second for some 12 hours.
Yes, early in the experiment everything seems fine, then updating begins lagging and coming in chunks before not updating at all anymore.
created a new Feature Request: None
FYI @<1523701070390366208:profile|CostlyOstrich36> after a quick search it seems there is already a request for this 🙂 None
@<1523701070390366208:profile|CostlyOstrich36> there was in fact a difference in versions, good suggestion. I was using clearml v1.14.4 and my colleague is on 1.14.1. Downgrading the package to 1.14.1 fixes this for me. Should I open an issue, or is this expected behaviour or an already known bug? (I was not able to find a related issue on GitHub.)
@<1523701205467926528:profile|AgitatedDove14> I run the experiments manually for now. It does seem I found the cause of the behaviour, though: I am instantiating an object from my own "tracker" class in my main method, and that object holds the clearml Task object that actually does the logging. I am doing the instantiation from my configuration via the hydra.utils.instantiate method. That means import clearml had not yet been executed when my main method, annotated with hydra.main, started:
...
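For context, a minimal sketch of that setup (the module, class, and config field names are assumptions, not the actual code): the ClearML Task is only created when Hydra instantiates the tracker inside main(), so clearml is not imported until after @hydra.main has taken over.

```python
# trackers.py (hypothetical module referenced from the Hydra config)
from clearml import Task


class ClearMLTracker:
    def __init__(self, project_name: str, task_name: str):
        # The Task object that actually does the logging lives inside the tracker.
        self.task = Task.init(project_name=project_name, task_name=task_name)

    def log_scalar(self, title: str, series: str, value: float, iteration: int) -> None:
        self.task.get_logger().report_scalar(title, series, value, iteration)


# main.py
import hydra
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # cfg.tracker is assumed to look like:
    #   _target_: trackers.ClearMLTracker
    #   project_name: demo
    #   task_name: run
    tracker = hydra.utils.instantiate(cfg.tracker)
    tracker.log_scalar("loss", "train", value=0.5, iteration=0)


if __name__ == "__main__":
    main()
```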
Are there plans to implement a simple feature to ignore outliers in scalar plots?
Here is a plot that is not readable because of outliers. I will usually just use a log scale on the y-axis, and that works fine in most cases, but sometimes you do not want to mess with the scale and would rather automatically zoom in on the 'typical' range of the data.
![image](https...