For example, I plot this using the same data:
AgitatedDove14 No, look at this:
This is the figure shown in clearml:
AgitatedDove14 I updated Trains to the mentioned version, but it still stops. Regarding exceptions from subprocesses, torchvision doesn't show me any exception that I can handle.
AgitatedDove14 That would work, of course. I wonder what the best practice is for doing such things, because comparing experiments using graphs is very useful. I think it would be a nice-to-have feature.
I don't do anything besides registering the ClearML logger. After the logger is registered, I just plot my histogram using matplotlib.
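For context, my setup is essentially the following sketch (the project/task names and the data here are placeholders, not my actual code):

import numpy as np
import matplotlib.pyplot as plt
from clearml import Task

# Registering the ClearML task is the only ClearML-specific step;
# matplotlib figures are captured automatically afterwards.
task = Task.init(project_name="examples", task_name="histogram-demo")  # placeholder names

data = np.random.randn(1000)  # dummy data standing in for the real values
plt.hist(data, bins=50)
plt.title("My histogram")
plt.show()  # ClearML picks the figure up here and reports it as a plot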
Thanks CostlyOstrich36. As I understand it, I should set it to False, right?
CostlyOstrich36 Yes, I see all these files as models in the UI. These files are the data for training my model. I want to mute all these messages.
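For reference, if the suggestion is the auto_connect_frameworks flag, a minimal sketch would be something like this (the "pytorch" key is my assumption about which framework binding picks up these files; the exact key may differ):

from clearml import Task

# Disable automatic model/artifact capture for the framework that
# registers the data files as models; other auto-logging stays on.
task = Task.init(
    project_name="examples",   # placeholder names
    task_name="training",
    auto_connect_frameworks={"pytorch": False},
)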
The clearml plot looks like this:
Ok, this should be the same graph that is automatically logged to ClearML after plotting with matplotlib.
The figure saved with matplotlib looks like this:
Thanks for your support. My OS is Ubuntu 18.04.5 LTS and the Trains version is 0.16.0. I can't run this code right now because my machine is busy with some other heavy workload, but I'll try reproducing this as soon as it finishes.
AgitatedDove14 Hey, I just reproduced this. Whenever it happens, I also get a warning from torchvision:
/home/koe1tv/anaconda3/envs/torch/lib/python3.7/site-packages/torchvision/io/video.py:105: UserWarning: The pts_unit 'pts' gives wrong results and will be removed in a follow-up version. Please use pts_unit 'sec'.
Unfortunately, I can't suppress this warning because I don't have access to the parameter mentioned in the warning.
For example, here are the last two log lines from my process:
2020-09-11 18:34:50 /home/koe1tv/anaconda3/envs/torch/lib/python3.7/site-packages/torchvision/io/video.py:105: UserWarning: The pts_unit 'pts' gives wrong results and will be removed in a follow-up version. Please use pts_unit 'sec'.
2020-09-11 18:34:52 2020-09-11 08:34:52,109 - trains.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###
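One thing I could try, just as a sketch and not something I've verified, is filtering that specific UserWarning globally before the data loading starts, since I can't pass pts_unit myself:

import warnings

# Suppress only the torchvision pts_unit warning; other warnings still show.
warnings.filterwarnings(
    "ignore",
    message="The pts_unit 'pts' gives wrong results",
    category=UserWarning,
)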
This works perfectly with other plots, but not with histograms.
~/trains.conf:
api {
    web_server: http://**
    api_server: http://**
    files_server: http://**
    credentials {
        "access_key" = **
        "secret_key" = **
    }
}
Censored the addresses and the keys 😄
Hey SuccessfulKoala55, thanks for your reply. I pasted:
AgitatedDove14 I'm looking for something else. Basically, I want to compare the accuracy with respect to another variable. I can get the accuracy of each experiment (see picture). Additionally, I can log the other variable (call it T) and show it as a scalar per experiment. However, I don't see an option to simply plot the graph of accuracy versus T, which is different for every experiment. The solution that loops through all tasks might work anyway.
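If I end up going the loop route, I imagine something roughly like this sketch (the project name, metric titles, and series names are just placeholder assumptions about how I log things):

import matplotlib.pyplot as plt
from clearml import Task

# Collect (T, accuracy) pairs across all experiments in a project.
tasks = Task.get_tasks(project_name="my_project")  # placeholder project name

t_values, accuracies = [], []
for t in tasks:
    metrics = t.get_last_scalar_metrics()  # nested dict: {title: {series: {"last": value, ...}}}
    try:
        # "accuracy"/"T" titles and "validation" series are assumptions about my own logging
        accuracies.append(metrics["accuracy"]["validation"]["last"])
        t_values.append(metrics["T"]["T"]["last"])
    except KeyError:
        continue  # skip experiments that don't report both values

plt.scatter(t_values, accuracies)
plt.xlabel("T")
plt.ylabel("accuracy")
plt.show()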
by "disable the logger" I mean not using trains at all, just in order to make sure the process doesn't stop by itself.
AlertBlackbird30 Hi, yes, I'm attaching the screenshots of the results here.
This is ok, but the histogram is warped. It looks like the axes are switched or something.
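As a possible workaround (just a sketch, I haven't confirmed it avoids the warped auto-logged plot), the histogram could be reported explicitly through the ClearML logger instead of relying on the matplotlib capture; the title, series, and data below are placeholders:

import numpy as np
from clearml import Logger

# Bin the data myself and report the counts directly,
# so the x/y axes are defined explicitly.
data = np.random.randn(1000)                 # dummy data standing in for the real values
counts, bin_edges = np.histogram(data, bins=50)

Logger.current_logger().report_histogram(
    title="My histogram",
    series="values",
    values=counts,
    iteration=0,
    xaxis="bin",
    yaxis="count",
)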



