Thanks for the quick responses and support too! 🙂
(It would be nice to have all the Pypi releases tagged in github btw)
The experiment finished completely this time again
With the RC version or the latest?
The experiment finished completely this time again
"Updates a few seconds ago"
That just means that the process is not dead.
Yes that seemed to be stuck 😞
Any chance you can verify with the RC version?
I'll try to dig into the commits, maybe I can come up with an explanation ...
Alright, experiment finished properly (all models uploaded). I will restart it to check again, but seems like the bug was introduced after that
Which commit corresponds to RC version? So far we tested with latest commit on master (9a7850b23d2b0e1f2098ab051de58ce806143fff)
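(Side note for anyone following along: a quick way to double-check which trains build is actually installed in the environment when switching between a master commit and an RC is to query the distribution version, e.g. the small sketch below; nothing here is specific to this thread beyond the package name.)
```python
# Sanity check of the installed trains build (useful when switching
# between a specific master commit and a PyPI RC release).
import pkg_resources

print(pkg_resources.get_distribution("trains").version)
```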
And thanks again, I really appreciate testing it!
JitteryCoyote63 fix pushed to master, let me know if it passes...
I just tested with https://github.com/jkhenning/ignite/blob/fix_trains_checkpoint_n_saved/examples/contrib/mnist/mnist_with_trains_logger.py on the latest ignite master and the latest Trains master; it passed, but so did the previous commit...
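(For anyone reproducing this: the checkpointing path that example exercises looks roughly like the sketch below. This is hand-written here, not copied from the linked mnist_with_trains_logger.py; the model, optimizer and trainer are stand-ins, and n_saved=None is simply the "keep every checkpoint" setting that matches the 200-models run discussed in this thread.)
```python
# Minimal sketch of an ignite + Trains checkpointing setup, assuming
# ignite's contrib Trains handlers (TrainsLogger / TrainsSaver).
import torch
import torch.nn as nn
from ignite.contrib.handlers.trains_logger import TrainsLogger, TrainsSaver
from ignite.engine import Events, create_supervised_trainer
from ignite.handlers import Checkpoint

# Stand-in model/optimizer/trainer, not the example's actual MNIST network.
model = nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
trainer = create_supervised_trainer(model, optimizer, nn.CrossEntropyLoss())

trains_logger = TrainsLogger(project_name="examples", task_name="ignite mnist")

# Each saved checkpoint is registered as an output model on the Trains task;
# n_saved controls how many checkpoints ignite keeps (None keeps all of them).
checkpoint_handler = Checkpoint(
    {"model": model},
    TrainsSaver(trains_logger),
    n_saved=None,
)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler)

# trainer.run(train_loader, max_epochs=200) would then produce one checkpoint per epoch.
```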
I was unable to reproduce, but I added a few safety checks. I'll make sure they are available on master in a few minutes; could you maybe rerun after?
To be honest, I'm not sure I have a good explanation of why ... (unless in some scenarios an exception was thrown and caught silently, and that caused it)
Yes, it is supposed to run for 200 epochs
Just checked, it did pass, training finished and all 200 models saved 🙂
BTW:
Just making sure, 74 was not supposed to be the last checkpoint (in other words, it is not stuck on exiting the training process, but actually in the middle of it)
(It would be nice to have all the Pypi releases tagged in github btw)
I wanted to say "we listen" ... and point to the tag, but for some reason it was not pushed LOL.
Okay, it's there now:
https://github.com/allegroai/trains/tree/0.15.1rc0
Seems to work, I started one last run to confirm!
JitteryCoyote63 while it's running, could you give me a few details on the setup, maybe I can reproduce it.
Is it using pytorch distributed?
Are all models uploaded to S3?
etc.
Not using pytorch distributed; all models are uploaded to S3, yes
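(For completeness: in a setup like this the S3 uploads usually come from pointing the task's output destination at a bucket, roughly as in the sketch below. The bucket path is a placeholder and the S3 credentials are assumed to be configured in trains.conf.)
```python
# Sketch: make saved models upload to S3 by setting the task's output_uri
# (bucket path below is a placeholder; credentials come from trains.conf).
from trains import Task

task = Task.init(
    project_name="examples",
    task_name="ignite mnist",
    output_uri="s3://my-bucket/trains-models",
)
```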
I started one last run to confirm!
You mean a second run, just to make sure?