PompousBeetle71, check the n_saved parameter on the ModelCheckpoint creation.
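For reference, a minimal sketch of what that looks like (assuming a recent pytorch-ignite release; the toy model, directory name, and no-op training step are placeholders):

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import ModelCheckpoint

model = nn.Linear(10, 2)  # toy model standing in for the real network

def train_step(engine, batch):
    return 0.0  # no-op step, just enough to drive the engine

trainer = Engine(train_step)

# n_saved controls how many checkpoint files ignite keeps on disk;
# with n_saved=3 the three most recent checkpoints are retained.
handler = ModelCheckpoint(
    dirname="checkpoints",
    filename_prefix="model",
    n_saved=3,
    require_empty=False,
)
trainer.add_event_handler(Events.EPOCH_COMPLETED, handler, {"model": model})

trainer.run([[0.0]] * 5, max_epochs=3)
```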
SteadyFox10 AgitatedDove14 Thanks, I really did change the name.
SuccessfulKoala55 please post here once the code for your pytorch-ignite integration is available 🙂
Hi PompousBeetle71, I'm with SteadyFox10 on this one. Unless you choose a file name based on epoch or step, you are literally overwriting the model file, which Trains will reflect. If you use the epoch in the filename you will end up with all your models logged by Trains. BTW we are actively working on an integration with pytorch-ignite, so if you have any suggestions, now is the time :)
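To make that concrete, here is a small sketch (the project/task names and toy model are made up for the example; it assumes the old trains package, where Task.init hooks torch.save):

```python
import torch
import torch.nn as nn
from trains import Task  # the pre-ClearML package discussed in this thread

# Task.init hooks torch.save, so every saved file is reflected as an output model
task = Task.init(project_name="examples", task_name="checkpoint-naming")

model = nn.Linear(10, 2)  # toy model for the sake of the example

for epoch in range(3):
    # ... training would happen here ...
    # A fixed name like "model.pt" is overwritten each epoch and Trains shows one model;
    # a per-epoch name keeps every checkpoint, and Trains logs each of them.
    torch.save(model.state_dict(), f"model_epoch_{epoch}.pt")
```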
PompousBeetle71 just making sure: changing the name solved it?
Oh sorry, I was thinking about ignite (I don't know why), not trains. The only way I know is to use a different name when saving. I personally use f"{file_name}_{epoch}_{iteration}".
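Roughly like this in an ignite handler, for example (a sketch; file_name, the toy model, and the dummy training step are placeholders):

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, Events

model = nn.Linear(10, 2)   # placeholder model
file_name = "my_model"     # placeholder base name

def train_step(engine, batch):
    return 0.0  # dummy training step

trainer = Engine(train_step)

@trainer.on(Events.EPOCH_COMPLETED)
def save_model(engine):
    # epoch and iteration in the filename -> a new file on every save
    torch.save(model.state_dict(),
               f"{file_name}_{engine.state.epoch}_{engine.state.iteration}.pt")

trainer.run([[0.0]] * 5, max_epochs=2)
```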
Well, I use ignite and trains-server with logging similar to ignite.contrib.handlers, so I will be very happy to test this integration.
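For example, logging along the lines of ignite.contrib.handlers might look like this (a sketch using TensorboardLogger with a dummy training step, assuming TensorBoard support is installed; Trains picks up the TensorBoard scalars when a Task is initialized in the same script):

```python
from ignite.engine import Engine, Events
from ignite.contrib.handlers.tensorboard_logger import TensorboardLogger, OutputHandler

def train_step(engine, batch):
    return 0.5  # dummy loss value

trainer = Engine(train_step)

# Scalars written to TensorBoard are captured automatically by Trains
# when a Task has been initialized in the same script.
tb_logger = TensorboardLogger(log_dir="tb_logs")
tb_logger.attach(
    trainer,
    log_handler=OutputHandler(tag="training",
                              output_transform=lambda loss: {"loss": loss}),
    event_name=Events.ITERATION_COMPLETED,
)

trainer.run([[0.0]] * 10, max_epochs=1)
tb_logger.close()
```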
SteadyFox10 ModelCheckpoint is not part of PyTorch itself, I think; I couldn't find anything like it there.