Eureka! Thanks a lot, I'll check how to do this correctly.
SuccessfulKoala55 Also, I see that the image names are not ordered naturally; they're displayed like `image_name_1`, `image_name_10`, `image_name_2`, and so on. Is it possible to have a natural order, so I see `image_name_1`, `image_name_2`, etc.?
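If the UI sorts names lexically, one client-side workaround is to sort (or generate) the names with a natural key; a minimal sketch of such a key function (the names below are just examples):

```python
import re

def natural_key(name):
    # Split "image_name_10" into ["image_name_", 10, ""] so that numeric
    # chunks compare as integers instead of as strings.
    return [int(tok) if tok.isdigit() else tok
            for tok in re.split(r"(\d+)", name)]

names = ["image_name_1", "image_name_10", "image_name_2"]
print(sorted(names, key=natural_key))
# -> ['image_name_1', 'image_name_2', 'image_name_10']
```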
I tried to remove all the images and content from Docker with docker-compose down and docker rmi, and I also removed all the content in each directory of /opt/trains/ created by the containers. Do you have any idea why this happens?
I'll try to write code that reproduces this behavior and post it on GitHub, is that fine? That way you could check whether I'm the problem (which is really likely) 😛
I'm in the process of setting up a trains stack for my projects, and so far it works great. Thanks for this awesome work.
Oh sorry, I was thinking about ignite (I don't know why), not trains. The only way I know is to use a different name when saving. I personally use `f"{file_name}_{epoch}_{iteration}"`.
Is it somewhere in the documentation?
Yes, I think this will work.
Oh, right click! Nice, I don't usually even right click on web pages; that's pretty nice. Thanks.
I have made some changes in the code:

```python
logger.clearml_logger.report_image(
    self.tag,
    f"{self.tag}_{epoch:0{pad}d}",
    iteration=iteration,
    image=image,
)
```

The epoch range is 0-150 and the iteration range is 0-100, and the error is still there.
```
General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] clus...
```
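That error is Elasticsearch hitting its search.max_buckets limit (default 10000), as the message itself says. A rough back-of-the-envelope, assuming each unique (debug-image variant, iteration) pair becomes one aggregation bucket, shows why the ranges mentioned blow past it:

```python
epochs = 151      # epoch range 0-150, so one image variant per epoch
iterations = 101  # iteration range 0-100
buckets = epochs * iterations
print(buckets)          # 15251 candidate (variant, iteration) buckets
print(buckets > 10000)  # True: over the default search.max_buckets
```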
I ran the experiment on the allegroai demo server and it's the same: https://demoapp.trains.allegro.ai/projects/fcf3f3fb1013434eb2001870990e5e94/experiments/6ed32a2b5a094f2da47e6967bba1ebd0/output/debugImages . I really think it's a technical limitation that prevents displaying all the images, am I right?
Oh can't wait to see this feature 👀
AgitatedDove14 This is what I expected for the community version. It would be really nice to have a read-only link. My use case: when I have a merge request for a model modification, I need to provide several pieces of information for our Quality Management System, one of which is to show that the experiment is a success and the model has some improvement over the previous iteration. Obviously I don't want the reviewer to see all my failed experiments 😉. So yes, it would be really nice to have read-only ...
Something like 100 epochs with at least 100 images per epoch reported.
Issue opened on the clearml-server GitHub: https://github.com/allegroai/clearml-server/issues/89 . Thanks for your help.
Even simpler than a GitHub repo, this code reproduces the issues I have.
Yes, the tag is fixed.
Is it better on clearml or clearml-server?
SuccessfulKoala55 feel free to roast my errors.
So I see two options:
1. Reduce the number of images reported (already in our plan)
2. Make one big image per epoch
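The second option can be done by tiling a batch of equally sized images into one mosaic before reporting it; a minimal numpy sketch (the function name and layout are my own, not from any library):

```python
import numpy as np

def make_grid(images, cols):
    """Tile equally sized HxWxC images row by row into one big image."""
    h, w, c = images[0].shape
    rows = (len(images) + cols - 1) // cols  # ceil division
    grid = np.zeros((rows * h, cols * w, c), dtype=images[0].dtype)
    for i, img in enumerate(images):
        r, col = divmod(i, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = img
    return grid

# Ten dummy 4x4 RGB images tiled into a 2x5 grid.
batch = [np.full((4, 4, 3), i, dtype=np.uint8) for i in range(10)]
print(make_grid(batch, cols=5).shape)  # (8, 20, 3)
```

The resulting grid can then be passed as a single image per epoch, cutting the number of reported variants by the batch size.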
I call it like that:

```python
logger.clearml_logger.report_image(
    self.tag,
    f"{self.tag}_{iteration:0{pad}d}",
    epoch,
    image=image,
)
```

`self.tag` is `train` or `valid`. `iteration` is an int for the minibatch in the epoch.
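For reference, the pad width used in that f-string can be derived from the largest iteration index so that the variant names sort correctly (the maximum value here is an assumption):

```python
max_iteration = 100            # assumed largest minibatch index in an epoch
pad = len(str(max_iteration))  # -> 3 digits

names = [f"valid_{i:0{pad}d}" for i in (1, 10, 2)]
print(sorted(names))  # -> ['valid_001', 'valid_002', 'valid_010']
```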
