Oh can't wait to see this feature 👀
Yes this will work I think.
I tried to remove all the images and content from Docker with `docker-compose down`
and `docker rmi`, and also removed all the content in each directory of `/opt/trains/`
created by the containers. Do you have any idea why this happens?
To retrieve metrics from an experiment I use this:
```python
from trains_agent import APIClient

client = APIClient()
client.events.get_scalar_metric_data(task=task_id, metric="name_of_metric")
```
Thanks to AgitatedDove14 who pointed this out to me.
Is it somewhere in the documentation?
Wow thanks a lot, I'll test it. I didn't even search in the `trains_agent`
documentation.
It works well, I just need to use the `task.id` for the `task=''` argument.
I thought I could use the `task.name`.
It's perfect, thanks AgitatedDove14.
Is it possible to get all the iterations for one specific metric? Let's say I have this metric logged. Will I be able to retrieve these series?
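For the series question above, here is a minimal sketch of reshaping raw scalar events into per-variant `(iteration, value)` series. It assumes each event is a dict with `iter`, `value`, and `variant` keys; the exact response schema of `get_scalar_metric_data` may differ depending on the server version, so treat this as an illustration rather than the official API shape:

```python
from collections import defaultdict

def to_series(events):
    """Group scalar events into {variant: [(iteration, value), ...]},
    sorted by iteration."""
    series = defaultdict(list)
    for ev in events:
        # "variant" distinguishes e.g. train/validation curves of one metric
        series[ev.get("variant", "")].append((ev["iter"], ev["value"]))
    for points in series.values():
        points.sort(key=lambda p: p[0])
    return dict(series)

# Hand-made events standing in for what the server would return:
events = [
    {"iter": 2, "value": 0.5, "variant": "loss"},
    {"iter": 1, "value": 0.9, "variant": "loss"},
    {"iter": 1, "value": 0.1, "variant": "acc"},
]
print(to_series(events))
# {'loss': [(1, 0.9), (2, 0.5)], 'acc': [(1, 0.1)]}
```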
AgitatedDove14 This is what I expected for the community version. It would be really nice to have a read-only link. My use case: when I have a merge request for a model modification, I need to provide several pieces of information for our Quality Management System, one of which is to show that the experiment is a success and that the model has some improvement over the previous iteration. Obviously I don't want the reviewer to see all my failed experiments 😉. So yes, it would be really nice to have read-only ...
Wow, that's really nice. I wrote almost the same code: `TrainsLogger`, `TrainsSaver`, and all the `OutputHandler`s. I'll use your version instead and leave a comment if I find something.
SuccessfulKoala55 Also I see that the image names are not ordered naturally; they're displayed like this: `image_name_1`, `image_name_10`, `image_name_2`, and so on. Is it possible to have a natural order, i.e. `image_name_1`, `image_name_2`, etc.?
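Fixing this in the UI would need server-side support, but for reference, natural ordering like this can be done client-side with a split-on-digits sort key. This is a generic Python sketch, not part of trains:

```python
import re

def natural_key(name):
    # Split the name into text and number runs, converting the number
    # runs to ints, so "image_name_2" compares before "image_name_10".
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", name)]

names = ["image_name_1", "image_name_10", "image_name_2"]
print(sorted(names, key=natural_key))
# ['image_name_1', 'image_name_2', 'image_name_10']
```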
I'm in the process of setting up a `trains` stack for my projects,
and so far it works great. Thanks for this awesome work.
I ran the experiment on the allegroai demo server and it's the same: https://demoapp.trains.allegro.ai/projects/fcf3f3fb1013434eb2001870990e5e94/experiments/6ed32a2b5a094f2da47e6967bba1ebd0/output/debugImages . I really think it's a technical limitation not to display all the images, am I right?
Well, I use `ignite` and `trains-server` with logging similar to `ignite.contrib.handlers`, so I will be very happy to test this integration.
Oh sorry, I was thinking about `ignite` (I don't know why), not trains. The only way I know is to use a different name when saving. I personally use `f"{file_name}_{epoch}_{iteration}"`.
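To illustrate the naming scheme above: embedding the epoch and iteration in the file name keeps each save from overwriting the previous one. A trivial sketch (the helper name is mine, not part of trains or ignite):

```python
def checkpoint_name(file_name, epoch, iteration):
    # Each (epoch, iteration) pair yields a distinct file name,
    # so successive saves never clobber each other.
    return f"{file_name}_{epoch}_{iteration}"

print(checkpoint_name("model", 3, 1500))
# model_3_1500
```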
PompousBeetle71, check the `n_saved` parameter on the `ModelCheckpoint` creation.
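For context, `n_saved` bounds how many checkpoint files ignite's `ModelCheckpoint` keeps, deleting the oldest as new ones are saved. A minimal stdlib sketch of that rolling behavior (not ignite's actual implementation):

```python
from collections import deque

def make_checkpoint_buffer(n_saved):
    # A deque with maxlen silently drops the oldest entry on append,
    # mimicking how n_saved limits the number of retained checkpoints.
    return deque(maxlen=n_saved)

kept = make_checkpoint_buffer(n_saved=2)
for name in ["ckpt_1", "ckpt_2", "ckpt_3"]:
    kept.append(name)  # ignite would also delete the evicted file on disk
print(list(kept))
# ['ckpt_2', 'ckpt_3']
```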