Well, I use ignite and trains-server with logging similar to ignite.contrib.handlers, so I will be very happy to test this integration.
PompousBeetle71, check the n_saved parameter on the ModelCheckpoint creation.
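For context, ignite's ModelCheckpoint only keeps the most recent n_saved checkpoint files on disk. A rough pure-Python sketch of that retention behavior (not the real handler; the file names and fake payload are illustrative only):

```python
import os
import tempfile

def save_with_retention(dirname, step, saved, n_saved=2):
    """Write a checkpoint file and keep only the n_saved most recent ones,
    mimicking what ModelCheckpoint(n_saved=...) does with old files."""
    path = os.path.join(dirname, f"model_{step}.pt")
    with open(path, "wb") as f:
        f.write(b"fake-weights")  # a real handler would serialize model state here
    saved.append(path)
    while len(saved) > n_saved:
        os.remove(saved.pop(0))  # drop the oldest checkpoint
    return saved

with tempfile.TemporaryDirectory() as d:
    saved = []
    for step in range(5):
        save_with_retention(d, step, saved, n_saved=2)
    print([os.path.basename(p) for p in saved])  # ['model_3.pt', 'model_4.pt']
```

With n_saved=2, only the two newest checkpoints survive; everything older is deleted as training goes on.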
Now that I know I can use right click, I'll use it like in Google Drive etc.
Oh, right click! Nice, I don't usually even right click on web pages, that's pretty nice. Thanks.
Wow, that's really nice. I wrote almost the same code: TrainsLogger, TrainsSaver, and all the OutputHandler. I'll use your version instead and leave a comment if I find something.
Maybe just adding the use case to the documentation, or somewhere else like a video, could be useful.
It works well, I just need to use the task.id for the task='' parameter. I thought I could use the task.name. It's perfect, thanks AgitatedDove14.
Is it better on clearml or clearml-server?
I ran the experiment on the allegroai demo server and it's the same: https://demoapp.trains.allegro.ai/projects/fcf3f3fb1013434eb2001870990e5e94/experiments/6ed32a2b5a094f2da47e6967bba1ebd0/output/debugImages . I really think it's a technical limitation that not all the images are displayed, am I right?
Wow, thanks a lot, I'll test it. I didn't even search the trains_agent documentation.
Is it possible to get all the iterations for one specific metric? Let's say I have this metric logged, will I be able to retrieve these series?
I'll give the virtualenv solution a try. If I have any questions I'll ask in this thread. Thanks a lot.
AgitatedDove14 This is what I expected for the community version. It would be really nice to have a read-only link. My use case: when I have a merge request for a model modification, I need to provide several pieces of information for our Quality Management System, one of which is showing that the experiment is a success and the model has some improvement over the previous iteration. Obviously I don't want the reviewer to see all my failed experiments 😉 . So yes, it would be really nice to have read-only ...
Yes, I think this will work.
Oh can't wait to see this feature 👀
To retrieve metrics from an experiment I use this:

```python
from trains_agent import APIClient

client = APIClient()
client.events.get_scalar_metric_data(task=task_id, metric="name_of_metric")
```

Thanks to AgitatedDove14 who pointed this out to me.
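In case it helps anyone, the scalar events coming back can then be grouped into per-variant (iteration, value) series. This is only a sketch: the `iter`, `value`, and `variant` field names are my assumption about the event payload, not something guaranteed by the snippet above:

```python
def events_to_series(events):
    """Group scalar event dicts into {variant: [(iteration, value), ...]} series.
    Assumes each event dict carries 'variant', 'iter' and 'value' keys."""
    series = {}
    for ev in events:
        series.setdefault(ev.get("variant", "default"), []).append(
            (ev["iter"], ev["value"])
        )
    for points in series.values():
        points.sort()  # order each series by iteration
    return series

# Hypothetical events, shaped like the assumption above
sample = [
    {"variant": "loss", "iter": 1, "value": 0.9},
    {"variant": "loss", "iter": 0, "value": 1.2},
]
print(events_to_series(sample))  # {'loss': [(0, 1.2), (1, 0.9)]}
```

That gives you one sorted list of (iteration, value) pairs per variant, which is easy to plot or export.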