Issue open on the clearml-server GitHub: https://github.com/allegroai/clearml-server/issues/89 . Thanks for your help.
Yes, the tag is fixed.
Thanks a lot, I'll check how to do this correctly.
I call it like that: `logger.clearml_logger.report_image(self.tag, f"{self.tag}_{iteration:0{pad}d}", epoch, image=image)`. `self.tag` is `train` or `valid`, and `iteration` is an int for the minibatch index within the epoch.
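For context, a call like that can sit inside an ignite event handler roughly as in the sketch below; the project/task names, the placeholder image, and the handler wiring are illustrative assumptions rather than the actual code from this thread:

```python
# A minimal sketch, not the thread's actual code: report one debug image per
# minibatch from an ignite handler. Task/project names and the image are placeholders.
import numpy as np
from ignite.engine import Engine, Events
from trains import Task, Logger

task = Task.init(project_name="examples", task_name="debug-images")  # hypothetical names
logger = Logger.current_logger()

def train_step(engine, batch):
    # ... real training code would go here ...
    return 0.0

trainer = Engine(train_step)
pad = 4  # zero-padding width for the iteration suffix

@trainer.on(Events.ITERATION_COMPLETED)
def report_debug_image(engine):
    epoch = engine.state.epoch
    iteration = engine.state.iteration
    image = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)  # placeholder image
    logger.report_image("train", f"train_{iteration:0{pad}d}", epoch, image=image)
```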
Wow, that's really nice. I wrote almost the same code: a `TrainsLogger`, a `TrainsSaver`, and all the `OutputHandler`s. I'll use your version instead and will comment if I find something.
I made the experiment on the allegroai demo server and it's the same: https://demoapp.trains.allegro.ai/projects/fcf3f3fb1013434eb2001870990e5e94/experiments/6ed32a2b5a094f2da47e6967bba1ebd0/output/debugImages . I really think it's a technical limitation that not all the images are displayed, am I right?
Well, I use `ignite` and `trains-server` with logging similar to `ignite.contrib.handlers`, so I will be very happy to test this integration.
It works well, I just need to use the `task.id` for the `task=''` argument. I thought I could use the `task.name`. It's perfect, thanks AgitatedDove14.
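Since `task=''` wants the id rather than the name, one way to look up the id from the name is something like the sketch below (the project and task names are placeholders):

```python
# A minimal sketch, assuming the project/task names below exist on your server.
from trains import Task

# Look up a task by its human-readable name and grab its id for API calls.
task = Task.get_task(project_name="my_project", task_name="my_experiment")
task_id = task.id
print(task_id)
```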
So I see two options:
1. Reducing the number of images reported (already in our plan)
2. Making one big image per epoch (see the sketch below)
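For the second option, a rough sketch of tiling the per-minibatch images into one grid and reporting it once per epoch (the grid shape, image source, and the commented report call are placeholders, not the project's actual code):

```python
# A minimal sketch: collect debug images during the epoch and report a single
# tiled image instead of one image per minibatch.
import numpy as np

def make_grid(images, columns=10):
    """Tile equally-sized HxWxC images into one big image, row by row."""
    h, w, c = images[0].shape
    rows = (len(images) + columns - 1) // columns
    grid = np.zeros((rows * h, columns * w, c), dtype=images[0].dtype)
    for idx, img in enumerate(images):
        r, col = divmod(idx, columns)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = img
    return grid

# Hypothetical usage at the end of an epoch:
epoch_images = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(100)]
big_image = make_grid(epoch_images)
# logger.report_image("train", f"train_epoch_{epoch:04d}", epoch, image=big_image)
```

Reporting one tiled image per epoch keeps the number of reported debug samples (and hence the Elasticsearch buckets behind them) much lower.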
Even simpler than a GitHub issue, this code reproduces the issue I have.
SuccessfulKoala55 feel free to roast my errors.
I'm in the process of setting up a `trains` stack for my projects, and so far it works great. Thanks for this awesome work.
I have made some changes in the code: `logger.clearml_logger.report_image(self.tag, f"{self.tag}_{epoch:0{pad}d}", iteration=iteration, image=image)`. The `epoch` range is 0-150 and the `iteration` range is 0-100, and the error is still there.
```
General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] clus...
```
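Separately from reducing the number of images, the `search.max_buckets` cluster setting mentioned in the error can be raised on the Elasticsearch instance behind the server; a rough sketch, assuming it is reachable on localhost:9200 and that a higher limit is acceptable for your deployment:

```python
# A minimal sketch, assuming the clearml-server Elasticsearch is reachable at
# localhost:9200 (adjust host/port and the new limit for your deployment).
import requests

resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"search.max_buckets": 50000}},
)
resp.raise_for_status()
print(resp.json())
```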
Is it better on `clearml` or `clearml-server`?
I have 6 plots with one or two metrics each, but I have a lot of debug samples.
AgitatedDove14 This is what I expected for the community version. It would be really nice to have a read-only link. My use case is when I have a merge request for a model modification: I need to provide several pieces of information for our Quality Management System, one of which is to show that the experiment is a success and the model has some improvement over the previous iteration. Obviously I don't want the reviewer to see all my failed experiments 😉 . So yes, it would be really nice to have read-only ...
Oh can't wait to see this feature 👀
You need to change a setting on your host machine to make elasticsearch work.
Is it possible to get all the iterations for one specific metric? Let's say I have this metric logged. Will I be able to retrieve these series?
To retrieve metrics from an experiment I use this:
```python
from trains_agent import APIClient

client = APIClient()
client.events.get_scalar_metric_data(task=task_id, metric="name_of_metric")
```
Thanks to AgitatedDove14 who pointed this out to me.
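If it helps, iterating over the returned events might look roughly like this; I'm assuming the response exposes an `events` list whose entries have `iter` and `value` fields, so the field names are worth double-checking against your server version:

```python
# A rough sketch, assuming the response object exposes an `events` list and each
# event carries `iter` and `value` keys (field names are an assumption to verify).
from trains_agent import APIClient

client = APIClient()
task_id = "your_task_id_here"  # placeholder: the experiment's id string
response = client.events.get_scalar_metric_data(task=task_id, metric="name_of_metric")

for event in response.events:
    print(event.get("iter"), event.get("value"))
```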
Oh sorry, I was thinking about `ignite` (I don't know why), not trains. The only way I know is to use a different name when saving. I personally use `f"{file_name}_{epoch}_{iteration}"`.
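As a small illustration of that naming scheme (assuming a PyTorch model; the file name prefix, directory, and counters are placeholders):

```python
# A minimal sketch, assuming a PyTorch model; only the naming pattern comes from
# the message above, the rest is illustrative.
import os
import torch
import torch.nn as nn

model = nn.Linear(10, 2)          # placeholder model
file_name = "checkpoints/model"   # placeholder prefix
epoch, iteration = 3, 42

os.makedirs("checkpoints", exist_ok=True)
# Embedding epoch and iteration in the file name keeps every checkpoint distinct.
torch.save(model.state_dict(), f"{file_name}_{epoch}_{iteration}.pt")
```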