
…and the latest pre-release of hydra
AgitatedDove14 it is happening on an offline network; it would be tricky to set up, but we will try. So far the errors we observed were either:
Calling upload callback when starting upload: maximum recursion depth exceeded
Or
something like "pending for upload" (might be because we archived a run while it was uploading)
edit: tweaked it a little bit for my use-case:

import requests
from trains.backend_api import Session  # Session exposes the configured API server host

is_demo_server = 'http://demoapi.trains.allegro.ai' in Session.get_api_server_host()
is_server_available = requests.get(Session.get_api_server_host() + "/debug.ping").status_code == 200
the ok() call seems to crash
I'm doing this instead
hi AgitatedDove14, when I'm using the set_credentials approach, does it mean the trains.conf is redundant? If the file doesn't exist on the machine, will it be an issue? If not, what defaults should I assume for the rest of the values?
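For reference, a minimal sketch of the set_credentials flow I mean (the hosts and keys are placeholders for an assumed standard trains server setup):

from trains import Task

# inject the server addresses and credentials programmatically,
# instead of reading them from a trains.conf file
Task.set_credentials(
    api_host='http://localhost:8008',    # placeholder API server
    web_host='http://localhost:8080',    # placeholder web server
    files_host='http://localhost:8081',  # placeholder files server
    key='<app-key>',
    secret='<app-secret>',
)

task = Task.init(project_name='examples', task_name='no conf file')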
AgitatedDove14 it does, and it did, but for some reason I couldn't make it work this way..
I need some additional imports beforehand to infer the config path dynamically.. but even when I stripped the code down and made sure there were no other trains imports anywhere, it still didn't work..
AgitatedDove14 the option you mentioned just before sounds much better to me. I must admit I find the name of the method confusing; I came across it before but thought it was only relevant for credentials
is there a built-in programmatic way to adjust development.default_output_uri?
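Something along these lines is what I'm after (just a sketch, assuming Task.init's output_uri argument overrides development.default_output_uri for that task):

from trains import Task

# output_uri takes precedence over development.default_output_uri,
# so this task's checkpoints/artifacts are uploaded to this destination
task = Task.init(
    project_name='examples',
    task_name='explicit output destination',
    output_uri='s3://my-bucket/models',  # placeholder destination
)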
Hi AgitatedDove14, the thing I had in mind is having access to trains Logger exclusive features like https://allegro.ai/docs/logger.html#trains.logger.Logger.report_plotly and .report_table, for example. It can be done by explicitly getting the trains default logger, but I was wondering if there is some kind of combined interface that captures the properties of both in one object, especially because I came across the deprecated TrainsLogger
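Grabbing the trains logger explicitly looks roughly like this (a sketch; the DataFrame and plotly figure are stand-ins):

import pandas as pd
import plotly.graph_objects as go
from trains import Task

task = Task.init(project_name='examples', task_name='manual reporting')
logger = task.get_logger()

# trains-only reporting calls, not exposed by the framework logger
df = pd.DataFrame({'metric': ['acc', 'loss'], 'value': [0.9, 0.1]})
logger.report_table(title='run stats', series='summary', iteration=0, table_plot=df)

fig = go.Figure(go.Scatter(x=[0, 1, 2], y=[1, 4, 9]))
logger.report_plotly(title='interactive', series='scatter', iteration=0, figure=fig)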
AgitatedDove14 a single experiment that is being paused and resumed.
inconsistency in the reporting: when resuming at the 10th epoch, for example, and running an extra epoch, the clearml iteration count is wrong for debug images and monitored metrics.. somehow not for the scalar reporting
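In case it helps pin it down, this is roughly how I'd expect a resume to look (a sketch; continue_last_task and set_initial_iteration are my assumption of the intended resume mechanism):

from clearml import Task

# resume the previous task instead of creating a new one
task = Task.init(project_name='examples', task_name='resumed run',
                 continue_last_task=True)

# align the iteration counter with the epoch we resume from (10 here),
# so debug images and monitored metrics line up with the scalars
task.set_initial_iteration(10)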
great, I'm going to give it a try
AgitatedDove14 The use case is a conditional choice of server config, depending on whether it runs locally or on the cloud..
I was trying to do exactly as you mentioned, setting the environment variable before any trains import, but it didn't work (and it's also a mess in terms of my code).. I was hoping there is another way to go about it.. if not, I'll try to create a minimal reproducible example..
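What I tried looks roughly like this (a sketch; assuming TRAINS_CONFIG_FILE is the variable in question, with the paths and the condition as placeholders):

import os

# must run before the first trains import anywhere in the process,
# otherwise the config has already been loaded from the default location
on_cloud = bool(os.environ.get('CLOUD_ENV'))  # placeholder condition
os.environ['TRAINS_CONFIG_FILE'] = (
    '/configs/trains-cloud.conf' if on_cloud else '/configs/trains-local.conf'
)

from trains import Task  # first trains import, after the variable is set

task = Task.init(project_name='examples', task_name='conditional server')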
Thanks AgitatedDove14. Well, if a machine doesn't set default_output_uri, the default behavior for model checkpoints, for example, is to just register them without uploading. So when default_output_uri is not defined, the offline task folder will not have the artifacts for uploading (they are not included in the zip file created by the offline package).. or am I missing something?
thanks SuccessfulKoala55, the question arose after trying to follow the instructions you attached. It seems that installing Docker on Windows 10 Home is somewhat problematic
Hi AgitatedDove14, the path to the config file for trains manual execution
Hi AgitatedDove14, regarding the slider feature, do you know when it will be released?
much appreciated, thanks!
so it sounds like there is no known issue related to this
I have the latest clearml version, fresh from PyPI
yes that's what I meant.. this is good, thanks
cool, AgitatedDove14 so just to confirm:
To get the desired behavior (uploading artifacts on import_offline_session), the needed action is setting development.default_output_uri on the offline machine's run (and nothing else?)
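Put together, the flow as I understand it (a sketch, assuming trains ≥ 0.16 offline mode, with output_uri standing in for development.default_output_uri and the paths as placeholders):

# on the offline machine (no server access)
from trains import Task

Task.set_offline(offline_mode=True)
task = Task.init(
    project_name='examples',
    task_name='offline run',
    output_uri='http://files.server:8081',  # placeholder upload destination
)
# ... training, checkpoints, reporting ...
task.close()  # the console output should point at the generated offline session zip

# later, on a machine that can reach the server:
# Task.import_offline_session('/path/to/offline_session.zip')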
AgitatedDove14, I want multiple machines to access the synced state of the optimizer, which is part of the optimizer's internals... and then report the results back to the optimizer, so that its study object keeps track of the results and the next sample will be aware of all previous trials
So I can avoid running unnecessarily heavy common setup for a lightweight experiment
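To make the multi-machine optimizer setup concrete, this is the kind of thing I mean (a sketch only; Optuna here is my stand-in for the optimizer, and the storage URL is a placeholder for a DB all machines can reach):

import optuna

def objective(trial):
    # placeholder objective; the real one would run the actual experiment
    x = trial.suggest_float('x', -10.0, 10.0)
    return x ** 2

# every machine points at the same storage, so the study state
# (all previous trials) is shared and each new sample sees all results
study = optuna.create_study(
    study_name='shared-study',
    storage='postgresql://user:pass@db-host/optuna',  # placeholder shared DB
    load_if_exists=True,
)
study.optimize(objective, n_trials=20)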