is there a built-in programmatic way to adjust development.default_output_uri?
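(for reference, the closest thing I know of is overriding it per task, assuming I understand Task.init correctly; what I'm asking about is changing the default itself:)

    from trains import Task

    # assumption: passing output_uri to Task.init overrides development.default_output_uri
    # for this task only; the bucket path below is just a placeholder
    task = Task.init(
        project_name='examples',
        task_name='output uri override',
        output_uri='s3://my-bucket/trains-artifacts',
    )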
AgitatedDove14 a single experiment that is being paused and resumed.
inconsistency in the reporting: when resuming at the 10th epoch, for example, and doing an extra epoch, the clearml iteration count is wrong for debug images and monitored metrics.. somehow not for the scalar reporting
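(the kind of workaround I was thinking of, assuming Task.set_initial_iteration exists in the version I'm running:)

    from trains import Task

    # sketch only: when resuming from the 10th epoch, shift the reported iteration
    # offset so debug images / monitored metrics line up with the scalars
    task = Task.current_task()
    task.set_initial_iteration(10)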
I'm doing this instead
yes that's what I meant.. this is good, thanks
and I will also be happy to see if I can contribute, maybe to this specific feature or maybe to others
TypeError: 'bool' object is not callable
to put it a bit differently, I am looking for a way to manually sample and report from and to the optimizer
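(something along the lines of Optuna's ask/tell interface, not trains' API, just to illustrate the manual sample/report pattern I mean:)

    import optuna

    # illustration only: manually ask the optimizer for a parameter sample,
    # run the trial myself, then report the result back to the optimizer
    study = optuna.create_study(direction='minimize')
    for _ in range(10):
        trial = study.ask()                                   # manually sample
        lr = trial.suggest_float('lr', 1e-5, 1e-1, log=True)  # hypothetical hyperparameter
        loss = (lr - 0.01) ** 2                               # placeholder for my training loop
        study.tell(trial, loss)                               # manually report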
edit: tweaked it a little bit for my use-case:

    import requests
    from trains.backend_api import Session

    # is the configured API server the public demo server?
    is_demo_server = 'http://demoapi.trains.allegro.ai' in Session.get_api_server_host()
    # is the configured API server reachable? (debug.ping should return 200)
    is_server_available = requests.get(Session.get_api_server_host() + "/debug.ping").status_code == 200
Hi AgitatedDove14, regarding the slider feature, do you know when it would be released?
AgitatedDove14 The use case is conditional choice of a server config when run locally vs. on the cloud..
I was trying to do exactly as you mentioned, setting the environment variable before any trains import, but it didn't work (and it also makes a mess of my code).. I was hoping there is another way to go about it.. if not, I'll try to create a minimal reproducible example..
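(for completeness, this is roughly what I tried, assuming TRAINS_CONFIG_FILE is the right variable and that it is read on the first trains import; the flag and paths are placeholders from my setup:)

    import os

    # pick the config file before trains is imported anywhere
    on_cloud = bool(os.environ.get('RUNNING_ON_CLOUD'))
    os.environ['TRAINS_CONFIG_FILE'] = (
        '/path/to/trains_cloud.conf' if on_cloud else '/path/to/trains_local.conf'
    )

    from trains import Task  # must come after the variable is set

    task = Task.init(project_name='examples', task_name='conditional server config')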
AgitatedDove14 the option you mentioned just before sounds much better for me, I must admit I find the name of the method confusing. I came across it before but thought it was only relevant for credentials
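(assuming the method in question is Task.set_credentials, this is how I understand it would be used to point at a different server, not just to set keys; hosts and keys below are placeholders:)

    from trains import Task

    # assumption: set_credentials accepts the server hosts as well as the key/secret,
    # and must be called before Task.init
    Task.set_credentials(
        api_host='http://localhost:8008',
        web_host='http://localhost:8080',
        files_host='http://localhost:8081',
        key='my-access-key',
        secret='my-secret-key',
    )
    task = Task.init(project_name='examples', task_name='local server config')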
by WebApp you mean the public online one? I might be confusing stuff
by communication, do you mean that the artifacts are streamed from the machine running the experiments to the local server?
can it be done "offline", i.e. after the experiments run, view them in my local server?
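(what I have in mind is something like an offline mode, if trains supports it in my version: record everything locally while the experiment runs, then import the session into my local server afterwards:)

    from trains import Task

    # assumption: Task.set_offline / Task.import_offline_session exist in this version
    # on the machine running the experiment (no server connectivity needed):
    Task.set_offline(offline_mode=True)
    task = Task.init(project_name='examples', task_name='offline run')
    # ... training, reporting, artifacts ...
    task.close()  # everything is written to a local session folder / zip

    # later, on a machine that can reach my local server (path is a placeholder):
    # Task.import_offline_session('/path/to/offline_session.zip')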
Hi AgitatedDove14 , path to the config file for trains manual execution
I refer to all the info that is accessible through the WebApp