To put it a bit differently, I am looking for a way to manually sample from the optimizer and report the results back to it
That way I can avoid running the unnecessary heavy common setup for a lightweight experiment
AgitatedDove14 , I want multiple machines to access the synced state of the optimizer (which is part of the optimizer's internals), and then report their results back to it, so that the optimizer's study object keeps track of all results and the next sample is aware of all previous trials
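Something like this ask/tell pattern is what I have in mind (a rough sketch, assuming an Optuna study backs the optimizer; the study name, the storage URL, and the training function are placeholders):
```python
import optuna

# Shared storage is what lets multiple machines see the same study state;
# the sqlite URL here is a placeholder for any shared RDB backend.
study = optuna.create_study(
    study_name="my-search",
    storage="sqlite:////shared/optuna.db",
    direction="minimize",
    load_if_exists=True,
)

trial = study.ask()  # manually sample a new trial
lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)

loss = run_lightweight_experiment(lr)  # placeholder for the actual experiment
study.tell(trial, loss)  # report back; the next sample is aware of this result
```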
AgitatedDove14 the option you mentioned just before sounds much better to me. I must admit I find the name of the method confusing; I came across it before but thought it was only relevant for credentials
hi AgitatedDove14 , when I'm using the set_credentials approach, does it mean trains.conf is redundant? If the file doesn't exist on the machine, will that be an issue? If not, what defaults should I assume for the rest of the values?
AgitatedDove14 The use case is a conditional choice of server config, depending on whether it runs locally or in the cloud..
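For illustration, a sketch of what I mean (it assumes Task.set_credentials takes precedence over trains.conf; all hostnames, ports, keys, and the environment check are placeholders):
```python
import os
from trains import Task

def running_in_cloud():
    # placeholder predicate for detecting the environment
    return os.environ.get("RUNNING_IN_CLOUD") == "1"

# must be called before Task.init
if running_in_cloud():
    Task.set_credentials(
        api_host="https://api.cloud.example.com",
        web_host="https://app.cloud.example.com",
        files_host="https://files.cloud.example.com",
        key="CLOUD_KEY",
        secret="CLOUD_SECRET",
    )
else:
    Task.set_credentials(
        api_host="http://localhost:8008",
        web_host="http://localhost:8080",
        files_host="http://localhost:8081",
        key="LOCAL_KEY",
        secret="LOCAL_SECRET",
    )

task = Task.init(project_name="example", task_name="conditional config")
```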
I was trying to do exactly as you mentioned, setting the environment variable before any trains import, but it didn't work (and it also makes a mess of my code).. I was hoping there is another way to go about it.. if not, I'll try to create a minimal reproducible example..
Hi AgitatedDove14 , the path to the config file, for a manual trains execution
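In case it helps, this is the shape of what I tried (a sketch; it assumes the TRAINS_CONFIG_FILE environment variable is only honored if set before the first trains import, and the path is a placeholder):
```python
import os

# must happen before anything imports trains, otherwise the
# default configuration has already been loaded
os.environ["TRAINS_CONFIG_FILE"] = "/path/to/alternate_trains.conf"  # placeholder

from trains import Task

task = Task.init(project_name="example", task_name="custom config file")
```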
I'll try to go with this option, I think it's actually perfect for my needs
by WebApp do you mean the public online one? I might be confusing things
by communication, do you mean that the artifacts are streamed from the machine running the experiments to the local server?
can it be done "offline"? i.e., after the experiments run, can I view them on my local server?
I think the latter. The specific use case I'm talking about is running experiments on one machine, and using a local server on another machine to read the "logs" / artifacts
yes, I will be happy to, it's gonna be my first time
yes, I have limited access to the machine that is running the experiment; I can't set up a server there, but I want to collect the results and view them later
I'm referring to all the info that is accessible through the WebApp
is there a built-in programmatic way to adjust development.default_output_uri ?
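In case a per-task override is enough, a sketch of what I'd try (assuming Task.init's output_uri parameter acts as the programmatic counterpart of development.default_output_uri; the bucket is a placeholder):
```python
from trains import Task

# checkpoints and models registered by this task should be uploaded here,
# instead of being registered with local paths only
task = Task.init(
    project_name="example",
    task_name="programmatic output uri",
    output_uri="s3://my-bucket/experiments",  # placeholder destination
)
```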
great, I'm going to give it a try
Thanks AgitatedDove14 . Well, if a machine doesn't set default_output_uri, the default behavior for model checkpoints, for example, is to just register them without uploading. So when default_output_uri is not defined, the offline task folder will not contain the artifacts for uploading (they are not included in the zip file created by the offline package).. or am I missing something?
cool, AgitatedDove14 , so just to confirm: to get the desired behavior (uploading artifacts on import_offline_session), the only needed action is setting development.default_output_uri on the offline machine's run (and nothing else?)
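So the full round trip I have in mind looks roughly like this (a sketch, assuming the offline-mode API of recent trains versions; the paths and the bucket are placeholders):
```python
from trains import Task

# --- on the machine without server access ---
Task.set_offline(offline_mode=True)
task = Task.init(
    project_name="example",
    task_name="offline run",
    output_uri="s3://my-bucket/experiments",  # the setting under discussion
)
# ... run the experiment; everything is packaged into a local session zip ...

# --- later, on a machine that can reach the server ---
Task.import_offline_session("/path/to/offline_session.zip")  # placeholder path
```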
edit: tweaked it a little bit for my use-case:
import requests
from trains.backend_api import Session

# detect the public demo server, and check whether the configured server responds at all
is_demo_server = 'http://demoapi.trains.allegro.ai' in Session.get_api_server_host()
is_server_available = requests.get(Session.get_api_server_host() + "/debug.ping").status_code == 200
TypeError: 'bool' object is not callable
I'm doing this instead
much appreciated, thanks!
thanks SuccessfulKoala55 , the question arose after trying to follow the instructions you attached. It seems that installing Docker on Windows 10 Home is somewhat problematic
yes that's what I meant.. this is good, thanks
I have the latest clearml version, fresh from PyPI
and the latest pre-release of Hydra