cool, AgitatedDove14 so just to confirm:
To get the desired behavior (artifacts uploaded on import_offline_session), the needed action is setting development.default_output_uri for the offline machine's run, and nothing else?
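For reference, this is roughly the flow I have in mind, as a rough sketch (project/task names and the session path are placeholders I made up):

```python
from clearml import Task

# on the machine without connectivity
Task.set_offline(offline_mode=True)
task = Task.init(project_name="offline-tests", task_name="run-1")  # placeholder names
# ... training code; everything is captured into a local offline session folder/zip ...
task.close()

# later, on a machine that can reach the server
Task.import_offline_session("/path/to/offline_session.zip")  # placeholder path
```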
AgitatedDove14 it is happening on an offline network, so it would be tricky to set up, but we will try. So far the errors we observed were either:
Calling upload callback when starting upload: maximum recursion depth exceeded
Or
something like pending for upload (might be because we archived a run while it was uploading)
Hi AgitatedDove14, regarding the slider feature, do you know when it would be released?
Thanks! I'll have a look and see if I have some useful ideas
great, I'm going to give it a try
the ok() call seems to crash
to put it a bit differently, I am looking for a way to manually sample and report from and to the optimizer
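In case it helps to make it concrete, this is a minimal sketch of what I mean by "manually sample and report", assuming the optimizer underneath is an Optuna study (the hyperparameter and the objective stub below are made up):

```python
import optuna


def train_and_evaluate(lr: float) -> float:
    # placeholder objective standing in for the real training run
    return (lr - 0.01) ** 2


study = optuna.create_study(direction="minimize")

trial = study.ask()                                    # manually sample a configuration
lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)   # made-up hyperparameter
study.tell(trial, train_and_evaluate(lr))              # manually report the result back
```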
"does not support running with no server connection." this is what I was afraid of..I'll need to figure out if I can use trains at all 😞
by WebApp you mean the public online one? I might be confusing stuff
yes, I will be happy to, it's gonna be my first time
yes, I have limited access to the machine that is running the experiment. I can't set up a server there, but I want to collect the results and view them later
thanks SuccessfulKoala55, the question arose after trying to follow the instructions you attached. It seems that installing Docker on Windows 10 Home is somewhat problematic
if I don't have internet connection on the other machine, can I just copy the artifacts and transfer them to my local machine?
by communication do you mean that the artifacts are streamed from the machine running the experiments to the local server?
can it be done "offline"? i.e. after the experiments run, can I view them in my local server?
TypeError: 'bool' object is not callable
AgitatedDove14, I want multiple machines to access the synced state of the optimizer, which is part of the optimizer's internals, and then report the results back to the optimizer, so that its study object keeps track of the results and the next sample will be aware of all previous trials
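Something along these lines is what I am after, assuming Optuna underneath (the shared storage URL and search space are placeholders): every machine runs the same script, and because they all point at the same storage, each new sample is aware of all trials reported so far.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)  # placeholder search space
    return (x - 2) ** 2                    # placeholder objective


# run this same script on every machine
study = optuna.create_study(
    study_name="shared-hpo",                    # placeholder name
    storage="sqlite:////mnt/shared/optuna.db",  # placeholder shared storage URL
    load_if_exists=True,
)
study.optimize(objective, n_trials=20)
```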
and the latest pre-release Hydra
I have the latest clearml version, fresh from PyPI
I think the latter. The specific use-case I'm talking about is running experiments on one machine, and using a local server on another machine to read the "logs" / artifacts
much appreciated, thanks!
So I can avoid running unnecessary common heavy setup for a lightweight experiment
edit: tweaked it a little bit for my use-case:
is_demo_server = 'http://demoapi.trains.allegro.ai' in Session.get_api_server_host()
is_server_available = requests.get(Session.get_api_server_host() + "/debug.ping").status_code == 200
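wrapped as a small helper for the "skip the heavy setup" case, just a sketch (the import path and the timeout are my guesses):

```python
import requests
from clearml.backend_api.session import Session  # assuming this import path


def server_reachable(timeout: float = 3.0) -> bool:
    """Return True if the configured API server answers the debug ping."""
    try:
        url = Session.get_api_server_host() + "/debug.ping"
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False


if server_reachable():
    pass  # run the heavy common setup only when the server is reachable
```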
AgitatedDove14 a single experiment that is being paused and resumed.
inconsistency in the reporting: when resuming at the 10th epoch, for example, and doing an extra epoch, the clearml iteration count is wrong for debug images and monitored metrics... somehow not for the scalar reporting
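if it helps, this is the kind of workaround I was thinking of trying, assuming set_initial_iteration does what I think it does (names and the offset below are made up):

```python
from clearml import Task

task = Task.init(project_name="demo", task_name="resumable run",  # placeholder names
                 continue_last_task=True)                         # resume the same task

steps_per_epoch = 100                              # placeholder
task.set_initial_iteration(10 * steps_per_epoch)   # tell clearml where we really resume from
```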
is there a built-in programmatic way to adjust development.default_output_uri?
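the closest programmatic knob I found so far is the per-task output_uri on Task.init, though I am not sure it covers the same ground as development.default_output_uri (names and destination below are placeholders):

```python
from clearml import Task

task = Task.init(
    project_name="demo",                        # placeholder
    task_name="explicit output destination",    # placeholder
    output_uri="file:///mnt/shared/artifacts",  # placeholder destination
)
```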
Hi AgitatedDove14 the thing I had in mind is having access to Trains Logger-exclusive features like https://allegro.ai/docs/logger.html#trains.logger.Logger.report_plotly and .report_table, for example. It can be done by explicitly getting the Trains default logger, but I was wondering if there is some kind of combined interface that captures the properties of both in one object, especially because I came across the deprecated TrainsLogger
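for example, something like this is what I do today by grabbing the task's logger explicitly (names and dummy data are placeholders):

```python
import pandas as pd
import plotly.graph_objects as go
from clearml import Task

task = Task.init(project_name="demo", task_name="logger features")  # placeholder names
logger = task.get_logger()

# table reporting
df = pd.DataFrame({"metric": ["loss", "acc"], "value": [0.12, 0.97]})
logger.report_table(title="summary", series="epoch 1", iteration=1, table_plot=df)

# plotly reporting
fig = go.Figure(data=go.Scatter(y=[1, 3, 2]))
logger.report_plotly(title="curve", series="demo", iteration=1, figure=fig)
```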
and I will also be happy to see if I can contribute, maybe to this specific feature or maybe to others