the images do not show up in debug_samples. How can I check what is wrong?
AgitatedDove14 FYI: I am using pytorch
apiserver logs were clean, only 200s there
and the experiment did not produce any logs, shall I enable some debug flag?
that's ok, I think that the race condition will be a non-issue. Thanks for checking!
AgitatedDove14 I do not want to push you in any way, but if you could give me an estimate of the slurm glue code, that would be helpful. I should have a local installation of the trains server to experiment with next week.
AgitatedDove14 if I use report_image can I get a URL to it somehow?
some piece of html+js code that you can add that governs how to visualize debug_samples from experiments that are already finished; think of adding an overlay of two types of images post factum
yes, this is what I found as well
Not sure yet, I will get back to you on this later, in 1-2 weeks, thanks.
No, they were not, SuccessfulKoala55
sure, we can deal with the drivers
AgitatedDove14 thanks, that will be helpful!
that was quick, thanks!
AgitatedDove14 going back to the slurm subject, I have a local trains server installed on the cluster with slurm, so I am ready to test. At the same time I was thinking whether a simple solution would do the job (rough sketch after the list below):
a) [scale up agents] monitor the trains queue, if there is something that was not started for some amount of time, and the number of agents is not yet at the maximum, then add an agent,
b) [scale down agents] if all the tasks are running and there are idle agents, kill an idle agent.
Or do yo...
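Something along these lines is what I have in mind, as a rough sketch only; the APIClient endpoints and response fields (workers.get_all, queues.get_all, entry.added, worker.task) are my assumptions, and run_trains_agent.sh / job_id_for() are hypothetical helpers:
```python
import subprocess
import time
from datetime import datetime, timezone

from trains.backend_api.session.client import APIClient

QUEUE_NAME = "default"      # hypothetical queue name
MAX_AGENTS = 4
PENDING_GRACE_SEC = 120     # how long a task may wait before we scale up


def job_id_for(worker):
    """Map a worker to its slurm job id (hypothetical convention: the agent
    encodes the job id in its worker name at startup)."""
    return worker.id.rsplit(":", 1)[-1]


client = APIClient()
while True:
    workers = client.workers.get_all()
    queue = client.queues.get_all(name=QUEUE_NAME)[0]
    entries = queue.entries or []

    if entries and len(workers) < MAX_AGENTS:
        oldest = min(e.added for e in entries)
        if isinstance(oldest, str):  # the client may return ISO strings
            oldest = datetime.fromisoformat(oldest.replace("Z", "+00:00"))
        if (datetime.now(timezone.utc) - oldest).total_seconds() > PENDING_GRACE_SEC:
            # (a) scale up: submit one more agent as a slurm job
            subprocess.run(["sbatch", "run_trains_agent.sh"], check=True)
    elif not entries:
        idle = [w for w in workers if not getattr(w, "task", None)]
        if idle:
            # (b) scale down: cancel the slurm job of one idle agent
            subprocess.run(["scancel", job_id_for(idle[0])], check=True)

    time.sleep(30)
```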
AgitatedDove14 I looked at the K8s glue code, having something similar but for SLURM would be great!
have some kind of an add-on, not as a widget but in an external system (this is not the preferred way, of course)
so far everything works; the only problem I can think of is a race condition, which I will probably ignore. It happens in the following scenario (see the sketch after the list):
a) a worker finishes its current run, turns into an idle state,
b) my script scrapes the status of the worker, which is idle,
c) a new task is enqueued and picked by the worker,
d) the worker is killed after it managed to pull a task from the queue, so the task will be cancelled as well.
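If it ever does become an issue, I would probably just double-check right before killing, which shrinks the window but does not close it. A sketch, where workers.get_all and the worker.task field are my assumptions about the API:
```python
import time


def kill_if_still_idle(client, worker, kill_fn, recheck_delay=10):
    # first look: never touch a busy worker
    if getattr(worker, "task", None):
        return False
    time.sleep(recheck_delay)
    # second look immediately before killing; if the worker picked up a
    # task in the meantime (step c above), leave it alone
    for fresh in client.workers.get_all():
        if fresh.id == worker.id and not getattr(fresh, "task", None):
            kill_fn(fresh)  # e.g. scancel on the agent's slurm job
            return True
    return False
```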
For the images themselves, you can get their URLs
how can I do it?
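Something like this is what I imagine, as a sketch only; the events.debug_images call and the response layout (metrics → iterations → events → url) are my guesses:
```python
from trains.backend_api.session.client import APIClient

client = APIClient()
# "<task-id>" is a placeholder for the experiment's id
res = client.events.debug_images(task="<task-id>", iters=1)
for metric in res.metrics:
    for iteration in metric.iterations:
        for event in iteration.events:
            print(event.url)  # URL of the uploaded debug image
```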
AgitatedDove14 thanks for the additional information:
yes, the report_image problem was resolved after I reordered the dimensions in the tensor. Is there an advantage in using tensorboard over your reporting? HTML reporting looks powerful, can one inject some javascript inside?
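For reference, the reordering that fixed it for me looks roughly like this (project/task names are made up, and older trains versions may call the image argument matrix):
```python
import torch
from trains import Task

task = Task.init(project_name="demo", task_name="debug-images")
tensor = torch.rand(3, 64, 64)                 # pytorch convention: CxHxW
image = tensor.permute(1, 2, 0).cpu().numpy()  # report_image wants HxWxC
task.get_logger().report_image(
    title="debug", series="sample", iteration=0, image=image
)
```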
yes, but the local output was completely empty
since I am using the demo server, I should make sure that in the configuration file the images will be uploaded to the appropriate server, right? Can you please point me to the proper line in the config file?
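From what I can tell, the relevant line should be the files server address in the api section of ~/trains.conf; alternatively, one can point the logger at the right server from code. A sketch, with a placeholder URL and a set_default_upload_destination call that I have not verified:
```python
from trains import Task

task = Task.init(project_name="demo", task_name="upload-check")  # made-up names
# placeholder address; the config-file equivalent should be the
# api.files_server line in ~/trains.conf
task.get_logger().set_default_upload_destination("http://files.example.com:8081")
```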
But I can't do this from the web ui, can I?
AgitatedDove14 I meant the following scenario:
trains-agents will be running as slurm jobs (possibly for a very long time); there is a program running on an access node of the cluster (where no computation happens, but from where one can submit jobs to slurm); this program checks whether there are too few or too many agents running and adjusts their number by cancelling some or spinning up new ones (sketch below).
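Concretely, the run_trains_agent.sh payload from the sketch above could be generated and submitted like this (partition/time values are placeholders, and the --foreground flag is my assumption about trains-agent):
```python
import subprocess
import tempfile

# batch script that keeps one trains-agent alive as a slurm job
BATCH = """#!/bin/bash
#SBATCH --partition=compute
#SBATCH --time=24:00:00
trains-agent daemon --queue default --foreground
"""

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(BATCH)
    script = f.name

subprocess.run(["sbatch", script], check=True)  # submit from the access node
```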
thanks, next time I will provide you with all the logs