That should do the trick, thanks 🙂
But it still doesn't answer one thing: why did it fail on the git diff when I cloned a previously successful experiment?
Something else: if I want to designate only some of a worker's GPUs, how can I do that?
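Something like this, maybe, assuming the daemon's --gpus flag is the right mechanism (the queue name is just an example):

    # limit this agent to GPUs 0 and 1 only
    trains-agent daemon --gpus 0,1 --queue default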
I am not sure what you mean by verifying the API.
Yes. More precisely, I'm opening them with gzip.open, but I don't believe it should matter.
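For reference, this is all I do with the files; plain stdlib, and the path is just illustrative:

    import gzip

    # read the compressed file, decompressing on the fly
    with gzip.open("data/sample.json.gz", "rt", encoding="utf-8") as f:
        content = f.read()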
It uses the API credentials generated by the trains dashboard.
SuccessfulKoala55 I found the temp files; they contain the supposed worker ID, which seems just fine.
Including this?
auto_connect_frameworks={"matplotlib": False}
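That is, roughly this in my script (project and task names are placeholders):

    from trains import Task

    # keep auto-logging for everything except matplotlib figures
    task = Task.init(
        project_name="my_project",      # placeholder
        task_name="my_experiment",      # placeholder
        auto_connect_frameworks={"matplotlib": False},
    )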
I think Mushon told me otherwise a while ago.
Could it be because it's running from a draft on an agent?
I understand how this is problematic. This might require more thinking if you guys wish to support this.
My root folder is specific to my user only. I want to use a shared trains.conf file, so the trains_config_file can't point to ~/trains.conf, sadly.
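What I'd like is to point everything at the shared file instead, e.g. via the TRAINS_CONFIG_FILE environment variable, if that's the right mechanism (the shared path below is made up):

    # point the SDK at a shared config instead of ~/trains.conf
    export TRAINS_CONFIG_FILE=/shared/config/trains.conf
    python train.py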
I think so. The issue is that I want to report only a subset of the images (for example, I create an image for every sample in the dataset but I only want to display the top 10 with the highest score in trains), but when it's magically logged I have no control over this. What can be done?
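What I have in mind is disabling the automatic capture and reporting the selected images myself, roughly like this (names, paths and the scoring are placeholders, and I'm assuming Logger.report_image is the right call):

    from collections import namedtuple
    from trains import Task

    Sample = namedtuple("Sample", ["score", "image_path"])
    samples = [Sample(0.9, "imgs/a.png"), Sample(0.2, "imgs/b.png")]  # placeholder data

    task = Task.init(project_name="my_project", task_name="top_images",   # placeholders
                     auto_connect_frameworks={"matplotlib": False})
    logger = task.get_logger()

    # keep only the highest-scoring samples and report those explicitly
    top10 = sorted(samples, key=lambda s: s.score, reverse=True)[:10]
    for rank, sample in enumerate(top10):
        logger.report_image(title="top_samples", series="rank_%d" % rank,
                            iteration=0, local_path=sample.image_path)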
Hey, I've gotten this message:
TRAINS Task: overwriting (reusing) task id=24ac52461b2d4cfa9e672d9cd817962c
And I'm not sure why it's reusing the task instead of creating a new task ID; the configuration was different, although the same Python file was run. Do you have any idea?
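For what it's worth, is forcing a fresh task with reuse_last_task_id=False the intended way around this? A sketch (names are placeholders):

    from trains import Task

    # force a brand-new task id instead of overwriting the previous run
    task = Task.init(project_name="my_project", task_name="same_script_new_config",
                     reuse_last_task_id=False)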
It's important to say that this happens when I have more than about 4 workers, but when I run trains-agent daemon --stop with fewer than 4 workers it works well.
I'm confused. Why would my local code matter when I'm trying to replicate an experiment that has already run?
Also, between which files is the git diff performed? (I've seen the line diff --git a/.../run.py b/.../run.py,
but I'm not sure what a and b are in this context.)
Hmm, that's quite an awkward syntax. How does this work when I want to apply "AND" logic only to the tags in the task_filter?
Actually, two machines with a shared filesystem.
I'll do as Jake says. Thanks :)
How could I configure this in the docker compose?
Edit: the trains-agent points to a different trains.conf as I wish; I want the dev environment to point to a trains.conf in a different location as well.
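To be concrete, is it just a matter of mounting the file and setting TRAINS_CONFIG_FILE on the relevant service? A rough sketch of what I'd add to the compose file (the service name and paths are made up, and I'm assuming the env var is honoured inside the container):

    services:
      trains-agent:                                        # hypothetical service name
        environment:
          - TRAINS_CONFIG_FILE=/shared/trains.conf
        volumes:
          - /srv/shared/trains.conf:/shared/trains.conf:ro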