just doubles the setup time...
Another (minor) issue is that all the packages that are installed using git+https are cloned and installed twice, immediately one after the other
Otherwise I might as well replace it with my own script that simply sends the configurations list to some random training server, which in turn will execute a version of TimelyPenguin76's script. The user will not even be aware of that; from their point of view, the experiments will miraculously appear in the UI 🙂
That's an optional plan B
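A minimal sketch of what that plan B could look like on the training-server side, assuming the trains SDK is available there; the project name, parameter keys, and the training step are placeholders, not TimelyPenguin76's actual script:

```python
from trains import Task


def run_configurations(configurations):
    """Run every configuration directly on this machine (no agent involved).

    Each run calls Task.init, so the experiment is still reported to the
    trains server and shows up in the UI.
    """
    for i, params in enumerate(configurations):
        task = Task.init(
            project_name="remote sweeps",        # placeholder project name
            task_name="config {}".format(i),
            reuse_last_task_id=False,            # one experiment per configuration
        )
        task.connect(params)                     # log the configuration parameters
        # ... actual training code would run here, using `params` ...
        task.close()                             # finish before starting the next run


if __name__ == "__main__":
    # Example of the configurations list a client-side script might send.
    run_configurations([
        {"learning_rate": 0.01, "batch_size": 32},
        {"learning_rate": 0.001, "batch_size": 64},
    ])
```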
Is it possible, instead of reinstalling the packages, to simply switch to a different existing virtualenv, at the beginning of the task execution?
This is certainly a good way, which we use today.
I am looking for an even simpler way for less technical people, who could apply this remotely using the UI.
edit: the issue is less about technical level and more about access to the training machines
Manually editing the requirements is indeed a step forward, thanks.
The current challenge is installing with the "--no-binary" flag.
For example:
h5py==2.10.0 --no-binary=h5py
In this case, local changes are made that can impact these libraries.
I am not sure it is a real problem in the trains scenario
I don't mind writing JS or other scripts for that, if there's a hook waiting for me 🙂
Interesting, so you actually enqueue a task-generating task which, once executed, will enqueue all the configuration tasks (as you proposed earlier).
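For reference, a rough sketch of such a task-generating task using the trains SDK; the template task ID, queue name, and hyper-parameter keys are placeholders:

```python
from trains import Task

TEMPLATE_TASK_ID = "<template-task-id>"  # placeholder: an existing experiment to clone
QUEUE_NAME = "default"                   # placeholder: queue the trains-agent listens on

# This script is itself enqueued; once an agent executes it, it clones the
# template once per configuration and enqueues the clones for execution.
generator = Task.init(project_name="sweeps", task_name="configuration generator")

configurations = [
    {"learning_rate": 0.01, "batch_size": 32},   # placeholder hyper-parameters
    {"learning_rate": 0.001, "batch_size": 64},
]

template = Task.get_task(task_id=TEMPLATE_TASK_ID)
for i, params in enumerate(configurations):
    cloned = Task.clone(source_task=template, name="config sweep {}".format(i))
    cloned.set_parameters(params)                # override the clone's parameters
    Task.enqueue(cloned, queue_name=QUEUE_NAME)  # push it onto the execution queue
```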
You probably know you don't have to use Docker with your agent. The alternative is to use "ad hoc" virtual environments.
It is a bit tricky: you need to remove the requirements from the queued task's configuration. But you can't remove them all, since in that case the agent will use your project's requirements file (if I remember correctly).
I simply kept a single "lightweight" requirement in the list, just so the agent won't go looking for my requirements file.
And you need to set a flag for the agent to...