The easiest is to pass an entire trains.conf file
AgitatedDove14 trainsConfig is totally optional and you can put the config file itself in it, e.g.:

```
trainsConfig: |-
  sdk {
    aws {
      s3 {
        key: ""
        secret: ""
        region: ""
        credentials: [
          {
            host: "minio.minio:9000"
            key: "DEMOaccessKey"
            secret: "DEMOsecretKey"
            multipart: false
            secure: false
            region: ""
          }
        ]
      }
      boto3 {
        pool_connections: 512
        max_multipart_concurrency: 16
      }
    }
    development {
      default_output_uri: "s3://minio.minio:9000/trains/"
    }
  }
```
Notice that the StorageManager has default configuration here:
https://github.com/allegroai/trains/blob/f27aed767cb3aa3ea83d8f273e48460dd79a90df/docs/trains.conf#L76
Then a per-bucket credentials list, with details:
https://github.com/allegroai/trains/blob/f27aed767cb3aa3ea83d8f273e48460dd79a90df/docs/trains.conf#L81
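As a rough sketch, a per-bucket entry in that credentials list looks like this (the bucket name and keys are placeholders, following the structure of the linked docs/trains.conf):

```
sdk {
  aws {
    s3 {
      # defaults used when no per-bucket entry matches
      key: ""
      secret: ""
      region: ""
      # per-bucket overrides
      credentials: [
        {
          bucket: "my-bucket"
          key: "MY_ACCESS_KEY"
          secret: "MY_SECRET_KEY"
        }
      ]
    }
  }
}
```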
This is my preferred way as well :)
about the helm chart, yes, I mean adding the capability of managing a ConfigMap with the config file. If it's interesting I can raise a PR, otherwise I need to fork 🙂
Hi Martin 🙂 ok got it, but now the question: how can I pass this to the trains-agent deployed with the Helm chart?
ok so it's time to create a ConfigMap with the entire file 🙂
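A minimal sketch of such a ConfigMap (names, labels, and the config content below are illustrative, not taken from the actual trains-server-k8s chart):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: trains-agent-conf
data:
  trains.conf: |
    sdk {
      development {
        default_output_uri: "s3://minio.minio:9000/trains/"
      }
    }
```

The agent Deployment would then mount this ConfigMap as a file at the location the agent reads its config from (typically ~/trains.conf, i.e. /root/trains.conf in a container running as root).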
I think this is great! That said, it only applies when you are spinning up agents (the default helm chart is for the server). So maybe we need another one? or an option?
It is way too much to pass in an env variable 🙂
https://github.com/allegroai/trains-server-k8s/pull/13 I think you will like it 🙂
as usual it starts small, and after a 5-minute discussion it's getting challenging 🙂 I love this stuff... let me think a bit about it, I will get back to you asap on this.
Yes 🙂
BTW: do you guys do remote machine development (i.e. Jupyter / vscode-server) ?
JuicyFox94
NICE!!! this is exactly what I had in mind.
BTW: you do not need to put the default values there; it reads the defaults from the package itself (trains-agent/trains) and uses the conf file as overrides, so this section can contain only the parts that matter (like cache location, credentials, etc.)
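Following that advice, the trainsConfig shown earlier in the thread could shrink to just the overrides, e.g.:

```
sdk {
  aws {
    s3 {
      credentials: [
        {
          host: "minio.minio:9000"
          key: "DEMOaccessKey"
          secret: "DEMOsecretKey"
          multipart: false
          secure: false
        }
      ]
    }
  }
  development {
    default_output_uri: "s3://minio.minio:9000/trains/"
  }
}
```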
our data engineers directly write code in PyCharm and test it on the fly with breakpoints. When it's good we simply commit to git and set a tag "prod ready"
at that point we define a queue and the agents will take care of training 🙂
Is an implementation of this kind interesting for you, or do you suggest I fork? I mean, I don't want to impact your time reviewing
You mean adding a config map storing a default trains.conf for the agent?
Hi JuicyFox94
you pointed to exactly the issue 🙂
In your trains.conf
https://github.com/allegroai/trains/blob/f27aed767cb3aa3ea83d8f273e48460dd79a90df/docs/trains.conf#L94