RattySeagull0
Moderator
3 Questions, 21 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 1
21 × Eureka!
0 Votes, 7 Answers, 545 Views
3 years ago
0 Votes, 17 Answers, 552 Views
3 years ago
0 Votes, 17 Answers, 527 Views
3 years ago
0 Hi everyone, I'm trying to execute trains-agent in docker mode with conda as package manager, is it supported? I tried to work with nvidia/cuda:10.0-runtime-ubuntu18.04 and got the error "trains_agent: ERROR: Error: package manager "conda" selected, but '

I did, and it installed the docker with python 3.6 (I think because the agent.default_python parameter is 3.6 by default).
Is it possible to change this parameter when I create the experiment? (I want to work with python 3.7)
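
For reference, a minimal sketch of where that parameter lives, assuming the agent.default_python key mentioned above is the one this trains-agent version reads from trains.conf on the agent machine (this changes the agent-wide default rather than a single experiment, so treat it as a starting point only):

# trains.conf on the agent machine -- sketch, key name taken from the message above
agent {
    default_python: 3.7
}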

3 years ago
0 Hi everyone, I'm trying to execute trains-agent in docker mode with conda as package manager, is it supported? I tried to work with nvidia/cuda:10.0-runtime-ubuntu18.04 and got the error "trains_agent: ERROR: Error: package manager "conda" selected, but '

I use the docker image nvidia/cuda:10.0-runtime-ubuntu18.04. I'm a Docker noob so far, so I will try to search; I assumed it installed python3.6 because that's what appears in trains.conf.
Do you know if it just comes with python3.6?
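
One quick way to check what the base image itself ships (a sketch, assuming Docker is available locally; the image tag is the one quoted above):

# does the image contain python3 at all, and if so which version?
docker run --rm nvidia/cuda:10.0-runtime-ubuntu18.04 sh -c 'python3 --version || echo "no python3 in this image"'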

3 years ago
0 Hi everyone, I am trying to use docker mode for trains-agent, but it seems that it has problem with the use of multiple gpus. This is my trains-agent command: trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground and it gets the error: doc

WackyRabbit7 thanks for the suggestions.
The first suggestion (without the quotes) gives the same result.
The second produces:
invalid argument "device="device=0,1"" for "--gpus" flag: parse error on line 1, column 7: bare " in non-quoted-field
(and this is the docker command it executes)
Executing: ('docker', 'run', '-t', '--gpus', 'device="device=0,1"', '-e', 'TRAINS_WORKER_ID=lv-beast:gpu"device=0,1"', '-v', '/home/lv-beast/.git-credentials:/root/.git-credentials', '-v', '/home/lv-beast/.gitconfig:/roo...

3 years ago
0 Hi everyone, I am trying to use docker mode for trains-agent, but it seems that it has problem with the use of multiple gpus. This is my trains-agent command: trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground and it gets the error: doc

This is the error:
Running Docker:

Executing: ('docker', 'run', '-t', '--gpus', 'device=0,1', '-e', 'TRAINS_WORKER_ID=lv-beast:gpu0,1', '-v', '/home/lv-beast/.git-credentials:/root/.git-credentials', '-v', '/home/lv-beast/.gitconfig:/root/.gitconfig', '-v', '/tmp/.trains_agent.li48l7ii.cfg:/root/trains.conf', '-v', '/tmp/trains_agent.ssh.uv6dxcw7:/root/.ssh', '-v', '/home/lv-beast/.trains/apt-cache.2:/var/cache/apt/archives', '-v', '/home/lv-beast/.trains/pip-cache:/root/.cache/pip', '-v', '/...

3 years ago
0 Hi everyone, I am trying to use docker mode for trains-agent, but it seems that it has problem with the use of multiple gpus. This is my trains-agent command: trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground and it gets the error: doc

Yes, when I run docker itself:
docker run --gpus '"device=0,1"' nvidia/cuda:10.1-base nvidia-smi

it works, but when I do it with trains as WackyRabbit7 suggested (with the same quotes):
trains-agent daemon --gpus '"device=0,1"' --queue dual_gpu --docker --foreground

it gives this error:
invalid argument "device="device=0,1"" for "--gpus" flag: parse error on line 1, column 7: bare " in non-quoted-field
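
For reference, a sketch of how the same value ends up different in the two cases (inferred from the Executing: lines earlier in this thread, not a confirmed explanation):

# The shell strips the outer single quotes, so both commands receive the literal
# argument "device=0,1" (including the double quotes). docker's CSV parser accepts it:
docker run --gpus '"device=0,1"' nvidia/cuda:10.1-base nvidia-smi
# trains-agent, per the Executing: log above, prepends another device= when it builds
# the docker command line, producing device="device=0,1" -- the bare-quote parse error.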

3 years ago
0 Hi everyone, I am trying to use docker mode for trains-agent, but it seems that it has problem with the use of multiple gpus. This is my trains-agent command: trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground and it gets the error: doc

When I launch this:
(trains-agent) lv-beast@lv-beast:~/dev/MachineLearning/scripts/cmd_launcer$ docker run --gpus '"device=0,1"' nvidia/cuda:10.1-base nvidia-smi
it worked, so maybe it's an issue with how trains passes the device to the docker run command?

3 years ago
0 Hi everyone, I am trying to use docker mode for trains-agent, but it seems that it has problem with the use of multiple gpus. This is my trains-agent command: trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground and it gets the error: doc

You are right, I have only 2 GPUs right now, so basically I can launch --gpus all and it will work, but I want to create the scripts for longer use (to deploy on larger machines with more GPUs).

docker:
Client: Docker Engine - Community
Version: 19.03.6
API version: 1.40
Go version: go1.12.16
Git commit: 369ce74a3c
Built: Thu Feb 13 01:27:49 2020
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
V...
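
A sketch of what such a script could look like on a bigger machine, assuming the --gpus parsing issue discussed in this thread is resolved in the trains-agent version used (run each daemon in its own terminal or session; the queue name is just an example):

# one agent per pair of GPUs, same flags as the daemon command above
trains-agent daemon --gpus 0,1 --queue dual_gpu --docker --foreground
trains-agent daemon --gpus 2,3 --queue dual_gpu --docker --foreground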

3 years ago
0 Hi everyone, I tried to launch experiments using conda with different cuda versions. I tried to comment these fields from the trains.conf file on the remote machine: #cuda_version: 10.1 #cudnn_version: 7.0 but it seems that when I comment it (like a

Hi TimelyPenguin76,
you are right, it says cuda version 10.2 (even though I installed only cuda 10.1, weird).
Do you know why it's 10.2?
And do you know why trains relies on that? (instead of looking at the python environment of the executed script?)
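
One possible source of the 10.2, as a hedged guess: nvidia-smi reports the highest CUDA version the installed driver supports, which can be newer than the toolkit actually installed, while nvcc reports the installed toolkit itself:

nvidia-smi | head -n 4    # header shows the CUDA version the driver supports (could read 10.2)
nvcc --version            # shows the CUDA toolkit actually installed (e.g. 10.1)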

3 years ago
0 Hi everyone, I tried to launch experiments using conda with different cuda versions. I tried to comment these fields from the trains.conf file on the remote machine: #cuda_version: 10.1 #cudnn_version: 7.0 but it seems that when I comment it (like a

Got it, thanks!
Is it possible to use different docker images (containing different cuda versions) in different experiments?
Or do I have to open different queues for that? (or something like that)
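
If the queue route turns out to be the way to go, a sketch of what it could look like, assuming the trains-agent daemon's --docker flag accepts a default image in the version used (queue names and image tags are just examples):

# one agent per CUDA flavour, each serving its own queue with its own default image
trains-agent daemon --queue cuda100_queue --docker nvidia/cuda:10.0-runtime-ubuntu18.04 --foreground
trains-agent daemon --queue cuda101_queue --docker nvidia/cuda:10.1-runtime-ubuntu18.04 --foreground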

3 years ago