Hi Everyone,
Additional Arguments To The Script Execution, Is It Possible? How Can It Be Done?
I created a wrapper that works like executing python -m torch.distributed.launch --nproc_per_node 2 ./my_script.py, but from within my script. I do call trains.init in the subprocesses; the only actual difference between the subprocesses, in terms of arguments, is local_rank. That's all. It may also be that I'm not distributing the model between the GPUs in an optimal way, or at least not in a way that matches your framework.
If you have an example, that would be great.
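To make it more concrete, here is roughly what my wrapper does (a simplified sketch; the script path, the project/task names, and the hard-coded world size of 2 are just placeholders):

```python
# launcher.py - rough sketch of the wrapper, mimicking
# "python -m torch.distributed.launch --nproc_per_node 2 ./my_script.py"
import os
import subprocess
import sys

NPROC_PER_NODE = 2  # number of GPUs / worker processes (placeholder)

def main():
    processes = []
    for local_rank in range(NPROC_PER_NODE):
        env = os.environ.copy()
        # roughly the env vars torch.distributed.launch would set
        env["MASTER_ADDR"] = "127.0.0.1"
        env["MASTER_PORT"] = "29500"
        env["WORLD_SIZE"] = str(NPROC_PER_NODE)
        env["RANK"] = str(local_rank)
        env["LOCAL_RANK"] = str(local_rank)
        # local_rank is the only argument that differs between subprocesses
        cmd = [sys.executable, "./my_script.py", "--local_rank", str(local_rank)]
        processes.append(subprocess.Popen(cmd, env=env))
    for p in processes:
        p.wait()

if __name__ == "__main__":
    main()
```

And inside my_script.py each subprocess does something like:

```python
# my_script.py (top of file) - each subprocess calls trains Task.init
import argparse
from trains import Task

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

task = Task.init(project_name="my_project", task_name="distributed_run")
```

So each subprocess runs my_script.py with a different --local_rank, calls Task.init, and that is the only difference between them.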