Hi, if I have ClearML agents installed on several servers, each with a single GPU, how can I train a GPT-2 model that would require multiple GPUs?
Thanks. The challenge we encountered is that we only expose our devs to the ClearML queues, so users have no idea what is behind a queue except that it will offer them the resources associated with it. In the backend, each queue is associated with more than one host.
So what we tried is as follows.
We create a train.py script much like what Tobias shared above. In this script, we use the socket library to pull the IP address:
import socket

# Resolve this machine's hostname to an IP address
hostname = socket.gethostname()
ipaddr = socket.gethostbyname(hostname)
The above script is then used to generate a ClearML Task.
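For illustration, a minimal sketch of how train.py might register the Task and expose the resolved address as a parameter (the project and task names here are placeholders, not our real ones):

from clearml import Task
import socket

# Placeholder project/task names, for illustration only
task = Task.init(project_name="multi-node", task_name="gpt2-train")

hostname = socket.gethostname()
ipaddr = socket.gethostbyname(hostname)

# Expose the address so downstream tasks could read it as an argument
task.set_parameter("General/master_addr", ipaddr)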
Then we create a ClearML pipeline that looks as follows, with all steps cloned from that same task.
             |-- taskslave1
taskmaster --|-- taskslave2
             |-- taskslave3
The IP address from the master task is expected to be retrieved and passed to the slave tasks as an argument.
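A rough sketch of that pipeline as we set it up (names are placeholders; this only reflects the intent, since it runs into the problems listed below):

from clearml import PipelineController

pipe = PipelineController(name="gpt2-multinode", project="multi-node", version="1.0")

# Master step, cloned from the base training task
pipe.add_step(
    name="taskmaster",
    base_task_project="multi-node",
    base_task_name="gpt2-train",
)

# Slave steps, each expected to receive the master's IP address as an argument
for i in range(1, 4):
    pipe.add_step(
        name="taskslave%d" % i,
        base_task_project="multi-node",
        base_task_name="gpt2-train",
        parents=["taskmaster"],
        parameter_override={
            "General/master_addr": "${taskmaster.parameters.General/master_addr}"
        },
    )

pipe.start()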
Two problems come up when running the pipeline:
- The taskmaster is actually waiting to sync with the configured number of nodes, so it never returns, and in turn the IP address cannot be passed on to the slave nodes.
- The IP address pulled is actually the Docker container's IP, which cannot be pinged from another host.
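For context on the second problem, the usual trick for finding the outward-facing address (instead of resolving the hostname) is sketched below; note that inside a bridge-networked container it still returns the container's address, so presumably it only helps if the agent container shares the host network (e.g. --network=host), which is an assumption on our side:

import socket

def get_outward_ip():
    # connect() on a UDP socket sends no packets; it only selects the
    # interface that would be used to reach the given address.
    # NOTE: inside a bridge-networked Docker container this still returns
    # the container's IP, so host networking would be needed for the
    # address to be reachable from other machines.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(get_outward_ip())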