Hi @<1558624430622511104:profile|PanickyBee11> , how are you doing the multi node training?
I think prefix would be great. It can also make it easier for reporting scalars in general
Actually those are "supposed" to be collected automatically by pytorch and reported by the master node.
currently we need a barrier to sync all nodes before reporting a scalar which makes it slower.
Also "should" be part of pytorch ddp
It's launched with torchrun
I know there is an effort to integrate with torchrun (the under-the-hood infrastructure); I'm not sure where it stands...
So are you using torchrun on multiple machines to "launch" the training process?
I think prefix would be great. It can also make it easier for reporting scalars in general (save the users the need to manually add the rank label).
So I think this might work (forgive the typos, this is not fully tested 🙂)
def get_resource_monitor_cls():
    from clearml.utilities.resource_monitor import ResourceMonitor
    from clearml.config import get_node_count, get_node_id

    class NodeResourceMonitor(ResourceMonitor):
        # prefix the monitoring titles with the node id so every node gets its own series
        _title_machine = ':monitor:machine_{}'.format(get_node_id()) if get_node_count() else ResourceMonitor._title_machine
        _title_gpu = ':monitor:node{}_gpu'.format(get_node_id()) if get_node_count() else ResourceMonitor._title_gpu

    return NodeResourceMonitor
task = Task.init(..., auto_resource_monitoring=get_resource_monitor_cls())
If it actually works please PR it 🙂 (it probably should also check that it is being launched with elastic agent)
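For the elastic-agent check, something along these lines should do (not tested; TORCHELASTIC_RUN_ID and GROUP_RANK are env vars torchrun exports to every worker):
import os

def launched_with_torchrun():
    # torchrun / torch.distributed.elastic sets these for every spawned worker
    return 'TORCHELASTIC_RUN_ID' in os.environ or 'GROUP_RANK' in os.environ

def get_node_rank():
    # GROUP_RANK is the rank of the node's worker group, i.e. the node index
    return int(os.environ.get('GROUP_RANK', 0))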
@<1523701205467926528:profile|AgitatedDove14> yes & yes, multiple machines and reporting to the same task.
It's launched with torchrun https://pytorch.org/docs/stable/elastic/run.html
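For reference, a multi-node torchrun launch looks roughly like this (node count, process count, rendezvous endpoint, and script name are placeholders; the same command runs on every node):
torchrun --nnodes=2 --nproc_per_node=8 \
    --rdzv_id=my_job --rdzv_backend=c10d --rdzv_endpoint=host_node:29400 \
    train.py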
I think prefix would be great. It can also make it easier for reporting scalars in general (save the users the need to manually add the rank label). It would also be great to support showing the average of all nodes at the UI level; currently we need a barrier to sync all nodes before reporting a scalar, which makes it slower.
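Roughly what we have to do today to report an averaged scalar (a sketch, assuming torch.distributed is already initialized and logger is the task's Logger; the all_reduce is the sync point that slows things down):
import torch
import torch.distributed as dist

def report_mean_scalar(logger, title, series, value, iteration):
    # every rank contributes its value; all_reduce blocks until all ranks/nodes arrive
    t = torch.tensor([float(value)])  # move to the backend's device (e.g. .cuda()) when using NCCL
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    mean = t.item() / dist.get_world_size()
    # only rank 0 reports, so the task ends up with a single averaged series
    if dist.get_rank() == 0:
        logger.report_scalar(title=title, series=series, value=mean, iteration=iteration)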
multiple machines and reporting to the same task.
Out of curiosity, how do you launch it on multiple machines?
reporting to the same task.
So the "funny" think is, they all report on on top (overwriting) the other...
In order for them to report individually, it might be that you need multiple Tasks (i.e. one per machine)
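A minimal sketch of that option (project/task names are placeholders; GROUP_RANK is the node index torchrun exports):
import os
from clearml import Task

node_rank = int(os.environ.get('GROUP_RANK', 0))
task = Task.init(
    project_name='multi-node-training',          # placeholder project name
    task_name='train-node{}'.format(node_rank),  # one Task per machine
)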
Maybe we could somehow have a prefix with the rank on the cpu/network etc.?! Or should it be a different "title", wdyt?
pytorch ddp @<1523701070390366208:profile|CostlyOstrich36>
@<1558624430622511104:profile|PanickyBee11> how are you launching the code on multiple machines?
are they all reporting to the same Task?