Another problem is that when using mp.spawn to initialize distributed training in PyTorch,
Yes, when I put the task init into the spawn function it can run without error, but it seems that each of the child processes gets its own experiment:

ClearML Task: created new task id=54ce0761934c42dbacb02a5c059314da
ClearML Task: created new task id=fe66f8ec29a1476c8e6176989a4c67e9
ClearML results page:
ClearML results page:
ClearML Task: overwriting (reusing) task id=de46ccdfb6c047f689db6e50e6fb8291
ClearML Task: created new task id=91f891a272364713a4c3019d0afa058e
ClearML results page:
ClearML results page:

and it shows some errors at init.
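One common workaround for this symptom is to guard the task initialization so that only one worker (typically rank 0) creates the experiment, instead of every spawned child calling Task.init and registering its own task. Below is a minimal sketch of that rank-guard pattern using only the standard-library multiprocessing module; the actual clearml and torch.distributed calls are shown as comments because the project/task names and distributed setup are assumptions, not taken from the question.

```python
import multiprocessing as mp


def should_init_task(rank: int) -> bool:
    # Only the rank-0 worker should create the ClearML task; without this
    # guard, every spawned child calls Task.init() and each one creates
    # its own experiment, as in the "created new task id=..." log lines.
    return rank == 0


def worker(rank: int, queue: mp.Queue) -> None:
    if should_init_task(rank):
        # In a real training script (hypothetical names, not from the source):
        #   from clearml import Task
        #   task = Task.init(project_name="my_project", task_name="ddp_run")
        queue.put(("init", rank))
    else:
        # Other ranks skip Task.init() entirely and just train.
        queue.put(("skip", rank))
    # ... per-rank distributed setup would follow here, e.g.
    #   torch.distributed.init_process_group(backend="nccl", rank=rank, ...)


if __name__ == "__main__":
    world_size = 2
    queue: mp.Queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(r, queue)) for r in range(world_size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Exactly one worker reports that it initialized the task.
    results = sorted(queue.get() for _ in range(world_size))
    print(results)
```

With torch.multiprocessing.spawn the same guard goes at the top of the spawned function, keyed on the rank argument that spawn passes as the first parameter.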