So basically, spawn will run a function in several separate processes, so I followed the link you gave above and put Task.init inside that function.
I guess this way there will be multiple Task.init calls running.
I think this is not related to PyTorch, because it shows the same problem with multiprocessing spawn.
I'm trying to understand what resets the task.
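For reference, a minimal sketch of the pattern described above, assuming torch.multiprocessing.spawn and illustrative project/task names (not the actual code):

    import torch.multiprocessing as mp
    from clearml import Task

    def worker(rank):
        # Task.init runs once per spawned child process,
        # so each child ends up with its own (or a reset) task
        task = Task.init(project_name="demo", task_name="mp-spawn-test")
        task.get_logger().report_scalar("rank", "value", value=rank, iteration=0)

    if __name__ == "__main__":
        mp.spawn(worker, nprocs=2)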
Hi PompousHawk82, are you running several instances of the same code in parallel on the same task?
Hi PompousHawk82,
Can you try this - https://clearml.slack.com/archives/CTK20V944/p1582334614043800?thread_ts=1582240539.039600&cid=CTK20V944 ?
What I want to do is init one task and have multiple workers log to this one task in parallel. TimelyPenguin76
Yes, when I put the Task.init into the spawn function, it can run without error, but it seems that each of the child processes has its own experiment:
ClearML Task: created new task id=54ce0761934c42dbacb02a5c059314da
ClearML Task: created new task id=fe66f8ec29a1476c8e6176989a4c67e9
ClearML results page:
ClearML results page:
ClearML Task: overwriting (reusing) task id=de46ccdfb6c047f689db6e50e6fb8291
ClearML Task: created new task id=91f891a272364713a4c3019d0afa058e
ClearML results page:
ClearML results page:
And it shows some errors at init.
Hi PompousHawk82, sorry for the delay, I missed the last message. Can you try, in the spawn process, using task = Task.get_task(task_id=<Your main task Id>) instead of the Task.init call?
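A minimal sketch of that suggestion, assuming the main task id is passed to the spawned workers (project/task names are illustrative):

    import torch.multiprocessing as mp
    from clearml import Task

    def worker(rank, task_id):
        # re-attach to the existing main task instead of creating a new one
        task = Task.get_task(task_id=task_id)
        task.get_logger().report_scalar("rank", "value", value=rank, iteration=0)

    if __name__ == "__main__":
        # Task.init only once, in the main process
        main_task = Task.init(project_name="demo", task_name="mp-spawn-test")
        mp.spawn(worker, args=(main_task.id,), nprocs=2)

This way all child processes report to the same experiment instead of each creating its own.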