Answered

Another problem is that when using mp.spawn to init distributed training in PyTorch, Task.current_task().get_logger() in the worker process throws a 'NoneType' object has no attribute 'get_logger' error.

  
  
Posted 3 years ago

Answers 8


Trying to understand what reset the task.

Posted 3 years ago

So basically, spawn runs a function in several separate processes, so I followed the link you gave above and put Task.init into that function.
I guess this way there will be multiple Task.init calls running.

Posted 3 years ago

I think this is not related to PyTorch, because it shows the same problem with plain mp.spawn.

Posted 3 years ago

Yes, when I put the Task.init into the spawn function it runs without error, but it seems that each of the child processes has its own experiment:

ClearML Task: created new task id=54ce0761934c42dbacb02a5c059314da
ClearML Task: created new task id=fe66f8ec29a1476c8e6176989a4c67e9
ClearML results page:
ClearML results page:
ClearML Task: overwriting (reusing) task id=de46ccdfb6c047f689db6e50e6fb8291
ClearML Task: created new task id=91f891a272364713a4c3019d0afa058e
ClearML results page:
ClearML results page:

and it shows some errors at init.

Posted 3 years ago

Hi PompousHawk82. Are you running several instances of the same code in parallel on the same task?

Posted 3 years ago

Hi PompousHawk82, sorry for the delay, I missed the last message. Can you try, in the spawned process, using task = Task.get_task(task_id=<Your main task Id>) instead of the Task.init call?

Posted 3 years ago

What I want to do is init one task so that multiple workers can log to this one task in parallel. TimelyPenguin76

Posted 3 years ago