[image attachments]

  
  
Posted 4 years ago

Answers 46


We need to evaluate the result across many random seeds, so each task needs to log the result independently.

Ohh that kind of makes sense to me 🙂
Yes I'm also getting:

/usr/local/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 74 leaked semaphores to clean up at shutdown
  len(cache))

Not sure about that ...

  
  
Posted 4 years ago

Hi @PompousHawk82, can you try with the latest RC?

  
  
Posted 4 years ago

Thanks!

  
  
Posted 4 years ago

I can only see the init log

  
  
Posted 4 years ago

There is a semaphore warning, not sure if it's related

Can you resend it?
Is the Task marked as closed when the process ends?

  
  
Posted 4 years ago

It works most of the time; this occurs only a few times

  
  
Posted 4 years ago

Nice job

  
  
Posted 4 years ago

[image]

  
  
Posted 4 years ago

Hmm, so Task.init should be called in the main process; this way the subprocess knows the Task is already created (you can call Task.init a second time to get the task object). I wonder if we can somehow communicate between the subprocesses without initializing in the main one...
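In case it helps, here is a minimal sketch of that pattern (project/task names and the reported value are placeholders, and it assumes a fork start method so the child process inherits the already-initialized task):

import multiprocessing as mp
from clearml import Task

def worker(seed):
    # in the child, the task created by the parent should already exist;
    # Task.current_task() (or calling Task.init again) hands it back
    task = Task.current_task()
    task.get_logger().report_scalar("result", "seed_%d" % seed, value=0.0, iteration=0)

if __name__ == "__main__":
    task = Task.init(project_name="demo", task_name="multiprocess-logging")  # hypothetical names
    procs = [mp.Process(target=worker, args=(s,)) for s in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()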

  
  
Posted 4 years ago

🤞

  
  
Posted 4 years ago

So what if I want three tasks running in parallel? Should I call Task.init in the main process and change the task name in the subprocess?

  
  
Posted 4 years ago

It seems the not-logging problem is back

  
  
Posted 4 years ago

Let me know if you need some tests

  
  
Posted 4 years ago

pip install clearml==1.0.3rc1
  
  
Posted 4 years ago

One quick question: do I need to call task.close() at the end of each process?

  
  
Posted 4 years ago

No need, it should auto-close if you started it with Task.init (or if the agent executed it)
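For reference, a minimal sketch of what an explicit close would look like if you ever want it anyway (not required when Task.init started the task, as noted above):

from clearml import Task

task = Task.init(project_name="demo", task_name="explicit-close")  # hypothetical names
# ... training / evaluation code ...
# optional: close explicitly instead of relying on the automatic close at process exit
task.close()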

  
  
Posted 4 years ago

Not sure, here is the code

  
  
Posted 4 years ago

Let me know if it solved it, if it did I'll make sure we push the RC

  
  
Posted 4 years ago

Wouldn't it make sense to use a single one?

  
  
Posted 4 years ago

Now it has a log, but only the initial one

So the subprocesses are not logged?

  
  
Posted 4 years ago

[image]

  
  
Posted 4 years ago

Releasing an RC

  
  
Posted 4 years ago

yeah sure

  
  
Posted 4 years ago

Let me check, see what can be learned ...

  
  
Posted 4 years ago

Hmm let me check something

  
  
Posted 4 years ago

Not sure about the cause, but if you do:

mp.set_start_method('fork', force=True)

There is no semaphore leakage
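A minimal sketch of where that call would go (placing it at the top of the main guard, before any subprocess is created, is my assumption; 'fork' is only available on Unix):

import multiprocessing as mp

def worker(i):
    print("worker", i)

if __name__ == "__main__":
    # force the fork start method before spawning; this is the workaround
    # suggested above for the leaked-semaphore warning
    mp.set_start_method('fork', force=True)
    procs = [mp.Process(target=worker, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()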

  
  
Posted 4 years ago

And one experiment takes 40 hours to run, so I let them run in parallel

  
  
Posted 4 years ago

And only the main one?

  
  
Posted 4 years ago

In my case, we need to evaluate the result across many random seeds, so each task needs to log the result independently.

  
  
Posted 4 years ago

Can I send you a wheel to test?

  
  
Posted 4 years ago