Wouldn't it make sense to use a single one?
In my case, we need to evaluate the result across many random seeds, so each task needs to log its result independently.
And one experiment takes 40 hours to run, so I let them run in parallel.
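Roughly like this, as a minimal sketch (run_experiment is just a placeholder for the real 40-hour run, and the per-seed log file names are made up):

import multiprocessing as mp

def run_experiment(seed: int) -> None:
    # Placeholder for the actual training/evaluation run.
    result = seed * 2  # dummy computation standing in for the real result
    # Each task writes its own log so parallel runs don't clobber each other.
    with open(f"results_seed_{seed}.log", "w") as f:
        f.write(f"seed={seed} result={result}\n")

if __name__ == "__main__":
    seeds = [0, 1, 2, 3]
    with mp.Pool(processes=len(seeds)) as pool:
        pool.map(run_experiment, seeds)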
Ohh that kind of makes sense to me 🙂
Yes, I'm also getting:
/usr/local/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 74 leaked semaphores to clean up at shutdown
len(cache))
Not sure about that ...
Let me check and see what can be learned ...
Check the iteration on the right side.
I tried to start the experiment a few times, and sometimes 1 or 2 of the experiments just won't start.
It works most of the time; this only happens occasionally.
Not sure of the cause, but if you do:
mp.set_start_method('fork', force=True)
there is no semaphore leakage.
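For context, a minimal sketch of where that call would go (worker is just a placeholder; 'fork' is only available on Unix, and the call has to happen before any processes or pools are created):

import multiprocessing as mp

def worker(seed: int) -> int:
    return seed * 2  # placeholder work

if __name__ == "__main__":
    # Force the 'fork' start method before spawning anything.
    mp.set_start_method('fork', force=True)
    with mp.Pool(4) as pool:
        print(pool.map(worker, range(4)))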