Thank you for your response. It works. I want to run several workers simultaneously on the same GPU, because I have to train several relatively simple and small neural networks. It would be faster to train several of them at the same time on the same GPU than to train them sequentially.
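For illustration, here is a minimal sketch of the idea of training several small models concurrently instead of one after another. The `train_model` function is a stand-in for a real training loop (a real version would move each model to the shared GPU and would need to keep each worker's memory footprint small enough to fit):

```python
from concurrent.futures import ThreadPoolExecutor

def train_model(model_id, epochs=3):
    # Stand-in for a real training loop: a real version would build
    # a small network, move it to the shared GPU, and train it.
    loss = 1.0
    for _ in range(epochs):
        loss *= 0.5  # pretend the loss halves each epoch
    return model_id, loss

# Train three small models at the same time rather than sequentially.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(train_model, range(3)))

print(results)
```

Whether this actually wins over sequential training depends on how much of the GPU each small model can saturate on its own.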
ExcitedSeaurchin87, I think you can differentiate them by using different worker names. Try setting the following environment variable when running the command:
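As a hedged sketch of the approach (the actual variable name was not stated above, so `WORKER_NAME` here is only a placeholder; substitute whatever variable your agent actually reads), each worker process can be launched with its own name in its environment:

```python
import os
import subprocess
import sys

def launch_worker(name):
    # WORKER_NAME is a placeholder, not the real variable name.
    env = dict(os.environ, WORKER_NAME=name)
    # The child just echoes its name back; a real launch would run
    # the agent/daemon command here instead of this one-liner.
    out = subprocess.run(
        [sys.executable, "-c",
         "import os; print(os.environ['WORKER_NAME'])"],
        env=env, capture_output=True, text=True,
    )
    return out.stdout.strip()

# Two workers sharing one GPU, told apart by their names.
print(launch_worker("gpu0-worker-a"))
print(launch_worker("gpu0-worker-b"))
```

The same effect is usually achievable from a shell by prefixing each launch command with its own `WORKER_NAME=... ` assignment.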
I wonder, why do you want to run multiple workers on the same GPU?