
AgitatedDove14 I'm using that code in the meanwhile
` ### This script checks the number of GPUs, creates a list like 0,1,2...
### then prepends '--gpus' to that list.
# count GPUs (the command substitution was missing here)
NUM_GPUS=$(nvidia-smi -L | wc -l)
NUM_GPUS=$((NUM_GPUS-1))
OUT=()
if [ $NUM_GPUS -ge 0 ]
then
    # build the comma-separated index list, e.g. "0,1,2"
    for i in $(seq 0 $NUM_GPUS); do OUT+=( "$i" ); done
    echo "${OUT[*]}" | tr ' ' ',' | awk '{print "--gpus "$1}'
else
    echo ""
fi `
AgitatedDove14 Well, after starting a new project it works. I guess it's a bug.
AgitatedDove14 I'm using both argparse and sys.argv to start different processes, each of which will interact with a single GPU. So each process has a specific argument with a different value to differentiate between them (only the main process interacts with trains). At the moment I'm encountering issues with getting the arguments from the processes I spawn. I'm explicitly calling python my_script.py --args...
and each process knows how to interact with the others. It's a bit complicated to explain...
I created a wrapper that works like executing python -m torch.distributed.launch --nproc_per_node 2 ./my_script.py
but from my script. I do call trains.init
in the subprocesses; the only actual difference between the subprocesses, in terms of arguments, is supposed to be local_rank
and that's all. It may be that I'm not distributing the model between the GPUs in an optimal way, or at least in a way that matches your framework.
If you have an example it would be great.
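A rough sketch of the kind of wrapper described above (assuming subprocess-based launching; my_script.py and --local_rank come from the messages, the rest is illustrative):
` import subprocess
import sys

NUM_GPUS = 2  # one worker process per GPU

# Only the parent process interacts with trains; each worker gets a
# distinct --local_rank so it binds to a different GPU.
procs = [
    subprocess.Popen([sys.executable, "my_script.py", "--local_rank", str(rank)])
    for rank in range(NUM_GPUS)
]
for p in procs:
    p.wait() `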
AgitatedDove14 I can't try the new agent at the moment. The OS is Ubuntu 18.04, more specifically: amazon/Deep Learning Base AMI (Ubuntu 18.04) Version 22.0,
and no docker, running directly on the machine.
AgitatedDove14 thanks, I'll check it out.
I actually tried to print logging.getLogger("trains.frameworks").level
and it was ERROR as expected, so I'm not quite sure that's the problem... my next thought was to patch your functions.
the solution that worked: [logging.getLogger(name).setLevel(logging.ERROR) for name in logging.root.manager.loggerDict if "trains" in name]
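Spelled out, the same logic as a runnable snippet (iterating over a list() snapshot, since getLogger can add entries to loggerDict mid-iteration):
` import logging

# every logger created so far is registered in the root manager's
# loggerDict; raise all trains.* loggers to ERROR to silence the warnings
for name in list(logging.root.manager.loggerDict):
    if "trains" in name:
        logging.getLogger(name).setLevel(logging.ERROR) `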
AgitatedDove14 Drastic indeed; I believe I will lose all the trains logs that way. In that case I prefer to keep the redundant logs.
If you find a more specific solution I'd love to know what it is 🙂
I think that if there's a default value it should override the type; otherwise, go with the type.
AgitatedDove14
These were the logger names I could see running the code locally; it might differ when running remotely.
['trains.utilities.pyhocon.config_parser', 'trains.utilities.pyhocon', 'trains.utilities', 'trains', 'trains.config', 'trains.storage', 'trains.metrics', 'trains.Repository Detection']
Regarding reproducing it: have a long data-processing step after initializing the task and before setting the input model/output model.
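Something like this should reproduce it (a sketch; the project/task names and the sleep duration are placeholders):
` import time
from trains import Task

task = Task.init(project_name="debug", task_name="repro")
# long data processing between Task.init and the first model save/load
time.sleep(600)
# ... then save or load the model here (e.g. torch.save / torch.load),
# which is when the auto model logging kicks in `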
AgitatedDove14 I've tried the drastic measure suggested above, as I had a 1GB log file filled with trains.frameworks - WARNING - Could not retrieve model location, skipping auto model logging
It didn't work :S
yes, there's a use for empty strings; for example, in text generation you may generate the next word given some prefix, and that prefix may be an empty string.
AgitatedDove14 no, there's no reason in my case to pass an empty string; that's why I removed the type=str
part.
I thought of changing to a connected dictionary, though.
AgitatedDove14 v0.14
yes, it was.
AgitatedDove14 ArgParser argument
AgitatedDove14 When the default is None I expect the default value to be None even if the type is str. But I'll use your recommendation 🙂
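For reference, the kind of argument in question (a sketch; --prefix is a made-up name):
` import argparse

parser = argparse.ArgumentParser()
# argparse itself leaves the default untouched: args.prefix stays None
# unless --prefix is passed, even though type=str is set
parser.add_argument("--prefix", type=str, default=None)
args = parser.parse_args() `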
SteadyFox10 AgitatedDove14 Thanks, I really did change the name.
Yeah, I thought of using an artifact; I wondered if I could avoid using it or, on the other hand, use only it, just to define "the model" as a folder.
Thanks.
TimelyPenguin76 the tag names are 'Epoch 1', 'Step 5705'
but the return value of InputModel(<Put a string copy from the UI with the tag id>).tags
is an empty array.
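i.e. something like this (a sketch; the model id is whatever string the UI shows):
` from trains import InputModel

model = InputModel("<model id copied from the UI>")
print(model.tags)  # prints [] even though the UI shows the tags above `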
SteadyFox10 ModelCheckpoint is not for PyTorch I think; I couldn't find anything like it.
AgitatedDove14
I think exclusion of arguments from the argparser is a good idea.
Regarding the other parameters, such as the working directory and script path: I just want to automate them, since when running the script from my local machine to create the "template" of the experiment, they get values that won't work when running in the worker. I just thought it could be automated from the code.
the version of the agent (the worker that received the job) was 0.14.1
the one that created the template was 0.14.2
TimelyPenguin76 yes, both 0.15.1