the gpu argument is actually inside my example.yaml:
defaults:
- default.yaml
accelerator: gpu
devices: 1
Click on the Task it is running and abort it; it seems to be stuck, which is probably why the others are not being pulled
But it seems like ClearML always creates a virtual environment
Yes, that's correct, but the new venv inside the container inherits the system packages (so if nothing changes, it does nothing)
Is there a way to have clearml-task automatically use the custom virtual environment that's already in my Docker image when it runs the scripts?
You can, but the "correct" way to work with Python and containers is to actually install everything on the system Python (not in a venv)
That said, just set this environment variable to point to the Python binary inside your venv in the container:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/root/venv/bin/python
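To confirm it took effect, a quick check from inside the running task (plain Python, nothing ClearML-specific):
import sys
# should print the interpreter set via CLEARML_AGENT_SKIP_PIP_VENV_INSTALL,
# e.g. /root/venv/bin/python
print(sys.executable)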
See: Add an experiment hyperparameter, and add gpu: True
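On the script side, a minimal sketch of how that hyperparameter could be picked up, assuming the task connects a plain dict (the project/task names mirror your command; the local default is just a placeholder):
from clearml import Task

task = Task.init(project_name="fluoro-motion-detection", task_name="uniformer-test")
params = {"gpu": True}  # local default; the value edited in the UI wins on remote runs
task.connect(params)
accelerator = "gpu" if params["gpu"] else "cpu"
print(f"using accelerator={accelerator}")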
Okay, when I run main.py on my local machine, I can use python main.py experiment=example.yaml
to override the accelerator to the GPU option. But it seems like the --args experiment=example.yaml
in clearml-task didn't work, so I have to manually modify it in the UI?
clearml-task \
--project fluoro-motion-detection \
--name uniformer-test \
--repo git@github.com:imperative-care-campbell/algorithms-python.git \
--branch SW-956-Fluoro-Motion-Detection \
--script fluoro_motion_detection/src/run/main.py \
--args experiment=example.yaml \
--docker mzhengtelos/algorithm-ml:pyenv \
--docker_args "--env CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=$PYTHON_ENV_DIR --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" \
--queue test-gpu
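For context, ClearML's Hydra integration logs the composed OmegaConf and any command-line overrides under the task's CONFIGURATION > HYDRA section, so that's where an override like experiment=example.yaml should show up (and can be edited before enqueueing a clone). A minimal sketch of the entry point this assumes; the config_path/config_name values below are placeholders, not your actual layout:
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="../config", config_name="default", version_base=None)
def main(cfg: DictConfig) -> None:
    # overrides passed on the command line (e.g. experiment=example.yaml)
    # are captured by ClearML and shown under CONFIGURATION > HYDRA
    print(cfg)

if __name__ == "__main__":
    main()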