Answered
Hi all, I was trying to use clearml-task to run a custom docker (with poetry to install all the python dependencies and activated the environment) using ClearML GPU, but it seems like ClearML always creates a virtual environment and runs the python script fr…

Hi all, I was trying to use clearml-task to run a custom docker (with poetry to install all the python dependencies and an activated environment) using ClearML GPU, but it seems like ClearML always creates a virtual environment and runs the python script from /root/.clearml/venvs-builds/3.10/bin/python. Is there a way to have clearml-task use the activated custom virtual environment in my docker and run the scripts from there, instead of always creating a new venv inheriting from the clearml system_site_packages? I noticed that clearml.conf has a configuration agent.docker_use_activated_venv, but I am not sure how to enable it from clearml-task

  
  
Posted 7 months ago

Answers 38


Hi @<1597762318140182528:profile|EnchantingPenguin77>

, but it seems like clearml always creates a virtual environment

Yes that's correct, but the new venv inside the container inherits from the system packages (so if nothing changes it does nothing)

Is there a way to have clearml-task use the activated custom virtual environment in my docker and run the scripts

You can, but the "correct" way to work with python and containers is to actually install everything on the system (not in a venv).
That said, just set this env variable to point to the python binary inside your venv in the container:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/root/venv/bin/python
None
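For example, passing that variable through `--docker_args` when launching with clearml-task might look like this (the project/task names, the image, and the `/root/venv` path are placeholders for your own setup):

```shell
# Hypothetical invocation: names, image, and venv path are placeholders;
# the key part is the CLEARML_AGENT_SKIP_PIP_VENV_INSTALL env var.
clearml-task \
    --project my-project \
    --name my-task \
    --script main.py \
    --docker my-image:latest \
    --docker_args "--env CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/root/venv/bin/python" \
    --queue default
```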

  
  
Posted 7 months ago

it had been pending the whole day yesterday, but today it's able to run the task

  
  
Posted 7 months ago

Here it is @<1523701205467926528:profile|AgitatedDove14>

  
  
Posted 6 months ago

okay, when I run main.py on my local machine, I can use python main.py experiment=example.yaml to override the accelerator to the GPU option. But it seems like the --args experiment=example.yaml in clearml-task didn't work, so I have to manually modify it in the UI?

clearml-task \
    --project fluoro-motion-detection \
    --name uniformer-test \
    --repo git@github.com:imperative-care-campbell/algorithms-python.git \
    --branch SW-956-Fluoro-Motion-Detection \
    --script fluoro_motion_detection/src/run/main.py \
    --args experiment=example.yaml \
    --docker mzhengtelos/algorithm-ml:pyenv \
    --docker_args "--env CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=$PYTHON_ENV_DIR --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" \
    --queue test-gpu
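As a side note, the `--args` values are plain `key=value` pairs; here is a minimal illustrative sketch of how such overrides can be parsed into a dict (this is not ClearML's or Hydra's actual parser, just the general idea):

```python
# Illustrative sketch: parse "key=value" override args (like
# experiment=example.yaml) into a dict; names here are examples only.
def parse_overrides(args):
    overrides = {}
    for arg in args:
        key, _, value = arg.partition("=")
        overrides[key] = value
    return overrides

print(parse_overrides(["experiment=example.yaml", "accelerator=gpu"]))
# {'experiment': 'example.yaml', 'accelerator': 'gpu'}
```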
  
  
Posted 6 months ago

I've added gpu:True to my hydra config file but the GPU is still not used

  
  
Posted 6 months ago

I see, like that?
image

  
  
Posted 6 months ago

Not the file, the UI

  
  
Posted 6 months ago

I see, it seems like the --args for the script didn't get passed to the docker:

--script fluoro_motion_detection/src/run/main.py \
--args experiment=example.yaml \
  
  
Posted 6 months ago

I got the same cuda issue after being able to use GPU
image

  
  
Posted 6 months ago

Actually never mind, it's working now!

  
  
Posted 7 months ago

# 

from typing import List
import pyrootutils
import lightning
import hydra
from clearml import Task
from omegaconf import DictConfig, OmegaConf
from lightning import LightningDataModule, LightningModule, Trainer, Callback
from lightning.pytorch.loggers import Logger

pyrootutils.setup_root(__file__, indicator="pyproject.toml", pythonpath=True)
# ------------------------------------------------------------------------------------ #
# the setup_root above is equivalent to:
# - adding project root dir to PYTHONPATH
#       (so you don't need to force user to install project as a package)
#       (necessary before importing any local modules e.g. `from src import utils`)
# - setting up PROJECT_ROOT environment variable
#       (which is used as a base for paths in "configs/paths/default.yaml")
#       (this way all filepaths are the same no matter where you run the code)
# - loading environment variables from ".env" in root dir
#
# you can remove it if you:
# 1. either install project as a package or move entry files to project root dir
# 2. set `root_dir` to "." in "configs/paths/default.yaml"
#
# more info: 

# ------------------------------------------------------------------------------------ #

from src.utils.pylogger import get_pylogger
from src.utils.instantiators import instantiate_callbacks, instantiate_loggers

log = get_pylogger(__name__)


def train(cfg: DictConfig):
    # set seed for random number generators in pytorch, numpy and python.random
    if cfg.get("seed"):
        lightning.seed_everything(cfg.seed, workers=True)

    log.info(f"Instantiating datamodule <{cfg.data._target_}>")
    datamodule: LightningDataModule = hydra.utils.instantiate(cfg.data)

    log.info(f"Instantiating model <{cfg.model._target_}>")
    model: LightningModule = hydra.utils.instantiate(cfg.model)

    log.info("Instantiating callbacks...")
    callbacks: List[Callback] = instantiate_callbacks(cfg.get("callbacks"))

    log.info("Instantiating loggers...")
    logger: List[Logger] = instantiate_loggers(cfg.get("logger"))

    log.info(f"Instantiating trainer <{cfg.trainer._target_}>")
    trainer: Trainer = hydra.utils.instantiate(cfg.trainer, callbacks=callbacks, logger=logger)

    if cfg.get("train"):
        log.info("Starting training!")
        trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))

    if cfg.get("test"):
        log.info("Starting testing!")
        ckpt_path = trainer.checkpoint_callback.best_model_path
        if ckpt_path == "":
            log.warning("Best ckpt not found! Using current weights for testing...")
            ckpt_path = None
        trainer.test(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
        log.info(f"Best ckpt path: {ckpt_path}")


@hydra.main(version_base="1.3", config_path="../../configs", config_name="train.yaml")
def main(cfg: DictConfig):
    OmegaConf.set_struct(cfg, False)  # allow cfg to be mutable

    task = Task.init(project_name="fluoro-motion-detection", task_name="uniformer-test")
    logger = task.get_logger()
    logger.report_text("You can view your full hydra configuration under Configuration tab in the UI")
    print(OmegaConf.to_yaml(cfg))

    train(cfg)


if __name__ == "__main__":
    main()
  
  
Posted 6 months ago

image

  
  
Posted 6 months ago

I am using hydra in main.py

  
  
Posted 6 months ago

but it's still not able to run any task after I abort and rerun another task

When you "run" a task you are pushing it to a queue, so how come the queue is empty? What happens after you push your newly cloned task to the queue?

  
  
Posted 7 months ago

I actually have aborted it

  
  
Posted 7 months ago

And how did you connect your example.yaml?

  
  
Posted 6 months ago

the gpu argument is actually inside my example.yaml:

defaults:
  - default.yaml

accelerator: gpu
devices: 1
  
  
Posted 6 months ago

but it's still not able to run any task after I abort and rerun another task

  
  
Posted 7 months ago

is it displaying that it is running anything?

  
  
Posted 7 months ago

None
See: Add an experiment hyperparameter:
and add gpu: True

  
  
Posted 6 months ago

@<1523701205467926528:profile|AgitatedDove14> Yes, I can see the worker:
image

  
  
Posted 7 months ago

Click on the Task it is running and abort it, it seems to be stuck, I guess this is why the others are not pulled

  
  
Posted 7 months ago

Thanks @<1523701205467926528:profile|AgitatedDove14> . I just got an issue running clearml-task remotely. It had been working fine before today, but now every time I run clearml-task it shows pending, and after waiting for 3 hours the status is still pending. The autoscaler was charging the hourly rate even though the task was still pending for 3 hours. From the console log of the ClearML GPU instance, I saw it is listening to the queue, but there is no log even after 3 hours. There is nothing else I am running besides this one task, and it seems like the worker never spins up again

2023-08-03 04:41:00,624 - clearml.Auto-Scaler - INFO - Spinning new instance resource='default', prefix='38ae71a80baf4a58893631d23c0c6e72_3090_1', queue='test-gpu'
2023-08-03 04:41:00,625 - clearml.Auto-Scaler - INFO - Creating instance for resource default
2023-08-03 04:41:01,027 - clearml.Auto-Scaler - INFO - New instance b97e702d-e2b3-4f28-adab-be59648601ea listening to test-gpu queue
  
  
Posted 7 months ago

I did use --args with the clearml-task command for this run, but it looks like the docker didn't take it
image

  
  
Posted 6 months ago

you should have a gpu argument there, set it to true

  
  
Posted 6 months ago

Notice you should be able to override them in the UI (under the Args section)

  
  
Posted 6 months ago

It seems like the CPU is working on something; I saw the usage spiking periodically, but I didn't run any task this morning

  
  
Posted 7 months ago

Yes, because when a container is executed, the agent creates a new venv and inherits from the system wide installed packages, but it cannot inherit or "understand" there is an existing venv, and where it is.
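To illustrate the inheritance part, this is roughly what a venv created with system-site-packages looks like (the `/tmp/demo-venv` path is just for the sketch, not the agent's actual location):

```shell
# Sketch: create a venv that inherits the system site-packages, similar to
# what the agent does inside the container; /tmp/demo-venv is an example path.
python3 -m venv --system-site-packages /tmp/demo-venv
# The new interpreter lives in the venv but can still see system packages:
/tmp/demo-venv/bin/python -c "import sys; print(sys.prefix)"
```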

  
  
Posted 6 months ago

The queue will be empty when I run the task

  
  
Posted 7 months ago

@<1523701205467926528:profile|AgitatedDove14> Is there any reason why you mentioned that the "correct" way to work with python and containers is to actually install everything on the system (not venv)?

  
  
Posted 6 months ago