Answered

Hi all, I was trying to use clearml-task to run a custom Docker image (with Poetry installing all the Python dependencies and the environment activated) on ClearML GPU, but it seems like ClearML always creates a virtual environment and runs the Python script from /root/.clearml/venvs-builds/3.10/bin/python. Is there a way to have clearml-task use the activated custom virtual environment in my Docker image and run the scripts from there, instead of always creating a new venv that inherits from the ClearML system_site_packages? I noticed that clearml.conf has a configuration agent.docker_use_activated_venv, but I am not sure how to enable it from clearml-task.

  
  
Posted one year ago

Answers 38


Hi @<1597762318140182528:profile|EnchantingPenguin77>

but it seems like ClearML always creates a virtual environment

Yes that's correct, but the new venv inside the container inherits from the system packages (so if nothing changes it does nothing)

Is there a way to have clearml-task use the activated custom virtual environment in my Docker image and run the scripts

You can, but the "correct" way to work with Python and containers is to actually install everything on the system (not in a venv).
That said, just set this env variable to point to the python binary inside your venv in the container:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/root/venv/bin/python
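
For example, a minimal sketch assuming a recent clearml SDK and the venv living at /root/venv as above (the project, task, and image names here are hypothetical; with the clearml-task CLI the same variable should be passable via its docker args option):

from clearml import Task

task = Task.init(project_name="examples", task_name="poetry-venv-task")
# Point the agent at the python binary inside the container's venv so it
# skips creating its own venv and runs the script with that interpreter
task.set_base_docker(
    docker_image="my-poetry-image:latest",  # hypothetical image name
    docker_arguments="-e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/root/venv/bin/python",
)
task.execute_remotely(queue_name="test-gpu")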

  
  
Posted one year ago

Thanks @<1523701205467926528:profile|AgitatedDove14>. I just ran into an issue running clearml-task remotely. It had been working fine before today, but now every time I run clearml-task it shows pending, and after waiting for 3 hours the status is still pending. The autoscaler was charging the hourly rate even though the task was still pending for 3 hours. From the console log of the ClearML GPU instance, I saw it is listening to the queue, but there is no log even after 3 hours. There is nothing else running besides this one task, and it seems like the worker never spins up again.

2023-08-03 04:41:00,624 - clearml.Auto-Scaler - INFO - Spinning new instance resource='default', prefix='38ae71a80baf4a58893631d23c0c6e72_3090_1', queue='test-gpu'
2023-08-03 04:41:00,625 - clearml.Auto-Scaler - INFO - Creating instance for resource default
2023-08-03 04:41:01,027 - clearml.Auto-Scaler - INFO - New instance b97e702d-e2b3-4f28-adab-be59648601ea listening to test-gpu queue
  
  
Posted one year ago

Thanks for the details @<1597762318140182528:profile|EnchantingPenguin77>

clearml.Auto-Scaler - INFO - New instance b97e702d-e2b3-4f28-adab-be59648601ea listening to test-gpu queue

This looks like a new agent was spun up on your EC2 account, can you see it in the "Workers" page?

  
  
Posted one year ago

@<1523701205467926528:profile|AgitatedDove14> Yes, I can see the worker:
image

  
  
Posted one year ago

It seems like the CPU is working on something; I saw the usage spiking periodically, but I didn't run any task this morning

  
  
Posted one year ago

Click on the Task it is running and abort it; it seems to be stuck. I guess this is why the others are not being pulled

  
  
Posted one year ago

I actually have aborted it

  
  
Posted one year ago

but it's still not able to run any task after I abort it and rerun another task

  
  
Posted one year ago

is it displaying that it is running anything?

  
  
Posted one year ago

nope

  
  
Posted one year ago

There is nothing on the queue or the worker
image

  
  
Posted one year ago

but it's still not able to run any task after I abort it and rerun another task

When you "run" a task you are pushing it to a queue, so how come a queue is empty? what happens after you push your newly cloned task to the queue ?

  
  
Posted one year ago

The queue will be empty when I run the task

  
  
Posted one year ago

Actually never mind, it's working now!

  
  
Posted one year ago

It was pending the whole day yesterday, but today it's able to run the task

  
  
Posted one year ago

@<1523701205467926528:profile|AgitatedDove14> Is there any reason why you mentioned that the "correct" way to work with Python and containers is to actually install everything on the system (not in a venv)?

  
  
Posted one year ago

Yes, because when a container is executed, the agent creates a new venv that inherits from the system-wide installed packages, but it cannot inherit from or "understand" an existing venv, or know where it is.
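
A minimal sketch of that pattern (assumed, not from this thread: the image name is hypothetical, and the setup script simply tells Poetry to install into the system interpreter before the agent builds its venv):

from clearml import Task

task = Task.init(project_name="examples", task_name="system-packages-task")
# Hypothetical sketch: install dependencies into the system python inside
# the container, so the agent's venv inherits them via system_site_packages
task.set_base_docker(
    docker_image="my-poetry-image:latest",  # hypothetical image name
    docker_setup_bash_script=[
        "poetry config virtualenvs.create false",
        "poetry install --no-interaction",
    ],
)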

  
  
Posted one year ago

@<1523701205467926528:profile|AgitatedDove14> I'm trying to run Clearml GPU compute(RTX 3080) with pytorch-lightning but keep getting CUDA error. Is there any specific CUDA/Ubuntu/torch/python version required? I tried several different version but can't make it work

FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04 as telos_algorithms
  File "/code/.venv/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1013, in _run_stage
    with isolate_rng():
  File "/.pyenv/versions/3.10.9/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/code/.venv/lib/python3.10/site-packages/lightning/pytorch/utilities/seed.py", line 42, in isolate_rng
    states = _collect_rng_states(include_cuda)
  File "/code/.venv/lib/python3.10/site-packages/lightning/fabric/utilities/seed.py", line 115, in _collect_rng_states
    states["torch.cuda"] = torch.cuda.get_rng_state_all()
  File "/code/.venv/lib/python3.10/site-packages/torch/cuda/random.py", line 39, in get_rng_state_all
    results.append(get_rng_state(i))
  File "/code/.venv/lib/python3.10/site-packages/torch/cuda/random.py", line 22, in get_rng_state
    _lazy_init()
  File "/code/.venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
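
Error 804 here typically points to a mismatch between the host NVIDIA driver and the CUDA runtime in the container: the CUDA compatibility layer attempts forward compatibility, which consumer GPUs such as the RTX 3080 do not support. A quick diagnostic sketch, assuming torch is importable in the container:

import torch

# Compare the CUDA runtime torch was built against with what the driver exposes
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())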
  
  
Posted one year ago

@<1597762318140182528:profile|EnchantingPenguin77> can you provide the full log?

  
  
Posted one year ago

Here it is @<1523701205467926528:profile|AgitatedDove14>

  
  
Posted one year ago

Well, I do not think you set your PyTorch Lightning to use CUDA:

GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/code/.venv/lib/python3.9/site-packages/lightning/pytorch/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
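
The fix suggested by the warning itself, as a minimal sketch (in this setup the equivalent values would come from the hydra trainer config rather than being hard-coded):

from lightning import Trainer

# Explicitly request the GPU; otherwise Lightning falls back to CPU
# even when CUDA is visible, exactly as the warning above describes
trainer = Trainer(accelerator="gpu", devices=1)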
  
  
Posted one year ago

I see, seems like the --args for the script didn't get passed to the docker:

--script fluoro_motion_detection/src/run/main.py \
--args experiment=example.yaml \
  
  
Posted one year ago

I was trying to run python main.py experiment=example.yaml

  
  
Posted one year ago

Notice you should be able to override them in the UI (under the Args section)

  
  
Posted one year ago

I did pass --args to the clearml-task command for this run, but it looks like the docker didn't take it
image

  
  
Posted one year ago

you should have a gpu argument there, set it to true

  
  
Posted one year ago

The gpu argument is actually inside my example.yaml:

defaults:
  - default.yaml

accelerator: gpu
devices: 1
  
  
Posted one year ago

And how did you connect your example.yaml?

  
  
Posted one year ago

I am using hydra in main.py

  
  
Posted one year ago

# 

from typing import List, Optional, Tuple
import pyrootutils
import lightning
import hydra
from clearml import Task
from omegaconf import DictConfig, OmegaConf
from lightning import LightningDataModule, LightningModule, Trainer, Callback
from lightning.pytorch.loggers import Logger

pyrootutils.setup_root(__file__, indicator="pyproject.toml", pythonpath=True)
# ------------------------------------------------------------------------------------ #
# the setup_root above is equivalent to:
# - adding project root dir to PYTHONPATH
#       (so you don't need to force user to install project as a package)
#       (necessary before importing any local modules e.g. `from src import utils`)
# - setting up PROJECT_ROOT environment variable
#       (which is used as a base for paths in "configs/paths/default.yaml")
#       (this way all filepaths are the same no matter where you run the code)
# - loading environment variables from ".env" in root dir
#
# you can remove it if you:
# 1. either install project as a package or move entry files to project root dir
# 2. set `root_dir` to "." in "configs/paths/default.yaml"
#
# more info: 

# ------------------------------------------------------------------------------------ #

from src.utils.pylogger import get_pylogger
from src.utils.instantiators import instantiate_callbacks, instantiate_loggers

log = get_pylogger(__name__)


def train(cfg: DictConfig):
    # set seed for random number generators in pytorch, numpy and python.random
    if cfg.get("seed"):
        lightning.seed_everything(cfg.seed, workers=True)

    log.info(f"Instantiating datamodule <{cfg.data._target_}>")
    datamodule: LightningDataModule = hydra.utils.instantiate(cfg.data)

    log.info(f"Instantiating model <{cfg.model._target_}>")
    model: LightningModule = hydra.utils.instantiate(cfg.model)

    log.info("Instantiating callbacks...")
    callbacks: List[Callback] = instantiate_callbacks(cfg.get("callbacks"))

    log.info("Instantiating loggers...")
    logger: List[Logger] = instantiate_loggers(cfg.get("logger"))

    log.info(f"Instantiating trainer <{cfg.trainer._target_}>")
    trainer: Trainer = hydra.utils.instantiate(cfg.trainer, callbacks=callbacks, logger=logger)

    if cfg.get("train"):
        log.info("Starting training!")
        trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))

    if cfg.get("test"):
        log.info("Starting testing!")
        ckpt_path = trainer.checkpoint_callback.best_model_path
        if ckpt_path == "":
            log.warning("Best ckpt not found! Using current weights for testing...")
            ckpt_path = None
        trainer.test(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
        log.info(f"Best ckpt path: {ckpt_path}")


@hydra.main(version_base="1.3", config_path="../../configs", config_name="train.yaml")
def main(cfg: DictConfig):
    OmegaConf.set_struct(cfg, False)  # allow cfg to be mutable

    task = Task.init(project_name="fluoro-motion-detection", task_name="uniformer-test")
    logger = task.get_logger()
    logger.report_text("You can view your full hydra configuration under Configuration tab in the UI")
    print(OmegaConf.to_yaml(cfg))

    train(cfg)


if __name__ == "__main__":
    main()
  
  
Posted one year ago