@<1523701087100473344:profile|SuccessfulKoala55> For example, the global rank from the failed task in the first scenario

Hi @<1523701087100473344:profile|SuccessfulKoala55> No, I am using a self-hosted ClearML Enterprise server
@<1523701070390366208:profile|CostlyOstrich36> Can you help me with my question?
@<1523701435869433856:profile|SmugDolphin23>
Logs of rank0:
Environment setup completed successfully
Starting Task Execution:
1718702244585 gpuvm-01:gpu3,0 DEBUG InsecureRequestWarning: Certificate verification is disabled! Adding certificate verification is strongly advised. See:
ClearML results page:
/projects/0eae440b14054464a3f9c808ad6447dd/experiments/beaa8c380f3c46f0b6f5a3feab514dc8/output/log
task id [beaa8c380f3c46f0b6f5a3feab514dc8]
world=4
...
@<1523701070390366208:profile|CostlyOstrich36>
@<1523701435869433856:profile|SmugDolphin23> hi! it works! thanks!
I saw similar behavior: the parameters for starting the pipeline are not selectable in the details view, only in the table view
@<1523701070390366208:profile|CostlyOstrich36> If I run the pipeline with the same input parameters, all the steps are re-run as well; nothing is taken from the cache
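For reference, step caching generally has to be enabled per step; a minimal sketch (hypothetical project, parameter and step names), assuming cache_executed_step=True on add_function_step is what controls the reuse:
from clearml import PipelineController

def preprocess(dataset_id: str):  # hypothetical step function
    return dataset_id

pipe = PipelineController(name="demo-pipeline", project="demo", version="1.0.0")  # placeholder names
pipe.add_parameter(name="dataset_id", default="abc123")  # hypothetical pipeline parameter
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    function_kwargs=dict(dataset_id="${pipeline.dataset_id}"),
    function_return=["processed_id"],
    cache_executed_step=True,  # reuse the step only when code and inputs are unchanged
)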
@<1523701070390366208:profile|CostlyOstrich36> Any ideas?
@<1523701070390366208:profile|CostlyOstrich36> can you help me?
Hi @<1523701435869433856:profile|SmugDolphin23> ! I set NODE_RANK in the environment and now
- if gpus=2, node=2, task.launch_multi_node(node): three tasks are created; two of them complete, but one fails. In this case (gpus*nodes - 1) tasks are created, and either some of them crash with an error or all of them do; the behavior is inconsistent.
- if gpus=2, node=2, task.launch_multi_node(node*gpus): seven tasks are created. In this case, all tasks fail except t...
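For reference, a minimal sketch of the first variant above (passing the number of nodes, not nodes*gpus), assuming the dict returned by launch_multi_node() carries the node rank and that NODE_RANK is what pytorch-lightning reads; project/task names are placeholders:
import os
from clearml import Task
import pytorch_lightning as pl

task = Task.init(project_name="examples", task_name="ddp-2x2")  # placeholder names
config = task.launch_multi_node(2)                    # 2 nodes -> one ClearML task per node
os.environ["NODE_RANK"] = str(config.get("node_rank", 0))  # read by pytorch-lightning

trainer = pl.Trainer(accelerator="gpu", devices=2, num_nodes=2, strategy="ddp")
# trainer.fit(model, datamodule=dm)                   # model / datamodule defined elsewhere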
@<1523701435869433856:profile|SmugDolphin23> Everything worked after setting the variables: --env NCCL_IB_DISABLE=1 --env NCCL_SOCKET_IFNAME=ens192 --env NCCL_P2P_DISABLE=1. Previously, these variables were not required for a successful launch. When I run DDP training with two nodes, everything works now. But as soon as I increase the number of nodes (nodes > 2), I get the following error.
Traceback (most recent call last):
File "/root/.clearml/venvs-builds/3.11/code/light...
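For reference, one hedged way to make sure every node's container gets those NCCL variables is to put them in the task's docker arguments; a minimal sketch, assuming set_base_docker is used and with a placeholder image and task names:
from clearml import Task

task = Task.init(project_name="examples", task_name="ddp-multinode")  # placeholder names
task.set_base_docker(
    docker_image="nvcr.io/nvidia/pytorch:23.10-py3",  # placeholder image
    docker_arguments="--env NCCL_IB_DISABLE=1 "
                     "--env NCCL_SOCKET_IFNAME=ens192 "
                     "--env NCCL_P2P_DISABLE=1",
)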
@<1523701070390366208:profile|CostlyOstrich36> yes
The errors that occur in the second case are shown in these screenshots.

@<1523701435869433856:profile|SmugDolphin23> Each task shows that the process allocates only 1 GPU out of 2 (all tasks have the same scalars as below)
If I understand correctly, the cache for pip is stored at /root/.cache/pip. How can I change it? The agent.docker_internal_mounts.pip_cache variable in the config also does not change anything.
@<1523701435869433856:profile|SmugDolphin23> if task.launch_multi_node(4), then all 4 tasks fail
I store my data in S3 and ClearML tracks this data. I want to migrate this data from one ClearML instance to another, that is, transfer it to a different S3 bucket and have the new ClearML instance track it
@<1523701070390366208:profile|CostlyOstrich36> Any ideas?
kubectl exec -it clearml-agent-85fd8ccc6d-7fdk7 -n clearml bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "k8s-glue" out of: k8s-glue, init-k8s-glue (init)
root@clearml-agent-85fd8ccc6d-7fdk7:~# cat /root/clearml.conf
agent.git_user=gitlab_agent
agent.git_pass=682S-pH9ay1nidsxBGyT
agent.cuda_version=118
#agent.docker_internal_mounts.venv_build=/home/s3_cache/venvs-builds
#agent.do...
@<1523701435869433856:profile|SmugDolphin23> It is possible to request up to 5 workers in the toy example (a feed-forward network on MNIST), BUT it is not possible to request more than 2 workers on a real, large model
I create a pipeline via PipelineController, adding a step as a function:
pipe = PipelineController(
    name=cfg.clearml.pipeline_name,
    project=cfg.clearml.project_name,
    target_project=True,
    version=cfg.clearml.version,
    add_pipeline_tags=True,
    docker=cfg.clearml.dockerfile,
    docker_args=DefaultMLPLATparam().docker_arg,
    packages=packages,
    retry_on_failure=3,
)
for parameter in cfg.clearml.params:
    pipe.add_...
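For illustration only (not the actual truncated code above), a purely hypothetical sketch of the kind of add_* calls that usually follow, with placeholder names and assumed fields on cfg:
def train_step(lr: float):  # hypothetical step function
    return lr

for parameter in cfg.clearml.params:  # cfg as in the snippet above
    pipe.add_parameter(name=parameter.name, default=parameter.default)  # assumed fields

pipe.add_function_step(
    name="train",
    function=train_step,
    function_kwargs=dict(lr="${pipeline.lr}"),
    cache_executed_step=True,
)
pipe.start(queue="default")  # placeholder queue name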
in the clearml section in values.yaml:
clearml:
  ...
  clearmlConfig: |-
    agent.docker_pip_cache="/mnt/pip_cache"
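For reference, a hedged sketch of how the two related settings usually look in clearml.conf (values are examples): docker_pip_cache is the host-side folder the agent mounts, while docker_internal_mounts.pip_cache is the path it appears at inside the container:
agent {
    docker_pip_cache: "/mnt/pip_cache"      # host-side cache folder mounted by the agent
    docker_internal_mounts {
        pip_cache: "/root/.cache/pip"       # mount point inside the container
    }
}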
@<1523701435869433856:profile|SmugDolphin23> gloo doesn't work for me either
but plain torch works with nccl and task.launch_multi_node
the problems arise specifically with pytorch-lightning
Do I understand correctly that it is impossible to disable the installation of system packages without CLEARML_AGENT_SKIP_PIP_VENV_INSTALL and CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL?
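For reference, a hedged sketch of how these two agent-side variables are typically set (values are examples; either one can be used depending on what should be skipped):
# reuse an existing interpreter instead of creating a fresh virtualenv
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3.11
# or skip the python environment / package installation step entirely
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
clearml-agent daemon --queue default --docker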
Hi @<1523701087100473344:profile|SuccessfulKoala55> where can I get examples of REST API requests for creating reports?
@<1523701070390366208:profile|CostlyOstrich36> I have 2 clearml-serving instances with endpoints