Thank you for your help @<1523701205467926528:profile|AgitatedDove14>
"Original PIP" is empty as for this task we can rely on the docker image to provide the python packages
@<1523701205467926528:profile|AgitatedDove14> if we go with the ultralytics case:
INSTALLED PACKAGES for working manual execution
absl-py==2.1.0
albucore==0.0.13
albumentations==1.4.14
anaconda-anon-usage @ file:///croot/anaconda-anon-usage_1710965072196/work
annotated-types==0.7.0
anyio==4.4.0
archspec @ file:///croot/archspec_1709217642129/work
astor==0.8.1
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
astunparse==1.6.3
attrs @ file:///croot/attrs_169571782329...
As I get a bunch of these warnings in both of the clones that failed
WARNING:clearml_agent.helper.package.requirements:Local file not found [torch-tensorrt @ file:///opt/pytorch/torch_tensorrt/py/dist/torch_tensorrt-1.3.0a0-cp38-cp38-linux_x86_64.whl], references removed
Container nvcr.io/nvidia/pytorch:22.12-py3
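Those warnings come from requirements pinned to local wheel paths that were baked into the base image's pip freeze; when the file doesn't exist inside the new container, the agent drops the reference. A minimal sketch of that filtering logic (my own illustration of the behavior, not the agent's actual code):

```python
import os
import re

def drop_missing_local_refs(requirements: list[str]) -> list[str]:
    """Drop 'pkg @ file:///path' entries whose path no longer exists,
    mimicking the 'Local file not found ... references removed' warning."""
    kept = []
    for line in requirements:
        match = re.match(r"^\S+\s+@\s+file://(/\S+)", line)
        if match and not os.path.exists(match.group(1)):
            print(f"WARNING: Local file not found [{line}], reference removed")
            continue
        kept.append(line)
    return kept

reqs = [
    "absl-py==2.1.0",
    "torch-tensorrt @ file:///opt/pytorch/torch_tensorrt/py/dist/torch_tensorrt-1.3.0a0-cp38-cp38-linux_x86_64.whl",
]
print(drop_missing_local_refs(reqs))
```

This is why a requirements list captured inside one image (e.g. the NVIDIA PyTorch container) can fail to reproduce in a different image.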
The original run completes successfully, it's only the runs cloned from the GUI which fail
Resetting and enqueuing a task that previously built successfully also fails 😞
In a cloned run with new container ultralytics/ultralytics:latest
I get this error:
clearml_agent: ERROR: Could not install task requirements!
Command '['/root/.clearml/venvs-builds/3.10/bin/python', '-m', 'pip', '--disable-pip-version-check', 'install', '-r', '/tmp/cached-reqs7171xfem.txt', '--extra-index-url', '
', '--extra-index-url', '
returned non-zero exit status 1.
Maybe it's related to this section?
WARNING:clearml_agent.helper.package.requirements:Local file not found [anaconda-anon-usage @ file:///croot/anaconda-anon-usage_1710965072196/work], references removed
Is this what you had in the original manual execution?
Yes, this installed-packages list is what succeeded via manual submission to the agent
How are you getting:
beautifulsoup4 @ file:///croot/beautifulsoup4-split_1681493039619/work
This comes with the docker image ultralytics/ultralytics:latest
Hi @<1523701205467926528:profile|AgitatedDove14>
ClearML Agent 1.9.0
Final answer was
docker="ultralytics/ultralytics:latest",
docker_args=["--network=host", "--ipc=host"],
agent.package_manager.pip_version=""
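For reference, the working setup above can be sketched as keyword arguments for the task (a hedged sketch: it assumes the clearml SDK's `Task.create` accepts `docker`/`docker_args` keywords, the script path is a placeholder, and the clearml import is commented out so the snippet stands alone):

```python
# Sketch of the final working setup; "train.py" is a hypothetical script name.
task_kwargs = dict(
    docker="ultralytics/ultralytics:latest",       # base image supplies the Python packages
    docker_args=["--network=host", "--ipc=host"],  # host networking + shared IPC namespace
)

# from clearml import Task
# task = Task.create(script="train.py", **task_kwargs)
```

The third line of the answer, `agent.package_manager.pip_version=""`, is an agent-side setting and belongs in the agent's clearml.conf rather than in the task itself.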
@<1523701070390366208:profile|CostlyOstrich36> do you have any ideas?
@<1523701070390366208:profile|CostlyOstrich36> same error now 😞
Environment setup completed successfully
Starting Task Execution:
/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/torch/cuda/__init__.py:128: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11020). Please update your GPU driver by downloading and installing a new version from the URL:
Alternatively, go to:
to install a PyTo...
If I run nvidia-smi it returns valid output and it says the CUDA version is 11.2
This has been resolved now! Thank you for your help @<1523701070390366208:profile|CostlyOstrich36>
I can install the correct torch version with this command:
```
pip install --pre torchvision --force-reinstall --index-url None
```
```
DEBUG Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [21 lines of output]
      Traceback (most recent call last):
        File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_i...
```
Collecting pip<20.2
Using cached pip-20.1.1-py2.py3-none-any.whl (1.5 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 20.0.2
Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
Can't uninstall 'pip'. No files were found to uninstall.
I have set agent.package_manager.pip_version=""
which resolved that message
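For anyone hitting the same pip pin, the setting lives in the agent's clearml.conf (a sketch showing only the relevant keys; section names assumed from the dotted path above):

```
agent {
    package_manager {
        # empty string = don't pin pip to a specific version
        pip_version: ""
    }
}
```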
ERROR: This container was built for NVIDIA Driver Release 530.30 or later, but
version 460.32.03 was detected and compatibility mode is UNAVAILABLE.
[[System has unsupported display driver / cuda driver combination (CUDA_ERROR_SYSTEM_DRIVER_MISMATCH) cuInit()=803]]
To run both the agent and the deployment on the same machine, adding --network=host to the run arguments solved it!
Solved that by setting docker_args=["--privileged", "--network=host"]