Hi SubstantialElk6,
You can use any docker image you have access to.
Can you attach the logs with the error? virtualenv should be installed with the clearml-agent
Hi, so you mean I need to install virtualenv in my base image?
yep, you need it to be part of the environment
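For example, a minimal sketch of baking it into the docker base image (assuming the image is Debian/Ubuntu-based and already has python3 with pip; adjust the interpreter to the one your task will actually use):

```
# run during the image build (e.g. a Dockerfile RUN step) so that virtualenv
# is available to the Python inside the container
python3 -m pip install virtualenv
```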
Hi, this is the log. I didn't see any attempt from the agent to install virtualenv on the base image.
```
1618369068169 clearml-gpu-id-b926b4b809f544c49e99625380a1534b:gpuGPU-4ad68290-0daf-4634-6768-16fad73d47a3 DEBUG Current configuration (clearml_agent v0.17.2, location: /tmp/.clearml_agent.wgsmv2t9.cfg):
agent.worker_id = clearml-gpu-id-b926b4b809f544c49e99625380a1534b:gpuGPU-4ad68290-0daf-4634-6768-16fad73d47a3
agent.worker_name = clearml-gpu-id-b926b4b809f544c49e99625380a1534b
agent.force_git_ssh_protocol = false
agent.python_binary =
agent.package_manager.type = pip
agent.package_manager.pip_version = <20.2
agent.package_manager.system_site_packages = true
agent.package_manager.force_upgrade = false
agent.package_manager.conda_channels.0 = defaults
agent.package_manager.torch_nightly = false
agent.package_manager.force_repo_requirements_txt = true
agent.package_manager.priority_packages.0 = wheel
agent.package_manager.priority_packages.1 = setuptools
agent.venvs_dir = /root/.clearml/venvs-builds
agent.venvs_cache.max_entries = 10
agent.venvs_cache.free_space_threshold_gb = 2.0
agent.vcs_cache.enabled = true
agent.vcs_cache.path = /root/.clearml/vcs-cache
agent.venv_update.enabled = false
agent.pip_download_cache.enabled = true
agent.pip_download_cache.path = /root/.clearml/pip-download-cache
agent.translate_ssh = true
agent.reload_config = false
agent.docker_pip_cache = /root/.clearml/pip-cache
agent.docker_apt_cache = /root/.clearml/apt-cache
agent.docker_force_pull = true
agent.default_docker.image = harbor.io/nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
agent.default_docker.arguments.0 = --ipc=host
agent.enable_task_env = true
agent.git_user =
agent.extra_docker_shell_script.0 = apt-get install -y build-essential
agent.default_python = 3.7
agent.cuda_version = 102
agent.cudnn_version = 0
api.version = 1.5
api.verify_certificate = true
api.default_version = 1.5
api.http.max_req_size = 15728640
api.http.retries.total = 240
api.http.retries.connect = 240
api.http.retries.read = 240
api.http.retries.redirect = 240
api.http.retries.status = 240
api.http.retries.backoff_factor = 1.0
api.http.retries.backoff_max = 120.0
api.http.wait_on_maintenance_forever = true
api.http.pool_maxsize = 512
api.http.pool_connections = 512
api.api_server =
api.web_server =
api.files_server =
api.credentials.access_key = FFKW1VX07E8TVW5N2CWL
api.host =
sdk.storage.cache.default_base_dir = ~/.clearml/cache
sdk.storage.cache.size.min_free_bytes = 10GB
sdk.storage.direct_access.0.url = file://*
sdk.metrics.file_history_size = 100
sdk.metrics.matplotlib_untitled_history_size = 100
sdk.metrics.images.format = JPEG
sdk.metrics.images.quality = 87
sdk.metrics.images.subsampling = 0
sdk.metrics.tensorboard_single_series_per_graph = false
sdk.network.metrics.file_upload_threads = 4
sdk.network.metrics.file_upload_starvation_warning_sec = 120
sdk.network.iteration.max_retries_on_server_error = 5
sdk.network.iteration.retry_backoff_factor_sec = 10
sdk.aws.s3.key =
sdk.aws.s3.region =
sdk.aws.s3.credentials.0.host = 192.168.56.253:9000
sdk.aws.s3.credentials.0.key = minioadmin
sdk.aws.s3.credentials.0.multipart = false
sdk.aws.s3.credentials.0.secure = false
sdk.aws.boto3.pool_connections = 512
sdk.aws.boto3.max_multipart_concurrency = 16
sdk.log.null_log_propagate = false
sdk.log.task_log_buffer_capacity = 66
sdk.log.disable_urllib3_info = true
sdk.development.task_reuse_time_window_in_hours = 72.0
sdk.development.vcs_repo_detect_async = true
sdk.development.store_uncommitted_code_diff = true
sdk.development.support_stopping = true
sdk.development.default_output_uri =
sdk.development.force_analyze_entire_repo = false
sdk.development.suppress_update_message = false
sdk.development.detect_with_pip_freeze = false
sdk.development.worker.report_period_sec = 2
sdk.development.worker.ping_period_sec = 5
sdk.development.worker.log_stdout = true
sdk.development.worker.report_global_mem_used = false
Executing task id [b926b4b809f544c49e99625380a1534b]:
repository =
branch = master
version_num = 1c76623388d72e754fa289d41e197781760f6cc3
tag =
docker_cmd = harbor.io/custom/pg_es_base_app:latest --env GIT_SSL_NO_VERIFY=true --env TRAINS_AGENT_GIT_USER=testuser --env TRAINS_AGENT_GIT_PASS=testuser
entry_point = flert_model.py
working_dir = app
[package_manager.force_repo_requirements_txt=true] Skipping requirements, using repository "requirements.txt"
/usr/bin/python3.6: No module named virtualenv
clearml_agent: ERROR: Command '['python3.6', '-m', 'virtualenv', '/root/.clearml/venvs-builds/3.6', '--system-site-packages']' returned non-zero exit status 1.
```
SubstantialElk6 you are right, only an agent running in docker mode will do it; you are running in venv mode.
The clearml-agent will try to build a dedicated virtual environment for your task using virtualenv. You can just install it in the environment the clearml-agent itself is running from (python3.6?) with python3.6 -m pip install virtualenv and it should work 🙂
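In practice that is just (a minimal sketch, assuming the agent's Python is /usr/bin/python3.6 as the log shows and the machine can reach PyPI):

```
# install virtualenv into the same Python the clearml-agent runs under
python3.6 -m pip install virtualenv

# sanity check: should print a version instead of "No module named virtualenv"
python3.6 -m virtualenv --version
```

After that, the step that failed in the log (python3.6 -m virtualenv /root/.clearml/venvs-builds/3.6 --system-site-packages) should go through.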