Answered
Hey there! Quick question about clearml-agent, Docker and Conda. I’m trying to use Conda as package manager with an agent, but I get the following error message:

Hey there! Quick question about clearml-agent, docker and conda. I’m trying to use conda as package manager with an agent, but I get the following error message:
clearml_agent: ERROR: ERROR: package manager "conda" selected, but 'conda' executable could not be located
I tried installing Miniconda in almost every directory on the agent machine, but the error persists. Am I missing something obvious here? 🙂
I am running the agent in docker mode and so far there weren’t any problems with pip as package manager.
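For context, this is roughly the setup that produces the error - a minimal sketch, where the queue name is just a placeholder:
` # clearml.conf on the agent machine already selects conda:
#   agent.package_manager.type = conda
# the agent itself is started in docker mode:
clearml-agent daemon --queue default --docker `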

  
  
Posted one year ago

Answers 9


Hi BitingKangaroo95, are you using Windows?

  
  
Posted one year ago

I deployed a new instance real quick and installed everything again. Conda is found via which conda, and its location also shows up in echo $PATH. The error still persists.

  
  
Posted one year ago

By default, the agent uses the system PATH (env var) to locate conda, or falls back to which conda to try and locate the conda executable.
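For example, both lookup paths can be sanity-checked on the machine (or inside the container) that builds the environment - just a sketch, and the Miniconda location below is only an example:
` # is a conda directory on the PATH the agent sees?
echo $PATH | tr ':' '\n' | grep -i conda

# can which resolve the executable?
which conda && conda --version

# if not, add the (example) Miniconda install location to PATH
export PATH="$HOME/miniconda3/bin:$PATH" `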

  
  
Posted one year ago

Thank you very much for your quick reply!

Gladly 🙂

Can you verify conda can be located by one of these two options?

  
  
Posted one year ago

Thank you very much for your quick reply! I’m using a Mac as my local machine, and the agent is running on a Linux VM (Debian).

  
  
Posted one year ago

These are the last few lines from the console output, in case it is helpful:
` Current configuration (clearml_agent v1.5.0, location: /tmp/clearml.conf):

api.version = 1.5
api.verify_certificate = true
api.default_version = 1.5
api.http.max_req_size = 15728640
api.http.retries.total = 240
api.http.retries.connect = 240
api.http.retries.read = 240
api.http.retries.redirect = 240
api.http.retries.status = 240
api.http.retries.backoff_factor = 1.0
api.http.retries.backoff_max = 120.0
api.http.wait_on_maintenance_forever = true
api.http.pool_maxsize = 512
api.http.pool_connections = 512
api.api_server = http://*****:8008
api.web_server = http://***:8080
api.files_server = http://***:8081
api.credentials.access_key = **************
api.host = http://*********:8008
sdk.storage.cache.default_base_dir = /clearml_agent_cache
sdk.storage.cache.size.min_free_bytes = 10GB
sdk.storage.direct_access.0.url = file://

sdk.metrics.file_history_size = 100
sdk.metrics.matplotlib_untitled_history_size = 100
sdk.metrics.images.format = JPEG
sdk.metrics.images.quality = 87
sdk.metrics.images.subsampling = 0
sdk.metrics.tensorboard_single_series_per_graph = false
sdk.network.metrics.file_upload_threads = 4
sdk.network.metrics.file_upload_starvation_warning_sec = 120
sdk.network.iteration.max_retries_on_server_error = 5
sdk.network.iteration.retry_backoff_factor_sec = 10
sdk.aws.s3.key =
sdk.aws.s3.region =
sdk.aws.boto3.pool_connections = 512
sdk.aws.boto3.max_multipart_concurrency = 16
sdk.log.null_log_propagate = false
sdk.log.task_log_buffer_capacity = 66
sdk.log.disable_urllib3_info = true
sdk.development.task_reuse_time_window_in_hours = 72.0
sdk.development.vcs_repo_detect_async = true
sdk.development.store_uncommitted_code_diff = true
sdk.development.support_stopping = true
sdk.development.default_output_uri =
sdk.development.force_analyze_entire_repo = false
sdk.development.suppress_update_message = false
sdk.development.detect_with_pip_freeze = false
sdk.development.worker.report_period_sec = 2
sdk.development.worker.ping_period_sec = 30
sdk.development.worker.log_stdout = true
sdk.development.worker.report_global_mem_used = false
agent.worker_id = 12gb-robin:gpu0
agent.worker_name = 12gb-robin
agent.force_git_ssh_protocol = false
agent.python_binary =
agent.package_manager.type = conda
agent.package_manager.pip_version = <20.2
agent.package_manager.system_site_packages = true
agent.package_manager.force_upgrade = false
agent.package_manager.conda_channels.0 = pytorch
agent.package_manager.conda_channels.1 = conda-forge
agent.package_manager.conda_channels.2 = defaults
agent.package_manager.priority_optional_packages.0 = pygobject
agent.package_manager.torch_nightly = false
agent.package_manager.conda_env_as_base_docker = false
agent.venvs_dir = /root/.clearml/venvs-builds
agent.venvs_cache.max_entries = 10
agent.venvs_cache.free_space_threshold_gb = 2.0
agent.venvs_cache.path = ~/.clearml/venvs-cache
agent.vcs_cache.enabled = true
agent.vcs_cache.path = /root/.clearml/vcs-cache
agent.venv_update.enabled = false
agent.pip_download_cache.enabled = true
agent.pip_download_cache.path = /root/.clearml/pip-download-cache
agent.translate_ssh = true
agent.reload_config = false
agent.docker_pip_cache = /root/.clearml/pip-cache
agent.docker_apt_cache = /root/.clearml/apt-cache
agent.docker_force_pull = false
agent.default_docker.image = nvidia/cuda:11.6.0-cudnn8-runtime-ubuntu20.04
agent.default_docker.arguments.0 = --ipc=host
agent.default_docker.arguments.1 = --privileged
agent.enable_task_env = false
agent.hide_docker_command_env_vars.enabled = true
agent.hide_docker_command_env_vars.parse_embedded_urls = true
agent.abort_callback_max_timeout = 1800
agent.docker_internal_mounts.sdk_cache = /clearml_agent_cache
agent.docker_internal_mounts.apt_cache = /var/cache/apt/archives
agent.docker_internal_mounts.ssh_folder = ~/.ssh
agent.docker_internal_mounts.ssh_ro_folder = /.ssh
agent.docker_internal_mounts.pip_cache = /root/.cache/pip
agent.docker_internal_mounts.poetry_cache = /root/.cache/pypoetry
agent.docker_internal_mounts.vcs_cache = /root/.clearml/vcs-cache
agent.docker_internal_mounts.venv_build = ~/.clearml/venvs-builds
agent.docker_internal_mounts.pip_download = /root/.clearml/pip-download-cache
agent.apply_environment = true
agent.apply_files = true
agent.custom_build_script =
agent.git_user =
agent.extra_docker_arguments.0 = --privileged
agent.extra_docker_shell_script.0 = apt-get install -y s3fs
agent.default_python = 3.8
agent.cuda_version = 116
agent.cudnn_version = 0

Executing task id [c9cc71be12cc4bd68b01bdd15f18ddab]:
repository =
branch =
version_num =
tag =
docker_cmd = nvidia/cuda:11.6.0-cudnn8-runtime-ubuntu20.04 --network host
entry_point = interactive_session.py
working_dir = .

clearml_agent: ERROR: ERROR: package manager "conda" selected, but 'conda' executable could not be located `

  
  
Posted one year ago

Thanks for this information! Then I know it must have something to do with the conda installation. I will dig deeper into that and post an update if I’m successful - maybe it will be helpful for someone else in the future.

  
  
Posted one year ago

Shouldn't matter, since this is done during the virtualenv installation - either on the machine the agent is running on (no docker), or inside the docker container.
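So with docker mode, conda has to be locatable inside the container rather than on the host. A quick way to check, using the image from the configuration dump above (the alternative image mentioned below is only an example):
` # does the image the agent uses actually ship conda?
docker run --rm nvidia/cuda:11.6.0-cudnn8-runtime-ubuntu20.04 which conda

# if this prints nothing, conda is missing inside the container - one option is an
# image that bundles it (e.g. continuumio/miniconda3), another is installing
# Miniconda via agent.extra_docker_shell_script in clearml.conf `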

  
  
Posted one year ago

Yes, will do! Does it matter for the agent if it runs in docker mode? I think not, right?

  
  
Posted one year ago