Answered

Hi, I would like to understand how I can set the pip cache location for my agent. I thought I already had the right setting with docker_internal_mounts.pip_cache. Is there anything else I have to set?
chown: cannot access '/root/.cache/pip': No such file or directory

docker_internal_mounts { pip_cache: "/clearml-cache/pip-cache" ... }

  
  
Posted 2 years ago

Answers 30


Hi, I would like to understand how I can set the pip cache location for my agent,

ClumsyElephant70 by default the pip cache (and all the other cache folders) is mounted back into the host itself, under ~/.clearml/
I'm assuming the idea is a shared cache; if that's the case, do:
docker_pip_cache = ~/my_shared_nfs/pip-cache
https://github.com/allegroai/clearml-agent/blob/e3e6a1dda81bee2dd20a64d09746568e415f1823/docs/clearml.conf#L139
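To make that concrete, a minimal clearml.conf sketch (the NFS mount path is an example, not from the thread) pointing the agent's docker-mapped caches at a folder every agent can reach:

```
agent {
    # pip cache folder mapped into docker, shared between all agents via NFS
    docker_pip_cache: "~/my_shared_nfs/pip-cache"
    # apt cache folder mapped into docker, shared the same way
    docker_apt_cache: "~/my_shared_nfs/apt-cache"
}
```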

  
  
Posted 2 years ago

I want to cache as much as possible, and /clearml-cache/venvs-cache (on the host) does contain cached venvs. But /clearml-cache/venvs-builds is empty. My question was how to also cache venvs-builds.

  
  
Posted 2 years ago

Because I think you need to map the pip cache folder into the docker container

  
  
Posted 2 years ago

AgitatedDove14 one more thing regarding the initial question,
apt-cache, pip-cache, pip-download-cache, vcs-cache and venvs-cache contain data on the shared clearml-cache, but venvs-build does not. What sort of data would be stored in the venvs-build folder? I do have venvs_dir = /clearml-cache/venvs-builds specified in the clearml.conf

  
  
Posted 2 years ago

It appears in multiple places. It seems like the mapping of the pip and apt caches does work, but the access rights are now an issue.

  
  
Posted 2 years ago

Ok, it is more of a docker issue. Reading the thread, I guess it is not feasible.

  
  
Posted 2 years ago

They all want to be ubuntu:gpu0. Any idea how I can randomize it? Setting the CLEARML_WORKER_ID env var somehow does not work

You should not have this entry in the conf file; the "worker_id" should be unique (and is based on the "worker_name" as a prefix). You can control it via env variables:
CLEARML_WORKER_ID
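A minimal sketch of that idea, assuming a hostname-plus-GPU naming scheme (the scheme and the daemon flags are illustrative, not from the thread): derive a unique CLEARML_WORKER_ID per agent instead of hard-coding worker_id in the shared clearml.conf.

```shell
# Derive a unique worker id per host/GPU so agents sharing one
# clearml.conf no longer collide on "ubuntu:gpu0".
for gpu in 0 1; do
  worker_id="$(hostname):gpu${gpu}"
  # The real invocation would be something like:
  #   CLEARML_WORKER_ID="${worker_id}" clearml-agent daemon --gpus "${gpu}" --docker
  echo "${worker_id}"
done
```

Each loop iteration prints a distinct id, so two agents started this way register as separate workers.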

  
  
Posted 2 years ago

In theory it should have worked.
Can you send me the full Task log? (with cache and everything?)
I suspect since these are not the default folders, something is misconfigured / missing
(you can DM the log, so it won't end up on a public channel)

  
  
Posted 2 years ago

the cache on the host is mounted as nfs and the nfs server was configured to not allow the clients to do root operations
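That matches classic NFS root squashing: the server remaps the client's root to an unprivileged user, so chown from inside the container fails. A hedged /etc/exports sketch (subnet and options are examples; note that no_root_squash weakens security and may not be acceptable on a shared server):

```
# /etc/exports on the NFS server: let clients' root act as root on the
# cache export so the agent's chown of the mounted cache folders succeeds
/clearml-cache  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
```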

  
  
Posted 2 years ago

or only not for apt and pip?

  
  
Posted 2 years ago

` # pip cache folder mapped into docker, used for python package caching
docker_pip_cache = /clearml-cache/pip-cache
# apt cache folder mapped into docker, used for ubuntu package caching
docker_apt_cache = /clearml-cache/apt-cache

docker_internal_mounts {
     apt_cache: "/clearml-cache/apt-cache"
     pip_cache: "/clearml-cache/pip-cache"
     vcs_cache: "/clearml-cache/vcs-cache"
     venv_build: "/clearml-cache/venvs-builds"
     pip_download: "/clearml-cache/pip-download-cache"
     ssh_folder: "/clearml-cache/ssh-cache"
} `
  
  
Posted 2 years ago

So it should cache the venvs right?

Correct,

path: /clearml-cache/venvs-cache

Just making sure, this is the path to the host cache folder

ClumsyElephant70 I think I lost track of the current issue 😞 what's exactly not being cached (or working)?

  
  
Posted 2 years ago

probably found the issue

  
  
Posted 2 years ago

So I don't need docker_internal_mounts at all?

  
  
Posted 2 years ago

The agents also share the clearml.conf file which causes some issue with the worker_id/worker_name. They all want to be ubuntu:gpu0. Any idea how I can randomize it? Setting the CLEARML_WORKER_ID env var somehow does not work

  
  
Posted 2 years ago

Hi ClumsyElephant70 ,
What about
# pip cache folder mapped into docker, used for python package caching
docker_pip_cache = ~/.clearml/pip-cache
# apt cache folder mapped into docker, used for ubuntu package caching
docker_apt_cache = ~/.clearml/apt-cache

  
  
Posted 2 years ago

Try running with all of them commented out, so it will take the defaults

  
  
Posted 2 years ago

Exactly, all agents should share the cache that is mounted via nfs. I think it is working now 🙂

  
  
Posted 2 years ago

Hey Natan, good point! But I have actually set both

  
  
Posted 2 years ago

are they in conflict?

  
  
Posted 2 years ago

Hi AgitatedDove14 one more question about efficient caching, is it possible to cache/share docker images between agents?

  
  
Posted 2 years ago

I think you need to map internal docker pip cache to /root/.cache/pip
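A sketch of what that mapping amounts to (the host path is the one used in this thread; the docker invocation itself is illustrative): the agent effectively adds a bind mount so pip's in-container cache directory lands on the shared host folder.

```shell
# Bind-mount the shared host pip cache onto the container path pip uses
HOST_PIP_CACHE=/clearml-cache/pip-cache
MOUNT_ARG="-v ${HOST_PIP_CACHE}:/root/.cache/pip"
echo "docker run ${MOUNT_ARG} <image> ..."
```

With that mount in place, packages pip downloads inside the container persist on the host and are reused across tasks.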

  
  
Posted 2 years ago

I do have this setting in my clearml.conf file
venvs_cache: {
    free_space_threshold_gb: 50.0
    path: /clearml-cache/venvs-cache
}
So it should cache the venvs, right? I also see content in the /clearml-cache/venvs-cache folder. Because I have venvs_cache configured, there is nothing in venvs-builds, since it uses the cache?

  
  
Posted 2 years ago

What sort of data would be stored in the

venvs-build

folder?

ClumsyElephant70 a temporary (for the lifetime of the task execution) virtual environment, including the code etc. It is deleted and recreated for every new task launched (or restored from cache, if venvs_cache is enabled)

  
  
Posted 2 years ago

so now there is the user conflict between the host and the agent inside the container

  
  
Posted 2 years ago

Can you add a bit more from the log for more context as well?

  
  
Posted 2 years ago

hm... Now with commenting it out I have the following problem:
docker_pip_cache = /clearml-cache/pip-cache
On host:
drwxrwxrwx 5 root root 5 Mar 10 17:17 pip-cache

in task logs:
chown: changing ownership of '/root/.cache/pip': Operation not permitted

  
  
Posted 2 years ago

is it possible to cache/share docker images between agents?

Like a shared folder for docker pulled images?
https://forums.docker.com/t/how-to-share-the-images-at-all-the-local-hosts/24894/7
you might be able to share "/var/lib/docker/image", but I'm not sure how stable it is (definitely risky)
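A safer (if slower) alternative to sharing /var/lib/docker, sketched here with example image and path names: export images to the shared cache with docker save and load them on the other hosts.

```shell
# Derive a filesystem-safe tarball name on the shared cache for an image
IMAGE=nvidia/cuda:11.0-runtime
TARBALL=/clearml-cache/images/$(echo "${IMAGE}" | tr '/:' '__').tar
# On the host that pulled/built the image:
echo "host A: docker save ${IMAGE} -o ${TARBALL}"
# On any other agent host:
echo "host B: docker load -i ${TARBALL}"
```

Unlike sharing the docker storage directory, save/load never has two daemons touching the same files, at the cost of duplicating the image on each host.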

  
  
Posted 2 years ago

W: chown to _apt:root of directory /var/cache/apt/archives/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
W: chmod 0700 of directory /var/cache/apt/archives/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
Collecting pip==20.1.1

  
  
Posted 2 years ago

Executing: ['docker', 'run',......]
chown: changing ownership of '/root/.cache/pip': Operation not permitted
Get:1 focal-security InRelease [114 kB]
Get:2 focal InRelease [265 kB]
Get:3 focal-updates InRelease [114 kB]
It is at the top of the logs

  
  
Posted 2 years ago