
If I were to run an agent that would need to install pandas at some point, I'd run it like this:
OPENBLAS="$(brew --prefix openblas)" clearml-agent daemon --queue default
For example, I had to run OPENBLAS="$(brew --prefix openblas)" pip install pandas to be able to install pandas on my M1 Mac.
Right, I'm saying I had to do that on my Mac. In your case you would have to point it somewhere else. Please check where OpenBLAS is installed on your Ubuntu machine.
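For reference, a rough sketch of the equivalent on Ubuntu; the path below is only an example, check what ldconfig reports first:
```
# a sketch, assuming OpenBLAS was installed via apt (e.g. libopenblas-dev)
ldconfig -p | grep -i openblas        # shows where the shared library actually lives
# then point OPENBLAS at that location before starting the agent
# (hypothetical path, use whatever the command above reports)
OPENBLAS=/usr/lib/x86_64-linux-gnu clearml-agent daemon --queue default
```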
Where is the dataset stored? Maybe you deleted the credentials by mistake, or maybe you are not installing the libraries needed (for example, if using AWS you need boto3; if using GCP, you need google-cloud-storage).
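As a concrete (hedged) example of that last point, installing the client library that matches where the dataset lives:
```
# pick the one that matches your storage backend
pip install boto3                   # S3 / AWS
pip install google-cloud-storage    # GCS / GCP
```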
Line 120 says to uncomment it to enable venv caching (it comes commented out by default, but since I'm copying my own conf it isn't commented there).
Also, I suggested changing the TMPDIR env variable, since /tmp/ didn't have a lot of space.
agent.environment.TMPDIR = ****
Is it ok to see **** instead of the actual path?
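(An equivalent from the shell, in case editing clearml.conf is not an option; the path below is hypothetical, any partition with enough free space works:)
```
# point temporary files at a partition with enough free space, then start the agent
export TMPDIR=/data/tmp     # hypothetical path
mkdir -p "$TMPDIR"
clearml-agent daemon --queue default
```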
Hi AgitatedDove14, I'm talking about the following pip install. After that pip install, it displays the agent's conf, shows the installed packages, and launches the task (no installation).
Running in Docker mode (v19.03 and above) - using default docker image: spoter ['-e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1', '-e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1']
Running task '3ebb680b17874cda8dc7878ddf6fa735'
Storing stdout and stderr log to '/tmp/.clearml_agent_out.tsu2tddl.txt', '/tmp/.clearml_agent_o...
How do I mount my local ssh folder into /root/.ssh/ inside the docker when running clearml-agent?
also, is there a way for it to not install the requirements, and simply run the task?
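(Sketch of what I mean, using plain docker; the image name is hypothetical and the skip variables are the ones shown in the log above; with clearml-agent in docker mode the exact wiring may differ, e.g. via extra docker arguments in clearml.conf:)
```
# mount local ssh keys read-only and tell the agent to skip venv/requirements installation
docker run -it \
  -v "$HOME/.ssh":/root/.ssh:ro \
  -e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 \
  -e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 \
  my-training-image    # hypothetical image name
```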
Thanks TimelyPenguin76 for your answer! So indeed it was mounting it. And how do I check that CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL is actually working in my agent in docker?
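(One way to check, as a sketch:)
```
# find the container the agent started, then list which variables actually reached it
docker ps
docker exec <container-id> env | grep CLEARML_AGENT_SKIP    # replace <container-id>
```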
Thanks for the answer. You're right, I forgot to add that this task runs inside a docker container, and I'm currently only mapping $PWD (the ml folder) into the /app folder in the container.
There is no /usr/share/elasticsearch/logs/clearml.log file (neither inside the container nor on my server).
Yes, I'm suggesting MagnificentWorm7 do that, instead of adding the files to a ClearML dataset at each step.
I could map the root folder of the repo into the container, but that would mean everything ends up in there
Another thing: I had to change 8081 to 8085 since it was already in use.
so when inside the docker, I don’t see the git repo and that’s why ClearML doesn’t see it
Can you share your clearml.conf file (remove the sensitive information first)?
That's why I'm suggesting he do that 🙂
Hi! What the error is saying is that it is looking for the ctbc/image_classification_CIFAR10.py file in your repo.
So when you created the task you were inside a git repo, and ClearML assumed that all your files in it were committed and pushed. However, your repo https://github.com/gradient-ai/PyTorch.git doesn't contain these files.
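So the usual fix is simply to commit and push that file; a rough sketch:
```
# make the file ClearML is looking for part of the repo it will clone
git add ctbc/image_classification_CIFAR10.py
git commit -m "Add CIFAR10 training script"
git push
```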
The problem was the docker image, which had as its entrypoint a bash script with python train.py --epochs=300 hardcoded, so I guess it was never actually running the task setup from ClearML.
so I removed the entrypoint, and now I can see that it tries to install the packages, but it fails because it can’t download the repo
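(For anyone hitting the same thing: the hardcoded entrypoint can also be bypassed without rebuilding the image, using the standard docker flag; the image name below is hypothetical.)
```
# override the image's ENTRYPOINT so the baked-in train.py never runs
docker run -it --entrypoint "" my-training-image /bin/bash
```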
Currently I'm changing /opt/ to my home folder.
Before, the repo was already in the docker image; now the agent runs inside the docker (so it sets up a virtualenv, clones the repo, and installs the packages).
I also changed the permissions of /usr/share/elasticsearch according to this post: https://techoverflow.net/2020/04/18/how-to-fix-elasticsearch-docker-accessdeniedexception-usr-share-elasticsearch-data-nodes/, but I'm getting the same error.
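For reference, the change from that post boils down to something like this (the path below assumes the default /opt/clearml layout, so on my machine it's different since I moved /opt/ to my home folder):
```
# make the host folder mounted into /usr/share/elasticsearch writable by the container user (uid 1000)
sudo chown -R 1000:1000 /opt/clearml/data/elastic_7    # adjust to where your data actually lives
```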
Just do:
import os.path as op
from clearml import Dataset  # import needed for Dataset.get

dataset_folder = Dataset.get(dataset_id="...").get_local_copy()
csv_file = op.join(dataset_folder, 'salary.csv')