I mean in the clearml-server docker
no because every user that is trying to write in the bucket has the same credentials
another thing: I had to change port 8081 to 8085, since it was already in use
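An alternative to editing files inside the container is remapping only the host side of the port binding in docker-compose.yml; a minimal sketch, assuming the default clearml-server compose layout (service name and original ports as in the standard file):

```yaml
# hypothetical excerpt of clearml-server's docker-compose.yml:
# expose the fileserver on host port 8085 while the container keeps listening on 8081
fileserver:
  ports:
    - "8085:8081"
```

This way nothing inside the image needs to change; only clients that reference the fileserver URL need the new host port.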
oh but `docker ps` shows me 8081 ports for the webserver, apiserver and fileserver containers
` CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                                                            NAMES
0b3f563d04af   allegroai/clearml:latest   "/opt/clearml/wrappe…"   7 minutes ago   Up 7 minutes   8008/tcp, 8080-8081/tcp, 0.0.0.0:8080->80/tcp, :::8080->80/tcp   clear... `
so I can run the experiments, I can see them, but no plots are saved because there is an upload problem when uploading to localhost:8085
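If the fileserver moved to a new host port, the client side has to point its uploads there too; a hedged sketch of the relevant clearml.conf entry (the key follows the standard clearml.conf layout, the host and port are assumptions from this thread):

```
api {
    # assumption: the fileserver is now reachable on host port 8085
    files_server: http://localhost:8085
}
```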
that depends…would that only keep the latest version of each file?
Hi AgitatedDove14 , I’m talking about the following pip install.
After that pip install, it displays agent’s conf, shows installed packages, and launches the task (no installation)
` Running in Docker mode (v19.03 and above) - using default docker image: spoter ['-e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1', '-e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1']
Running task '3ebb680b17874cda8dc7878ddf6fa735'
Storing stdout and stderr log to '/tmp/.clearml_agent_out.tsu2tddl.txt', '/tmp/.clearml_agent_o... `
before the repo was already in the docker, but now it is running the agent inside the docker (so setting a virtualenv, and cloning the repo, and installing the packages)
the problem was the docker image, which had as entrypoint a bash script with `python train.py --epochs=300`
hardcoded, so I guess it was never actually running the task setup from clearml.
great! and I saw that some system packages needed for opencv were installed automatically, and that this could be turned off. Now I’m just wondering if I could remove the pip install at the very beginning, so it starts straight away
not that much, I was just wondering if it was possible :-)
Thanks TimelyPenguin76 for your answer! So indeed it was mounting it, and how do I check that “CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL” is working in my agent in docker?
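One quick way to verify the flag actually reaches the agent container is to print it from inside the running task; a minimal sketch (the printed messages are mine, the env var name is from the agent log above):

```python
import os

# hypothetical check: run this inside the task/container to see
# whether the skip flag was passed through to the environment
skip = os.environ.get("CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL")
print("python env install will be skipped" if skip else "flag not set")
```

Another sanity check is the agent's own console output: when the flag is active, the log should not show a virtualenv being created or packages being installed.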
Right, but there is a lot of free space (257 GB) in the home folder
ok, I entered the container, replaced all occurrences of 8081 with 8085 in every file, committed the container, and changed the docker-compose.yml
to use that image instead of allegroai/clearml:latest
and now it works 🙂
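For the record, that in-container search-and-replace can be scripted instead of done by hand; a small sketch in Python (the helper name is hypothetical, and it assumes plain-text config files):

```python
from pathlib import Path

def remap_port(path, old="8081", new="8085"):
    """Replace every occurrence of the old port with the new one in a config file."""
    p = Path(path)
    p.write_text(p.read_text().replace(old, new))
```

Run it over each config file before committing the container, so the edit is reproducible if the image is ever rebuilt.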
I could map the root folder of the repo into the container, but that would mean everything ends up in there
so when inside the docker, I don’t see the git repo and that’s why ClearML doesn’t see it
I’m suggesting MagnificentWorm7 to do that yes, instead of adding the files to a ClearML dataset in each step
would it be possible to replace the `dataset.add_files` call with a function that moves your files to a common folder (local or cloud), and then use the last step in the DAG to create the dataset from that folder?
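A sketch of that idea, assuming a local staging folder; the function name is hypothetical, and the ClearML calls at the end follow the standard Dataset API but are left commented since they need a running server:

```python
import shutil
from pathlib import Path

def stage_outputs(step_files, staging_dir):
    """Move each pipeline step's output files into one shared staging folder.

    step_files: iterable of file paths produced by a step.
    staging_dir: common folder (local, or a mounted cloud path).
    """
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    for f in step_files:
        shutil.move(str(f), staging / Path(f).name)
    return staging

# In the final DAG step, build the dataset once from the staging folder:
# from clearml import Dataset
# ds = Dataset.create(dataset_name="merged", dataset_project="demo")
# ds.add_files(path=str(staging))
# ds.upload()
# ds.finalize()
```

Each intermediate step then only moves files, and the dataset (and its single version) is created once at the end.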
there is no /usr/share/elasticsearch/logs/clearml.log file (neither inside the container nor on my server)
it would be easier for a sysadmin to centralize the bucket credentials in the clearml-server, without the need to distribute them. Every user on the server would share the same credentials without needing to know them. Makes sense?