Can you actually reproduce my problem when also using `conda_freeze: true`?
```
==> 2021-03-11 12:50:38 <==
# cmd: /home/tim/miniconda3/condabin/conda create --yes --mkdir --prefix /home/tim/.clearml/venvs-builds/3.8 python=3.8
--
==> 2021-03-11 12:50:40 <==
# cmd: /home/tim/miniconda3/condabin/conda install -p /home/tim/.clearml/venvs-builds/3.8 -c defaults -c conda-forge -c pytorch cudatoolkit=11.0 --quiet --json
--
==> 2021-03-11 12:50:43 <==
# cmd: /home/tim/miniconda3/condabin/conda install -p /home/tim/.clearml/venvs-builds/3.8 -c defaults -c conda-forge -c p...
```
Also I can see that clearml correctly loads the config:
STORAGE S3BucketConfig(bucket='clearml', host='myhost:9000', key='mykey', secret='mysecret', token='', multipart=False, acl='', secure=True, region=None, verify=True, use_credentials_chain=False)
But does this mean the logger will still use the default fileserver?
Is `sdk.development.default_output_uri` used with `s3://ip:9000/clearml` or `ip:9000/clearml`?
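For reference, a minimal sketch of both ways to point task output at the bucket, assuming the non-AWS form `s3://host:port/bucket` and using placeholder project/task names (host, port, and bucket are copied from the `S3BucketConfig` above):

```
from clearml import Task

# Assumption: for a non-AWS endpoint (e.g. MinIO) the full form
# s3://<host>:<port>/<bucket> is used, matching the credentials entry above.
task = Task.init(
    project_name="examples",      # placeholder
    task_name="s3-output-test",   # placeholder
    output_uri="s3://myhost:9000/clearml",
)

# The logger's upload destination can also be set explicitly:
task.get_logger().set_default_upload_destination("s3://myhost:9000/clearml")
```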
This is the error I get from setting the logger upload destination:
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
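To narrow down whether the key/secret themselves are the problem, a small debugging sketch that talks to the endpoint directly with boto3, bypassing ClearML entirely (endpoint, bucket, and credentials are placeholders copied from the config above; `secure=True` is assumed to mean https):

```
import boto3

# Try the exact operation that fails (PutObject) directly against the endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://myhost:9000",   # placeholder host/port from the config
    aws_access_key_id="mykey",            # placeholder key
    aws_secret_access_key="mysecret",     # placeholder secret
)
s3.put_object(Bucket="clearml", Key="connectivity-test.txt", Body=b"ok")
```

If this raises the same InvalidAccessKeyId error, the credentials/endpoint pair is rejected by the storage itself; if it succeeds, ClearML is probably not matching the `sdk.aws.s3.credentials` entry for that host.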
```
apiserver:
  command:
    - apiserver
  container_name: clearml-apiserver
  image: allegroai/clearml:latest
  restart: unless-stopped
  volumes:
    - /opt/clearml/logs:/var/log/clearml
    - /opt/clearml/config:/opt/clearml/config
    - /opt/clearml/data/fileserver:/mnt/fileserver
  depends_on:
    - redis
    - mongo
    - elasticsearch
    - fileserver
    - fileserver_datasets
  environment:
    CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
    CLEARML_...
```
For example, I get the following error if I simply clone and rerun:
ERROR: Could not find a version that satisfies the requirement ruamel_yaml_conda>=0.11.14 (from conda==4.10.1->-r /tmp/cached-reqs6wtc73be.txt (line 28)) (from versions: none)
ERROR: No matching distribution found for ruamel_yaml_conda>=0.11.14 (from conda==4.10.1->-r /tmp/cached-reqs6wtc73be.txt (line 28))
I see, so it is actually not related to clearml 🎉
In the first run the package only existed because it is preinstalled in the docker image. Afaik, in the second run it is also preinstalled, but pip first tries to resolve it from PyPI and only then checks whether it already exists. But I am not too sure about this.
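If that is the case, one possible workaround (an assumption on my side, not something I have verified) is to exclude the conda-only package from the recorded pip requirements before `Task.init`, so the agent does not try to resolve it from PyPI on a re-run:

```
from clearml import Task

# Assumption: Task.ignore_requirements drops the named package from the
# requirements recorded with the task; it must be called before Task.init().
Task.ignore_requirements("ruamel_yaml_conda")

task = Task.init(project_name="examples", task_name="rerun-test")  # placeholder names
```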
No no, I was just wondering how much effort it takes to create something like ClearML. And your answer gives me a rough estimate 🙂
Haha, fortunately I have a good job already. Just wanted to know how many people are actively working on clearml.
The first one is the original, the second one is the clone.
Could you elaborate on that:
"So the agent failed to actually restore it from the git (files that are not added are not considered part of the git diff, this is usually git behavior)."
Thank you very much for the quick answer. Still so confusing to me that so many things are configured client-side 😄