However, because of the `import carla` statement, carla is added to the task requirements and clearml-agent tries to install it, although it is meant to be provided at runtime.
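A workaround sketch I'm considering (assuming `Task.force_requirements_env_freeze` behaves as I think; the file and names are placeholders):

```python
from clearml import Task

# Sketch of a workaround: skip import analysis entirely and hand clearml
# an explicit requirements file that simply omits "carla". Must be called
# BEFORE Task.init; "requirements.txt" is a hypothetical file kept next
# to the script.
Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")

task = Task.init(project_name="carla-experiments", task_name="train")  # hypothetical names
```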
In the beginning my config file was not empty 😕
Thank you very much for the quick answer. Still so confusing to me that so many things are configured client side 😄
Ok. I just wanted to make sure I had configured my agent properly, and that I have to set it on all agents.
So deleting from the client (e.g. a dataset with clearml-data) actually works.
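E.g. via the SDK it would look something like this (a sketch; the dataset_id is a placeholder):

```python
from clearml import Dataset

# Minimal sketch: delete a dataset from the client side via the SDK,
# equivalent to `clearml-data delete`. The dataset_id is a placeholder.
Dataset.delete(dataset_id="0123456789abcdef0123456789abcdef")
```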
By host you mean the machine on which the agent is running? How does clearml-agent find the cuda_version?
That I understand. But I think (old) pip versions will sometimes fail to resolve a package that newer versions can; probably not the case the other way around.
Okay, no worries. I will check first. Thanks for helping!
Could be that the log was cleaned after the restart. Unfortunately, I restarted the server right away 😞 I'm gonna post the appropriate logs if it happens again.
Depends on how you start the task afaik. I think clearml-task uses requirements.txt by default; otherwise clearml will parse your files' imports for dependencies, or, if you changed it in clearml.conf, it will use your conda/pip environment to generate the requirements.
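E.g. a sketch of overriding the detection programmatically (package, version, and names are placeholders):

```python
from clearml import Task

# Sketch: explicitly pin a dependency before Task.init, overriding
# whatever the import analysis would have detected (placeholders below).
Task.add_requirements("torch", "1.10.0")
task = Task.init(project_name="examples", task_name="pinned-deps")
```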
When I select many experiments it will only delete some and show an error message that some could not be deleted. But if I only select a few, everything works fine.
Currently, my solution is to create an "agent-git" account; users give read access to this account, which the clearml-agent then uses to clone. However, I find access tokens to be a better solution. Unfortunately, clearml-agent removes the token from the git URL.
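Something like this in the agent's clearml.conf is what I use now (sketch; the account name and token are placeholders):

```
agent {
    # instead of embedding the token in the repository URL, give the
    # agent git credentials directly; both values are placeholders
    git_user: "agent-git"
    git_pass: "glpat-XXXXXXXXXXXX"
}
```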
I just tried the environment setup steps that clearml-agent performs, but locally and with my environment.yml instead of the one that clearml generates.
And how do I specify this in the output_uri? The default file server is selected by passing True. How would I specify to use the second one?
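For illustration, what I'm after (a sketch; the host, port, and names are hypothetical):

```python
from clearml import Task

# output_uri=True would use the default api.files_server from clearml.conf;
# an explicit URL should target the second file server (hypothetical host).
task = Task.init(
    project_name="examples",             # hypothetical
    task_name="second-fileserver-test",  # hypothetical
    output_uri="http://fileserver-datasets.example.com:8082",
)
```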
So is this a bug, or is something like this to be expected? There shouldn't be files that are not shown in the WebUI, right?
@ManiacalLizard2 Thank you, but afaik this only works locally and not if you run your task on a clearml-agent!
I can put anything there: s3://my_minio_instance:9000/bucket_that_does_not_exist and it will work.
I don't think so. It is related to the clearml-server issue I posted in the other thread. Essentially the clearml-server hangs; then I restart it with docker-compose down && docker-compose up -d and the experiments sometimes show as running, but on the clearml-agents I see that actually nothing is running, or they show as aborted.
I know that usually clearml-agents do not abort on server restart and just continue.
At least when you use docker containers the agent will reuse the existing python environment.
```
apiserver:
  command:
    - apiserver
  container_name: clearml-apiserver
  image: allegroai/clearml:latest
  restart: unless-stopped
  volumes:
    - /opt/clearml/logs:/var/log/clearml
    - /opt/clearml/config:/opt/clearml/config
    - /opt/clearml/data/fileserver:/mnt/fileserver
  depends_on:
    - redis
    - mongo
    - elasticsearch
    - fileserver
    - fileserver_datasets
  environment:
    CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
    CLEARML_...
```
Unfortunately, I do not know that. Must be before October 2021 at least. I know I asked here how to use the preinstalled version and AgitatedDove14 helped me get it working. But I cannot find the old thread 😕
Thank you for answering. So your suggestion would be similar to VexedCat68's first idea, right?
Yes, that looks alright. Similar to before. Local execution works.
Thanks for your help again. I will just use detect_with_conda_freeze: true. Seems like a perfect solution for me!
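For reference, the relevant clearml.conf section on the client (sketch):

```
sdk {
    development {
        # generate requirements from the active conda environment
        # instead of analyzing the script's imports
        detect_with_conda_freeze: true
    }
}
```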
Makes sense, but this means that we are not able to tell clearml-agent where to save on a per-task basis? I see the output_destination set correctly in the clearml web interface, but, as you say, clearml-agent always uses its api.fileserver?