Oh, interesting!
So a pip version on a per-task basis makes sense ;D?
I am still trying to solve the add_requirements + importlib combo. If I use detect_with_freeze I cannot use add_requirements, and if I use automatic code analysis it will not find all packages because of importlib.
For now I have come to the conclusion that keeping a requirements.txt and having ClearML parse the requirements from there would be the most robust solution. Unfortunately, there seems to be no way to do this with Task.init.
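Something like this is what I had in mind (just a sketch: the package name and file path are placeholders, and it assumes a clearml version where add_requirements also accepts a requirements.txt path; add_requirements has to be called before Task.init):
```python
from clearml import Task

# Declare a package that code analysis misses because it is loaded via importlib
# (the package name is just an example).
Task.add_requirements("carla")
# Or, assuming the installed clearml version supports it, hand over a whole
# requirements file instead of relying on automatic detection.
Task.add_requirements("requirements.txt")

task = Task.init(project_name="examples", task_name="requirements-from-file")
```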
```
docker-compose ps
         Name                      Command                State                        Ports
clearml-agent-services   /usr/agent/entrypoint.sh        Restarting
clearml-apiserver        /opt/clearml/wrapper.sh ap ...  Up          0.0.0.0:8008->8008/tcp, 8080/tcp, 8081/tcp ...
```
btw: why is agent.package_manager an agent attribute? Imo it does not make sense, because conda can install pip packages, but pip cannot install conda packages, which can lead to install failures, right?
Wait, nvm. I just tried it again and now it worked.
Thank you for clearing that up 🙂
So I just tried again and it still does not work.
This is what is in .ssh on my clearml-agent:
-rw------- 1 tim tim 1,5K Apr 8 14:28 authorized_keys
-rw-rw-r-- 1 tim tim  208 Apr 29 11:15 config
-rw------- 1 tim tim  432 Apr 8 14:53 id_ed25519
-rw-r--r-- 1 tim tim  119 Apr 8 14:53 id_ed25519.pub
-rw------- 1 tim tim  432 Apr 29 11:16 id_gitlab
-rw-r--r-- 1 tim tim  119 Apr 29 11:25 id_gitlab.pub
-rw-rw-r-- 1 tim tim 3,1K Apr 29 11:33 known_hosts
I have a related question: I read here that 4GB is an HTTP limitation and ClearML will not chunk single files. I take from that that ClearML did not want to, or there was no need to, implement its own solution so far. But what about models that are larger than 4GB?
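For now the only workaround I can think of is splitting the file myself and uploading the parts as separate artifacts, roughly like this (just a sketch: CHUNK_SIZE, the paths and the part naming are made up, and the consumer would have to reassemble the parts):
```python
from pathlib import Path
from clearml import Task

CHUNK_SIZE = 2 * 1024 ** 3  # 2 GB per part, safely below the 4 GB HTTP limit


def upload_model_in_parts(task: Task, model_path: str):
    src = Path(model_path)
    with src.open("rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            part = src.parent / f"{src.name}.part{index}"
            part.write_bytes(chunk)
            # Each part is a normal artifact upload, so it stays under the limit.
            task.upload_artifact(name=part.name, artifact_object=str(part))
            index += 1
```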
Okay, I found something out: when I use the docker image ubuntu:22.04 it does not spin up a service agent and aborts the task. When I use python:latest, everything works fine!
One question: does ClearML resolve the CUDA version from the driver or from conda?
Nvm. I think I understood: when a file has never been added to the repository, it is not tracked.
I am going to try it again and send you the relevant part of the logs in a minute. Maybe I am interpreting something wrong.
Another example of what I would expect:
```python
### start_carla.py
from clearml import Task


def get_task():
    task = Task.init(project_name="examples", task_name="start-carla", task_type="application")
    # The experiment is not run here. It is only run when this is executed as
    # standalone or on a clearml-agent.
    return task


def run_experiment(task):
    ...


# This task can also be run as standalone or run by a clearml-agent
if __name__ == "__main__":
    task = get_task()
    run_experiment(task)
run_pi...
```
Exactly. I don't want people to circumvent the queue 🙂
Thank you. Seems like someone implemented a type check: Error: Dataset id=8d7355655830427f9243671c8cf0a6b0 is not of type Dataset :)
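I guess the error pops up when the id does not point to a dataset task. For reference, this is roughly how I would fetch one (the id here is a placeholder):
```python
from clearml import Dataset

# The id must belong to a task created by the clearml Dataset tooling;
# a regular experiment id triggers the "is not of type Dataset" error above.
ds = Dataset.get(dataset_id="<dataset id>")
local_path = ds.get_local_copy()
```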
Artifact Size: 74.62 MB
I will debug this myself a little more.
Maybe a related question: will there be some documentation about ClearML internals with the new documentation? ClearML seems to store stuff that's relevant to script execution outside of clearml.Task, if I am not mistaken. I would like to learn a little bit about how the code is structured and what the internal mechanisms are.
CostlyOstrich36 Actually no container exits, so I guess if it is because of OOM like SuccessfulKoala55 implies, then maybe a process inside the container gets killed and the container will hang? Is this possible?
SuccessfulKoala55 I did not observe elastic using much RAM (at least right after starting). Doesn't this line in the docker-compose control the RAM usage? ES_JAVA_OPTS: -Xms2g -Xmx2g -Dlog4j2.formatMsgNoLookups=true
These are the errors I get if I use file_servers without a bucket (s3://my_minio_instance:9000):
```
2022-11-16 17:13:28,852 - clearml.storage - ERROR - Failed creating storage object Reason: Missing key and secret for S3 storage access ( )
2022-11-16 17:13:28,853 - clearml.metrics - WARNING - Failed uploading to ('NoneType' object has no attribute 'upload_from_stream')
2022-11-16 17:13:28,854 - clearml.storage - ERROR - Failed creating storage object Reason: Missing key...
```
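For reference, this is roughly how I would point the uploads at a bucket instead of the bare endpoint (my_minio_instance:9000 is from above, my-bucket is a placeholder; the key and secret still have to be configured, e.g. under sdk.aws.s3 in clearml.conf, otherwise the "Missing key and secret" error persists):
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="minio-upload-test",
    # Bucket included in the URI; credentials for the MinIO endpoint are
    # expected to come from the clearml configuration.
    output_uri="s3://my_minio_instance:9000/my-bucket",
)
```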
I use fixed users!
It seems like the services-docker is always started with Ubuntu 18.04, even when I use
task.set_base_docker(
    "continuumio/miniconda:latest -v /opt/clearml/data/fileserver/:{}".format(file_server_mount)
)
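In case it matters, this is roughly the call I would expect to work; it assumes a recent clearml version where set_base_docker takes separate keyword arguments (the mount target is illustrative):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="services-docker-test")
# Assuming a recent clearml version; older ones only accept a single string
# like the one I used above.
task.set_base_docker(
    docker_image="continuumio/miniconda:latest",
    docker_arguments="-v /opt/clearml/data/fileserver/:/mnt/fileserver",
)
```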
Is this working in the latest version? clearml-agent falls back to /usr/bin/python3.8 no matter how I configure clearml.conf. I just want to make sure, so I can investigate what's wrong with my machine if it is working for you.