Answered

I just deployed ClearML into a k8s cluster using the ClearML helm package. When I ran a job, it gave this error in the ClearML web server (attached below). I sshed into the pod running the clearml-agent. Upon typing clearml-agent init, I realised the clearml.conf is empty, and I am also not able to configure it (it is write-protected). I am only able to configure the clearml.conf in the agentservices node.
So I am unable to run a clearml-agent within the k8s system; however, things are working if the clearml-agent is outside the k8s system (my laptop). Do let me know if there is some way to debug this?

  
  
Posted 3 years ago

Answers 34


Hi FriendlySquid61, the clearml-agent config got filled in from the values.yaml file. However, the agentservices one was empty, so I filled it in manually..

  
  
Posted 3 years ago

DeliciousBluewhale87

Upon ssh-ing into the folders in both the physical node (/opt/clearml/agent) and the pod (/root/.clearml), it seems there are some files there..

Hmm, that means it is working...
Do you see any *.conf files there? What do they contain? (Do they point to the correct clearml-server config?)
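For reference, one way to inspect those files from outside the pod is a quick kubectl check (the pod name here is a placeholder; substitute your own from `kubectl get pods`):

```shell
# List and print the agent's config inside the pod
# (pod name "clearml-agent-0" is an example)
kubectl exec clearml-agent-0 -- ls /root/.clearml
kubectl exec clearml-agent-0 -- cat /root/.clearml/clearml.conf

# On the host node, the same files should appear under the mounted folder:
ls /opt/clearml/agent
```

The conf file should contain api server entries pointing at your clearml-server; an empty file means the agent has nothing to hand to the task.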

  
  
Posted 3 years ago

Ohh okay, something seems to half-work in terms of configuration: the agent has enough configuration to register itself, but fails to pass it to the task.
Can you test with the latest agent RC:
0.17.2rc4

  
  
Posted 3 years ago

clearml-agent deployment file

What do you mean by that? Is that the helm chart of the agent?

  
  
Posted 3 years ago

I understand, but for some reason you are getting an error about the ClearML webserver. Try changing the value of agent.clearmlWebHost in the values.yaml file to the same value you filled in manually for the agent-services web host.
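As a sketch, the relevant values.yaml entries might look like this (the key names follow the clearml-server helm chart of that period and the host value is a placeholder; verify both against your chart):

```yaml
# Placeholder host -- use the same value you set manually for agent-services
agent:
  clearmlWebHost: "http://clearml-webserver.example.com"
agentservices:
  clearmlWebHost: "http://clearml-webserver.example.com"
```

Keeping both hosts identical rules out the mismatch as the source of the webserver error.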

  
  
Posted 3 years ago

Ah OK, the ---laptop:0 worker is gone now. But regarding our original question, I can see the agent (worker) in the clearml-server UI..

  
  
Posted 3 years ago

Hi DeliciousBluewhale87
My theory is that the clearml-agent is configured correctly (which means you see it in the clearml-server). The issue (I think) is that the Task itself (running inside the docker) is missing the configuration. The way the agent passes the configuration into the docker is by mapping a temporary configuration file into the docker itself. If the agent is running bare-metal, this is quite straightforward. If the agent is running on k8s (or basically inside a docker), then the agent needs:
1. Mapping of the docker socket
2. Mapping of a host folder into the agent's docker
(1) is used to actually execute docker run, while (2) is used to pass information (a.k.a. configuration files) from the Agent's docker into the Task's docker.
The CLEARML_AGENT_DOCKER_HOST_MOUNT environment variable is the one that tells the agent how it can pass these config files.
You can see it in the example here:
https://github.com/allegroai/clearml-server/blob/6434f1028e6e7fd2479b22fe553f7bca3f8a716f/docker/docker-compose.yml#L144
We also have to mount a folder, so that docker will be able to mount the config files into the Task's docker:
https://github.com/allegroai/clearml-server/blob/6434f1028e6e7fd2479b22fe553f7bca3f8a716f/docker/docker-compose.yml#L147
Notice that this is not actually a PVC, as there is no need for persistence; this is just a way to run a sibling docker.
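Putting the two pieces together, a docker-compose-style sketch of that setup might look like this (the paths mirror the linked file; the exact service name is an assumption):

```yaml
agent-services:
  environment:
    # Tells the agent how a path inside its container maps to the host,
    # so it can hand config files to sibling (task) containers
    CLEARML_AGENT_DOCKER_HOST_MOUNT: /opt/clearml/agent:/root/.clearml
  volumes:
    # (1) docker socket, used to actually execute `docker run`
    - /var/run/docker.sock:/var/run/docker.sock
    # (2) host folder shared with task containers for config files
    - /opt/clearml/agent:/root/.clearml
```

If either the env variable or the host-folder mount is missing, the agent can register itself but has no channel to pass its configuration into the Task's container.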

Make sense?

  
  
Posted 3 years ago

By the way, are you editing the values directly? Why not use the values file?

  
  
Posted 3 years ago

Something is weird.. It is showing workers that are not running anymore...

  
  
Posted 3 years ago

Yup, I used the values file for the agent. However, I manually edited it for the agentservices (as there was no example for it on GitHub).. Also, I am not sure what the CLEARML_HOST_IP should be (I left it empty).

  
  
Posted 3 years ago

This is from my k8s cluster. Using the ClearML helm package, I was able to set this up.

  
  
Posted 3 years ago

Can you try removing the port from the webhost?

  
  
Posted 3 years ago

Yeah, I restarted the deployment and sshed into the host machine also.. (Img below)

  
  
Posted 3 years ago

Nothing changed.. the clearml.conf is still as is (empty)

  
  
Posted 3 years ago

DeliciousBluewhale87 and is it working?

  
  
Posted 3 years ago

Ohhh yes, that's the problem

  
  
Posted 3 years ago

That's the agent-services one, can you check the agent's one?

  
  
Posted 3 years ago

For the clearml-agent deployment file, I updated this line:
python3 -m pip install clearml-agent==0.17.2rc4
and restarted the deployment. However, the conf file is still empty.

Should I also update the clearml-agent-services as well in the clearml-agent-services deployment file ?

  
  
Posted 3 years ago

image

  
  
Posted 3 years ago

image

  
  
Posted 3 years ago

image

  
  
Posted 3 years ago

Hi DeliciousBluewhale87
clearml-agent 0.17.2 was just released with the fix, let me know if it works

  
  
Posted 3 years ago

It might be that the worker was killed before it unregistered; you will see it there but the last update will be stuck (after 10 minutes it will be automatically removed)

  
  
Posted 3 years ago

Hi Martin, I just untemplated the chart:
helm template clearml-server-chart-0.17.0+1.tgz
I found these lines inside:
- name: CLEARML_AGENT_DOCKER_HOST_MOUNT
  value: /opt/clearml/agent:/root/.clearml
Upon ssh-ing into the folders in both the physical node (/opt/clearml/agent) and the pod (/root/.clearml), it seems there are some files there.. So the mounting worked, it seems.
I am not sure I get your answer. Should I change the values to something else?
Thanks

  
  
Posted 3 years ago

Yup, tried that.. Same error also

  
  
Posted 3 years ago

I did update it to clearml-agent 0.17.2, however the issue still persists for this long-lasting service pod.
However, this issue is gone when dynamically allocating pods using the Kubernetes glue:
k8s_glue_example.py
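For context, the glue script is typically run on a machine with cluster credentials; it polls a ClearML queue and launches one pod per dequeued task, so no long-lived agent pod (or its clearml.conf) is needed. A usage sketch (the queue name is an example):

```shell
# Run the Kubernetes glue; it polls the given ClearML queue and
# schedules each dequeued task as its own pod
# (queue name "k8s_scheduler" is an example)
python3 k8s_glue_example.py --queue k8s_scheduler
```

That sidesteps the empty-conf problem, since the glue injects the configuration into each task pod it creates.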

  
  
Posted 3 years ago

Hi AgitatedDove14, I also fiddled around by changing this line and restarting the deployment. But this just causes it to revert back to 0.17.2rc4 again.
python3 -m pip install clearml-agent==0.17.2rc3

  
  
Posted 3 years ago

Is the agent itself registered on the clearml-server (i.e., can you see it in the UI)?

  
  
Posted 3 years ago

I just changed the yaml file of clearml-agent to get it to start with the above line.
python3 -m pip install clearml-agent==0.17.2rc4

  
  
Posted 3 years ago

image

  
  
Posted 3 years ago
537 Views
34 Answers
3 years ago
6 days ago