
Hello.

I am running the ClearML server and agents in Kubernetes using the Helm charts. The ClearML server came preconfigured with two queues: 'default' and 'k8s_scheduler'.

I have created one more queue, 'services', and deployed one agent for the 'default' queue and one for the 'services' queue.

Anything I invoke with remote execution ends up in the 'k8s_scheduler' queue for some reason, even though I always specify the queue in which the execution should be placed.

Does anyone know why that might be?
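
For example, I enqueue tasks like this (a minimal sketch; the project, task, and script names are placeholders):

    # Enqueue a task explicitly on the 'default' queue via the clearml-task CLI.
    clearml-task --project my-project --name test-run \
      --script train.py --queue default

The SDK path behaves the same way for me, e.g. task.execute_remotely(queue_name='default').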

  
  
Posted one year ago

Answers 19


From the logs, it doesn't look like it.

  
  
Posted one year ago

JuicyFox94, since I have you: the connection issue might be caused by the Istio proxy. To disable the Istio sidecar injection I would have to add an annotation to the pod:
https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-agent/templates/agentk8sglue-configmap.yaml#L8

Unfortunately there does not seem to be any field for that in the values file.
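
The annotation in question is the standard Istio opt-out, sidecar.istio.io/inject: "false". A quick way to confirm the chart exposes no such field (the 'allegroai' repo alias is just the name used when adding the chart repo):

    # Dump the chart's default values and look for a pod-annotation field.
    helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
    helm show values allegroai/clearml-agent | grep -i annotation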

  
  
Posted one year ago

OK, I'll try to fix the connection issue. Thank you for the help πŸ™‚

  
  
Posted one year ago

At task completion, do you get the state Completed in the UI?

  
  
Posted one year ago

Actually it does not, because the pod logs show …

  
  
Posted one year ago

Can the k8s cluster access the Ubuntu archive?
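
A quick way to check, assuming kubectl access to the cluster (the pod name is arbitrary):

    # Spin up a throwaway Ubuntu pod and try to reach the Ubuntu archive;
    # the pod is deleted automatically when the command exits.
    kubectl run archive-test --rm -it --restart=Never --image=ubuntu -- \
      bash -c "apt-get update"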

  
  
Posted one year ago

So it seems the task starts on the queue I specify and then gets moved to the k8s_scheduler queue.

The experiment starts with the status "Running", and then, once moved to the k8s_scheduler queue, it stays in "Pending".

  
  
Posted one year ago

If I'm not wrong, you can simply label the namespace to keep Istio out of it.
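
Something like this, assuming the agents run in their own namespace (the namespace name is illustrative):

    # Disable Istio sidecar injection for every pod in this namespace.
    kubectl label namespace clearml-agents istio-injection=disabled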

  
  
Posted one year ago

I will try to fix that. But what is the purpose of the 'k8s_scheduler' queue?

  
  
Posted one year ago

When a task starts, do you see a clearml-id-* pod starting?
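
For example (the namespace is illustrative):

    # Watch for the per-task pods the agent spawns; their names start with clearml-id-.
    kubectl get pods -n clearml -w | grep clearml-id-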

  
  
Posted one year ago

OK, so they are executed as expected.

  
  
Posted one year ago

It works without the Istio proxy.

  
  
Posted one year ago

not totally sure tbh

  
  
Posted one year ago

yw πŸ™‚

  
  
Posted one year ago

It's a queue used by the agent just for internal scheduling purposes.

  
  
Posted one year ago

Yes, that is possible, but I do use Istio for the ClearML server components. I can move the agents to a separate namespace; I will try that.

  
  
Posted one year ago

But I think this behaviour will change in future releases.

  
  
Posted one year ago

Yes, the pods are starting.

  
  
Posted one year ago

Happy to hear that.

  
  
Posted one year ago