
Hello.

I am running the ClearML server and agents in k8s using the Helm charts. The ClearML server came preconfigured with two queues: 'default' and 'k8s_scheduler'.

I have created one more queue, 'services', and deployed one agent for the 'default' queue and one for the 'services' queue.

Anything I invoke with remote execution ends up in the 'k8s_scheduler' queue for some reason, even though I always specify the queue in which the execution should be placed.

Does anyone know why that might be?
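For reference, the agents are deployed with the clearml-agent Helm chart, where the queue each agent polls is set roughly like this (key names recalled from the chart, so please double-check against its values.yaml):

```yaml
# clearml-agent chart values snippet (sketch; verify the exact keys in the chart's values.yaml)
agentk8sglue:
  queue: "default"   # the ClearML queue this agent deployment polls for pending tasks
```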

  
  
Posted 2 years ago

Answers 19


If I'm not wrong, you can simply label the namespace to keep Istio from injecting sidecars there
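A minimal sketch of that approach, assuming the agents run in a dedicated namespace (the namespace name here is hypothetical); `istio-injection: disabled` is the standard label Istio checks before injecting sidecars:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: clearml-agents          # hypothetical namespace for the ClearML agents
  labels:
    istio-injection: disabled   # tells Istio not to inject sidecars into pods in this namespace
```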

  
  
Posted 2 years ago

happy to hear that

  
  
Posted 2 years ago

ok so they are executed as expected

  
  
Posted 2 years ago

yes the pods are starting

  
  
Posted 2 years ago

ok, i'll try to fix the connection issue. Thank you for the help 🙂

  
  
Posted 2 years ago

it doesn't look like it, judging from the logs

  
  
Posted 2 years ago

yw 🙂

  
  
Posted 2 years ago

at task completion, do you get the state Completed in the UI?

  
  
Posted 2 years ago

I will try to fix that. But what is the purpose of the 'k8s_scheduler' queue?

  
  
Posted 2 years ago

can the k8s cluster access the Ubuntu archive?
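One way to check this is a throwaway pod that runs `apt-get update` against the default Ubuntu mirrors; a sketch (pod name and image tag are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: archive-check              # arbitrary name for a one-off connectivity test
spec:
  restartPolicy: Never
  containers:
    - name: check
      image: ubuntu:22.04
      command: ["bash", "-c", "apt-get update && echo archive reachable"]
```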

  
  
Posted 2 years ago

So it seems it starts in the queue I specify and then gets moved to the k8s_scheduler queue.

The experiment starts with the status "Running" and then, once moved to the k8s_scheduler queue, it stays in "Pending".

  
  
Posted 2 years ago

not totally sure tbh

  
  
Posted 2 years ago

it's a queue used by the agent just for internal scheduling purposes

  
  
Posted 2 years ago

but I think this behaviour will change in future releases

  
  
Posted 2 years ago

when the task starts, do you see a clearml-id-* pod starting?

  
  
Posted 2 years ago

actually it does not, because the pod logs show .

  
  
Posted 2 years ago

JuicyFox94, since I have you: the connection issue might be caused by the Istio proxy. In order to disable the Istio sidecar injection I must add an annotation to the pod.
https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-agent/templates/agentk8sglue-configmap.yaml#L8

Unfortunately, there does not seem to be any field for that in the values file.
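For reference, this is the annotation Istio recognizes for opting a single pod out of sidecar injection; whether it can be set through the chart's values is exactly what seems to be missing:

```yaml
# pod template metadata (sketch); sidecar.istio.io/inject is Istio's standard per-pod opt-out annotation
metadata:
  annotations:
    sidecar.istio.io/inject: "false"
```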

  
  
Posted 2 years ago

works without the istio proxy

  
  
Posted 2 years ago

yes, that is possible, but I do use Istio for the ClearML server components. I can move the agents to a separate namespace; I will try that

  
  
Posted 2 years ago