Answered
Is It Possible To Increase The Polling Interval For K8S Glue? Currently It Is 5 Seconds I Believe. Would Adding An Argument For It Help? Can Do A Pr If So

Is it possible to increase the polling interval for k8s glue? Currently it is 5 seconds I believe. Would adding an argument for it help? Can do a PR if so

  
  
Posted 2 years ago

Answers 17


And then comes back again

  
  
Posted 2 years ago

Like I said, it works, but goes into the error loop

  
  
Posted 2 years ago

`kubectl get pods -n {namespace} -o=JSON`
What are you getting when running the above on your cluster?
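If it's easier, something like this (a rough sketch; the `clearml` namespace is just a placeholder) shows the raw output and return code the glue would be parsing:

```python
import subprocess

# "clearml" is a placeholder - use the namespace the glue is configured with
result = subprocess.run(
    ["kubectl", "get", "pods", "-n", "clearml", "-o=json"],
    capture_output=True,
    text=True,
)
print("return code:", result.returncode)
print("stdout starts with:", repr(result.stdout[:200]))
print("stderr:", result.stderr)
```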

  
  
Posted 2 years ago

Planning to exec into the container and run it in a loop and see what happens

  
  
Posted 2 years ago

Let me know :)

  
  
Posted 2 years ago

Nope, that doesn’t seem to be it. Will debug a bit more.

  
  
Posted 2 years ago

The 5 seconds is a sleep between two consecutive polls when there are no jobs to process. Why would you want to increase it?
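For context, the idle loop is roughly this shape (a rough sketch, not the actual clearml-agent code; the names and the 5.0-second default are illustrative):

```python
import time

POLLING_INTERVAL_SEC = 5.0  # illustrative default, not an actual clearml-agent setting

def poll_queue(get_next_task, process_task):
    # Illustrative shape of the glue's main loop: the sleep only happens
    # when the queue is empty, so a busy queue is drained without waiting.
    while True:
        task = get_next_task()
        if task is None:
            # -> "No tasks in Queues, sleeping for 5.0 seconds"
            time.sleep(POLLING_INTERVAL_SEC)
            continue
        process_task(task)
```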

  
  
Posted 2 years ago

Good question 🙂

This is what I am seeing in the logs:

```
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
No tasks in queue 9154efd8a1314550b1c7882981720861
No tasks in Queues, sleeping for 5.0 seconds
K8S Glue pods monitor: Failed parsing kubectl output:

Ex: Expecting value: line 1 column 1 (char 0)
K8S Glue pods monitor: Failed parsing kubectl output:

Ex: Expecting value: line 1 column 1 (char 0)
K8S Glue pods monitor: Failed parsing kubectl output:
```
This pattern repeats after a minute or so: errors for a while, then normal output for a while. My guess is EKS is throttling. Need to see how I can get the correct error.
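Something along these lines should surface the real kubectl error instead of just the JSON parse message (a rough sketch; the namespace is a placeholder):

```python
import json
import subprocess
import time

namespace = "clearml"  # placeholder

# Run the same command the pods monitor runs; when parsing fails,
# dump return code and stderr so a throttling/auth error is not lost.
for _ in range(60):
    result = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o=json"],
        capture_output=True,
        text=True,
    )
    try:
        json.loads(result.stdout)
    except json.JSONDecodeError as err:
        print("parse failed:", err)
        print("return code:", result.returncode)
        print("stderr:", result.stderr.strip())
    time.sleep(5)
```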

  
  
Posted 2 years ago

Ex: Expecting value: line 1 column 1 (char 0)
K8S Glue pods monitor: Failed parsing kubectl output:

Run with --debug as the first parameter
Are you running the latest from the git repo?

  
  
Posted 2 years ago

I am using the clearml-agent version from PyPI

  
  
Posted 2 years ago

I saw that the debug param wasn’t adding anything additional for this?

  
  
Posted 2 years ago

Since it’s already logging this, debug wouldn’t add anything?

  
  
Posted 2 years ago

Yep, you are right

  
  
Posted 2 years ago

This is the thread checking the state of the running pods (and updating the Task status, so you have visibility into the state of the pod inside the cluster before it starts running)
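(For reference, a minimal sketch of what such a monitor thread does; the names, label, and interval here are illustrative, not the actual clearml-agent implementation:)

```python
import json
import subprocess
import threading
import time

def pods_monitor(namespace, update_task_status, interval=10.0):
    # Illustrative shape of the monitor thread: list the pods on a timer and
    # map each pod phase (Pending / Running / ...) to a task status update.
    while True:
        result = subprocess.run(
            ["kubectl", "get", "pods", "-n", namespace, "-o=json"],
            capture_output=True,
            text=True,
        )
        try:
            pods = json.loads(result.stdout).get("items", [])
        except json.JSONDecodeError:
            # A parse failure here is what produces the
            # "Failed parsing kubectl output" messages above
            pods = []
        for pod in pods:
            task_id = pod["metadata"].get("labels", {}).get("task-id")  # label name is illustrative
            phase = pod["status"].get("phase")
            if task_id:
                update_task_status(task_id, phase)
        time.sleep(interval)

# threading.Thread(target=pods_monitor, args=("clearml", print), daemon=True).start()
```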

  
  
Posted 2 years ago

No idea why it fails...

  
  
Posted 2 years ago

(no objection to adding an argument, but I just wonder what the value is)

  
  
Posted 2 years ago