Answered
I Don't Exactly Know How To Ask For Help On This... Nor Have A Reproducible Minimal Example... I Downgraded Back To 1.15.1 From 1.16.2 And Have The Same Issue There. I Have A Pipeline That's Repeatedly Failing To Complete.

I don't exactly know how to ask for help on this... nor do I have a reproducible minimal example...
I downgraded back to 1.15.1 from 1.16.2 and have the same issue there.
I have a pipeline that's repeatedly failing to complete. It correctly marks things as cached, and then just doesn't execute the last step. The task stays "Running" forever, but disappears - the worker just has a process that dies. CPU/RAM aren't running out or anything like that at all. Prior runs of this pipeline worked just fine (less was cached then, too).

Has anyone seen this kind of "disappearing / zombie task" state before? It's very perplexing to me.
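
A minimal diagnostic sketch for this situation, using standard clearml SDK calls (the task ID is a placeholder, pulled from the pipeline page in the UI), to ask the server what it last heard from the stuck task:

```python
from clearml import Task

# the task ID below is a placeholder; grab the real one from the pipeline page in the UI
task = Task.get_task(task_id="<stuck_task_id>")
print(task.get_status())                   # backend status, e.g. "in_progress"
print(task.get_last_iteration())           # last iteration the task reported, if any
print(task.get_reported_console_output())  # last console lines the server received
```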

  
  
Posted one month ago

Answers 43


I will ask internally about this

  
  
Posted one month ago

would it be on the pipeline task itself then, since that's what's disappearing?
That's likely the case.

  
  
Posted one month ago

yeah, it just shows what I see in the Console, but then immediately goes back to polling for more work (so... instead of running the backtest, it exits with no completion message)

  
  
Posted one month ago

would it be on the pipeline task itself then, since that's what's disappearing?
I will do some experiment comparisons and see if there are package diffs. thanks for the tip.

  
  
Posted one month ago

that's the final screenshot. it just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.

  
  
Posted one month ago

the workers connect to the ClearML server via SSH tunnels, so they all talk to "localhost" despite being deployed in different places. Each task creates artifacts and metrics that are used downstream.
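
For reference, a sketch of what that "everything talks to localhost" setup looks like from the SDK side. The ports below are the ClearML server defaults and the keys are placeholders; the real values live in the connect.sh / clearml.conf setup, which isn't shown in this thread:

```python
from clearml import Task

# assumed tunnel ports (ClearML server defaults); the real connect.sh values aren't shown here
Task.set_credentials(
    api_host="http://localhost:8008",    # apiserver, reached through the SSH tunnel
    web_host="http://localhost:8080",    # web UI
    files_host="http://localhost:8081",  # fileserver for artifacts/models
    key="<access_key>",
    secret="<secret_key>",
)
```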

  
  
Posted one month ago

when I run the pipeline locally, I'm using the same connect.sh script as the workers in order to reach the apiserver via the SSH tunnel.

  
  
Posted one month ago

but maybe here's a clue. after hanging like that for a while... it seems like the agent restarts (the container it runs in does not)
(screenshot attached)

  
  
Posted one month ago

I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
[link] where I tried function-steps out instead; it's a similar architecture for the pipeline (the point of the example was to show how to do a dynamic pipeline)
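
In the same spirit, a rough sketch of a dynamic function-step pipeline (illustrative only: the names, queue, and input list are placeholders, not taken from that example):

```python
from clearml import PipelineController


def process_item(item: str):
    # stand-in for a cached, per-item step
    return {"item": item, "ok": True}


def finalize(items: list):
    # stand-in for the final step that depends on all dynamic steps
    return {"count": len(items)}


items = ["a", "b", "c"]  # in a real pipeline this list is discovered at runtime

pipe = PipelineController(name="dynamic-example", project="examples", version="1.0")
pipe.set_default_execution_queue("default")

step_names = []
for item in items:
    step_name = f"process_{item}"
    pipe.add_function_step(
        name=step_name,
        function=process_item,
        function_kwargs=dict(item=item),
        function_return=["result"],
        cache_executed_step=True,  # reruns reuse the previously executed task
    )
    step_names.append(step_name)

# the final step waits for every dynamically added step
pipe.add_function_step(
    name="finalize",
    function=finalize,
    function_kwargs=dict(items=items),
    parents=step_names,
)

pipe.start(queue="default")
```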

  
  
Posted one month ago

enqueuing, with pipe.start("default"), but I think it's picking up on my local clearml install instead of what I told it to use.

my tasks have this in them... what's the equivalent for pipeline controllers?
(screenshot attached)
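
For reference, a sketch of the usual knobs that decide where a pipeline controller and its steps run. Queue names are illustrative, and this is standard PipelineController usage rather than something confirmed as the fix in this thread:

```python
from clearml import PipelineController

pipe = PipelineController(name="example-pipeline", project="examples", version="1.0")

# queue used by the *steps* (unless a step overrides it)
pipe.set_default_execution_queue("default")

# Option 1: enqueue the controller task itself, to be picked up by an agent
pipe.start(queue="default")

# Option 2: run the controller in the current process; steps still go to agents
# pipe.start_locally()

# Option 3: run controller and steps in the current process (handy for debugging)
# pipe.start_locally(run_pipeline_steps_locally=True)
```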

  
  
Posted one month ago

the default queue is served by (containerized + custom entrypoint) venv workers (the agent services mode just wasn't working great for me, so I gave up on it)

  
  
Posted one month ago

Hi @SmallTurkey79, when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?

  
  
Posted one month ago

let me downgrade my install of clearml and try again.

  
  
Posted one month ago
2K Views
43 Answers