Answered

I don't exactly know how to ask for help on this, nor do I have a reproducible minimal example.
I downgraded back to 1.15.1 from 1.16.2 and have the same issue there.
I have a pipeline that's repeatedly failing to complete. It correctly marks things as cached, and then just doesn't execute the last step. The task stays "Running" forever in the UI, but effectively disappears: the worker's process just dies. CPU/RAM aren't running out or anything like that. Prior runs of this pipeline worked just fine (with less cached then, too).

Has anyone seen this kind of "disappearing / zombie task" state before? It's very perplexing to me.

  
  
Posted one year ago

Answers 43


Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into the other dependencies you are using: maybe one of them got upgraded, and the upgrade now triggers this behaviour in clearml.
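For example, one quick way to rule that out is to pin exact versions before Task.init, so a remote run can't silently pick up newer releases. A minimal sketch, assuming the standard Task.add_requirements API (the project, package names, and versions below are placeholders):

```python
# Sketch: pin suspect dependencies so the agent installs exact versions.
# Package names and versions are placeholders, not taken from this thread.
from clearml import Task

# Must be called before Task.init()
Task.add_requirements("clearml", "==1.15.1")
Task.add_requirements("some_suspect_package", "==2.3.4")

task = Task.init(project_name="my_project", task_name="pinned-deps-check")
```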

  
  
Posted one year ago

None
Here's how I'm establishing worker-server (and client-server) comms, FWIW.
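For comparison, a minimal sketch of doing that wiring programmatically, assuming the standard Task.set_credentials API; every host and key below is a placeholder:

```python
# Sketch: point a client/worker at the server without editing clearml.conf.
# All hosts and keys here are placeholders.
from clearml import Task

Task.set_credentials(
    api_host="https://api.clearml.example.com",
    web_host="https://app.clearml.example.com",
    files_host="https://files.clearml.example.com",
    key="<access_key>",
    secret="<secret_key>",
)
```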

  
  
Posted one year ago

Did you take a look at my connect.sh script? I don't think that's the problem, since only the controller task is affected.

Is there some sort of culling procedure that kills tasks, by any chance? The lack of logs makes me think it's something like that.

I can also try different agent versions.

  
  
Posted one year ago

Can you share the logs of the controller?

  
  
Posted one year ago

N/A (it still shows as running despite an abort being sent).
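In case it's useful, one hedged way to force a zombie task out of "Running" server-side, assuming Task.mark_stopped accepts a force flag:

```python
# Sketch: force the stuck controller out of "Running" from any machine.
# The task ID is a placeholder; force=True overriding the state check is an assumption.
from clearml import Task

stuck = Task.get_task(task_id="<controller-task-id>")
stuck.mark_stopped(force=True, status_message="clearing zombie state by hand")
```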

  
  
Posted one year ago

It's happening pretty reliably, but the logs are just not informative: they just stop midway.
(screenshot attached)

  
  
Posted one year ago

Are you running this locally, or are you enqueueing the task (controller)?

  
  
Posted one year ago

I have tried other queues; they're all running the same container.
So far the only thing that works reliably is pipe.start_locally().
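For context, a minimal sketch of the two launch modes, using the documented PipelineController API (the project, pipeline, and step names are placeholders):

```python
# Sketch: the two ways the controller can be launched; all names are placeholders.
from clearml import PipelineController

def make_data():
    # trivial stand-in for a real pipeline step
    return list(range(10))

pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0")
pipe.add_function_step(name="make_data", function=make_data, function_return=["data"])

# The mode that keeps working: run the controller (and steps) in the local process
pipe.start_locally(run_pipeline_steps_locally=True)

# The mode that keeps dying: enqueue the controller for an agent to pick up
# pipe.start(queue="services")
```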

  
  
Posted one year ago

Trying to run the experiment that kept failing right now, watching the logs (they go by fast)... will try to spot anything anomalous.

  
  
Posted one year ago

Do you have any STATUS REASON under the INFO section of the controller task?
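The same field can also be polled programmatically; a sketch assuming the status reason is exposed on task.data (the task ID is a placeholder):

```python
# Sketch: poll the controller's status and status reason until it leaves "Running".
import time
from clearml import Task

controller = Task.get_task(task_id="<controller-task-id>")
while controller.get_status() in ("queued", "in_progress"):
    print(controller.get_status(), getattr(controller.data, "status_reason", ""))
    time.sleep(30)
    controller.reload()
print("final:", controller.get_status(), getattr(controller.data, "status_reason", ""))
```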

  
  
Posted one year ago

Damn, I can't believe it. It disappeared again, despite the task's clearml version being 1.15.1.
I'm going to try running the pipeline locally.

  
  
Posted one year ago

Would it be on the pipeline task itself then, since that's what's disappearing?
I will do some experiment comparisons and see if there are package diffs. Thanks for the tip.
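One hedged way to do that comparison, assuming each task's recorded pip packages live under task.data.script.requirements (the task IDs are placeholders):

```python
# Sketch: diff the recorded pip requirements of a good run against a failing one.
# Task IDs are placeholders; where the requirements live is an assumption.
import difflib
from clearml import Task

def pip_requirements(task_id: str) -> list:
    task = Task.get_task(task_id=task_id)
    reqs = getattr(task.data.script, "requirements", None) or {}
    return (reqs.get("pip") or "").splitlines()

good = pip_requirements("<good-run-task-id>")
bad = pip_requirements("<failing-run-task-id>")
print("\n".join(difflib.unified_diff(good, bad, "good_run", "failing_run")))
```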

  
  
Posted one year ago

Thank you!

  
  
Posted one year ago