
I don't exactly know how to ask for help on this... nor do I have a reproducible minimal example...
I downgraded back to 1.15.1 from 1.16.2 and have the same issue there.
I have a pipeline that's repeatedly failing to complete. It correctly marks things as cached, and then just doesn't execute the last step. The task stays "Running" forever but effectively disappears: on the worker, the process just dies. CPU/RAM aren't running out or anything like that at all. Prior runs of this pipeline worked just fine (with fewer things cached then, too).

Has anyone seen this kind of "disappearing / zombie task" state before? It's very perplexing to me.

  
  
Posted one month ago

Answers 43


Hi @<1689446563463565312:profile|SmallTurkey79>, when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?

  
  
Posted one month ago

The worker thinks it's in venv mode but is containerized.
The apiserver is a docker compose stack.

I'll check the logs next time I see it.

Currently I'm rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. Fingers crossed.

  
  
Posted one month ago

Trying to run the experiment that kept failing right now, watching the logs (they go by fast)... will try to spot anything anomalous.

  
  
Posted one month ago

Nothing came up in the logs. All 200s.

  
  
Posted one month ago

It happens consistently with this one task that really should be all cache.
I disabled caching in the final step and it seems to run now.
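For reference, a minimal sketch of what that workaround looks like, assuming a PipelineController built from pre-existing tasks (the project and task names here are made up, not my real ones):

```python
from clearml import PipelineController

pipe = PipelineController(name="backtest-pipeline", project="demo", version="1.0")

pipe.add_step(
    name="prepare_data",
    base_task_project="demo",
    base_task_name="prepare data",
    cache_executed_step=True,    # earlier steps stay cached
)
pipe.add_step(
    name="final_summary",
    parents=["prepare_data"],
    base_task_project="demo",
    base_task_name="backtest summary",
    cache_executed_step=False,   # workaround: always re-execute the last step
)
pipe.start(queue="default")
```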

  
  
Posted one month ago

Damn, it just happened again... the "queued" steps in the viz are actually complete. The pipeline task disappeared again without completing, and the logs cut off mid-stream.

  
  
Posted one month ago

Hi @<1689446563463565312:profile|SmallTurkey79> !
"Prior runs of this pipeline worked just fine" - what SDK version were you using for the prior runs? Does this still happen if you revert to that version?
Can you provide a script that imitates what you are doing?
In the pipeline you are running, are you creating new tasks/pipelines/datasets?

  
  
Posted one month ago

Yeah, this problem seems to happen on 1.15.1 and 1.16.2 as well; prior runs were on the same version, even. It just feels like it happens absolutely randomly (but often).
It just happened again to me.

The pipeline is constructed from tasks; it basically does map/reduce: prepare data -> model training + evaluation -> backtesting performance summary.

It figures out how wide to fan out by parsing the date range supplied as an input parameter. I've been running stuff like this for months, but only recently did things just start... vanishing like this.
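To make the shape concrete, here is a rough sketch of that fan-out/fan-in pattern; all names, projects, and parameters below are illustrative assumptions, not the actual code:

```python
from datetime import date

from clearml import PipelineController

# In the real pipeline the range comes from an input parameter; hard-coded for the sketch.
DATE_RANGE = ("2024-01-01", "2024-06-30")

def months_between(start: str, end: str):
    """Yield (year, month) pairs covering the inclusive range."""
    s, e = date.fromisoformat(start), date.fromisoformat(end)
    y, m = s.year, s.month
    while (y, m) <= (e.year, e.month):
        yield y, m
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)

pipe = PipelineController(name="train-backtest", project="demo", version="1.0")

pipe.add_step(name="prepare_data", base_task_project="demo",
              base_task_name="prepare data", cache_executed_step=True)

# "map": one training/evaluation step per month in the requested range
train_steps = []
for y, m in months_between(*DATE_RANGE):
    step = f"train_{y}_{m:02d}"
    pipe.add_step(
        name=step,
        parents=["prepare_data"],
        base_task_project="demo",
        base_task_name="train+evaluate",
        parameter_override={"General/year": y, "General/month": m},
        cache_executed_step=True,
    )
    train_steps.append(step)

# "reduce": a single summary step that depends on every training step
pipe.add_step(name="backtest_summary", parents=train_steps,
              base_task_project="demo", base_task_name="backtest summary")

pipe.start(queue="default")
```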

Would appreciate any help. Really need this to be more robust to make the case for company-wide adoption.
image

  
  
Posted one month ago

I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
None - that's one where I tried function-steps out instead, but it's a similar architecture for the pipeline (the point of the example was to show how to do a dynamic pipeline).
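For anyone unfamiliar with the function-step style that example uses, a tiny illustrative sketch (the step functions and names are made up):

```python
from clearml import PipelineController

def prepare(n: int) -> list:
    # stand-in for the real data-preparation logic
    return list(range(n))

def summarize(values: list) -> int:
    # stand-in for the real reduce/summary logic
    return sum(values)

pipe = PipelineController(name="function-step-demo", project="demo", version="1.0")
pipe.add_function_step(name="prepare", function=prepare,
                       function_kwargs={"n": 10}, function_return=["values"],
                       cache_executed_step=True)
pipe.add_function_step(name="summarize", function=summarize,
                       function_kwargs={"values": "${prepare.values}"},
                       function_return=["total"])
pipe.start(queue="default")
```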

  
  
Posted one month ago

The workers connect to the clearml server via SSH tunnels, so they all talk to "localhost" despite being deployed in different places. Each task creates artifacts and metrics that are used downstream.
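The artifact/metric hand-off between steps looks roughly like this (minimal sketch with made-up names, not the actual tasks):

```python
from clearml import Task

# upstream step: publish an artifact and a scalar metric
up = Task.init(project_name="demo", task_name="prepare data")
up.upload_artifact(name="features", artifact_object={"rows": 1000})
up.get_logger().report_scalar(title="data", series="rows", value=1000, iteration=0)
up.close()

# downstream step: look up the upstream task and pull the artifact
down = Task.init(project_name="demo", task_name="train+evaluate")
source = Task.get_task(project_name="demo", task_name="prepare data")
features = source.artifacts["features"].get()
print(features)
```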

  
  
Posted one month ago

Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.

  
  
Posted one month ago

Would it be on the pipeline task itself then, since that's what's disappearing?
I will do some experiment comparisons and see if there are package diffs. Thanks for the tip.

  
  
Posted one month ago

None - here's how I'm establishing worker-server (and client-server) comms, FWIW.

  
  
Posted one month ago

"Would it be on the pipeline task itself then, since that's what's disappearing?" - yes, that is likely the case.

  
  
Posted one month ago

ugh. again. it launched all these tasks and then just died. logs go silent.
image
image
image

  
  
Posted one month ago

can you share the logs of the controller?

  
  
Posted one month ago

That's the final screenshot. It just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.

  
  
Posted one month ago

do you have any STATUS REASON under the INFO section of the controller task?

  
  
Posted one month ago

N/A (still shows as running despite Abort being sent)

  
  
Posted one month ago

image

  
  
Posted one month ago

clearml-server-1.15.1, clearml-1.16.2
  
  
Posted one month ago

Odd, because I thought I was controlling this... maybe I'm wrong and the env is mis-set.
image
image

  
  
Posted one month ago

are you running this locally or are you enqueueing the task (controller)?

  
  
Posted one month ago

Enqueuing: pipe.start("default"). But I think it's picking up my local clearml install instead of what I told it to use.

My tasks have this in them... what's the equivalent for pipeline controllers?
image
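I'm not sure this is the sanctioned way, but my best guess for the controller equivalent would be recording the requirement before the controller task is created, something like this (the assumption that the agent then installs it for the controller is unverified):

```python
from clearml import Task, PipelineController

# Assumption: requirements registered before the controller is constructed
# get attached to the controller task and installed by the agent that runs it.
Task.add_requirements("clearml", "1.15.1")

pipe = PipelineController(name="train-backtest", project="demo", version="1.0")
# ... add steps ...
pipe.start(queue="default")
```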

  
  
Posted one month ago

(the "magic" of the env detection is nice but man... it has its surprises)

  
  
Posted one month ago

let me downgrade my install of clearml and try again.

  
  
Posted one month ago

The default queue is served by (containerized, custom-entrypoint) venv workers (agent-services just wasn't working great for me, so I gave up on it).

  
  
Posted one month ago

hoping this really is a 1.16.2 issue. fingers crossed. at this point more pipes are failing than not.

  
  
Posted one month ago

Damn, I can't believe it. It disappeared again despite the task's clearml version being 1.15.1.
I'm going to try running the pipeline locally.
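Concretely (sketch, not my exact code), the difference between the two launch modes I'm switching between:

```python
from clearml import PipelineController

pipe = PipelineController(name="train-backtest", project="demo", version="1.0")
# ... add steps ...

# enqueued: the controller task itself runs on an agent serving the "default" queue
# pipe.start(queue="default")

# local: the controller logic runs in this process (steps are still enqueued to agents)
pipe.start_locally()
```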

  
  
Posted one month ago

Yeah, locally it did run. I then ran another one via the UI, spawned from the successful run; it showed cached steps and then refused to run the bottom one, disappearing again. No status message, no status reason. (Not running... actually dead.)
image

  
  
Posted one month ago