Answered
I don't exactly know how to ask for help on this... nor have a reproducible minimal example... I have a pipeline that's repeatedly failing to complete.

I don't exactly know how to ask for help on this... nor do I have a reproducible minimal example...
I downgraded back to 1.15.1 from 1.16.2 and have the same issue there.
I have a pipeline that's repeatedly failing to complete. It correctly marks things as cached, and then just doesn't execute the last step. The task stays "Running" forever, but disappears - the worker just has a process that dies. CPU/RAM aren't running out or anything like that. Prior runs of this pipeline worked just fine (with fewer steps cached then, too).

Has anyone seen this kind of "disappearing / zombie task" state before? It's very perplexing to me.

  
  
Posted 5 months ago

43 Answers


clearml-server-1.15.1, clearml-1.16.2
  
  
Posted 5 months ago

Odd, because I thought I was controlling this... maybe I'm wrong and the env is mis-set.
[screenshots]

  
  
Posted 5 months ago

When I run the pipe locally, I'm using the same connect.sh script as the workers do in order to poll the apiserver via the SSH tunnel.

  
  
Posted 5 months ago

Yeah, locally it did run. I then ran another via the UI, spawned from the successful one; it showed cached steps and then refused to run the bottom one, disappearing again. No status message, no status reason. (Not running... actually dead.)
[screenshot]
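A quick way to check what the backend actually recorded for the stuck controller (a minimal sketch; the task ID is a placeholder):

```python
# Minimal sketch (task ID is a placeholder): pull the status fields the UI
# shows as empty directly from the backend task object.
from clearml import Task

task = Task.get_task(task_id="<controller-task-id>")
print(task.get_status())          # still "in_progress" even though the process is gone
print(task.data.status_message)   # empty in this case
print(task.data.status_reason)    # empty in this case
print(task.data.last_update)      # when the worker last reported in
```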

  
  
Posted 5 months ago

The default queue is served by (containerized + custom entrypoint) venv workers (agent-services just wasn't working well for me, so I gave up on it).

  
  
Posted 5 months ago

[screenshot]

  
  
Posted 5 months ago

Hoping this really is a 1.16.2 issue. Fingers crossed. At this point more pipes are failing than not.

  
  
Posted 5 months ago

(the "magic" of the env detection is nice but man... it has its surprises)

  
  
Posted 5 months ago

That's the final screenshot. It just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.

  
  
Posted 5 months ago

Trying to run the experiment that kept failing right now, watching the logs (they go by fast)... will try to spot anything anomalous.

  
  
Posted 5 months ago

Damn. I can't believe it. It disappeared again, despite the task's clearml version being 1.15.1.
I'm going to try running the pipeline locally.

  
  
Posted 5 months ago

Hi @<1689446563463565312:profile|SmallTurkey79> , when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?

  
  
Posted 5 months ago

Damn, it just happened again... "queued" steps in the viz are actually complete. The pipeline task disappeared again without completing, logs cut off mid-stream.

  
  
Posted 5 months ago

Nothing came up in the logs. All 200s.

  
  
Posted 5 months ago

The worker thinks it's in venv mode but is containerized.
The apiserver is a docker compose stack.

I'll check the logs next time I see it.

Currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. Fingers crossed.

  
  
Posted 5 months ago

It happens consistently with this one task that really should be all cache.
I disabled caching on the final step and it seems to run now.
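For reference, the per-step toggle is roughly this (a sketch; names are placeholders for the real steps):

```python
# Sketch: disable caching for just the final step (all names are
# placeholders); every other step keeps cache_executed_step=True.
pipe.add_step(
    name="backtest_summary",
    base_task_project="my_project",
    base_task_name="backtest_summary",
    parents=["train_and_eval"],
    cache_executed_step=False,
)
```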

  
  
Posted 5 months ago

But maybe here's a clue: after hanging like that for a while... it seems like the agent restarts (the container it runs in does not).
[screenshot]

  
  
Posted 5 months ago

It's happening pretty reliably, but the logs are just not informative. They just stop midway.
[screenshot]

  
  
Posted 5 months ago

I think I've narrowed this down to the SSH connection approach.

Regarding the container that runs the pipeline:

  • When I made it stop using autossh tunnels and instead put it on the same machine as the clearml server + used docker network host mode, the problematic pipeline suddenly started completing.
    It's just so odd that the pipeline controller task is the only one with an issue. The modeling / data-creation tasks really all seem to complete consistently just fine.

So yeah, my best guess now is that it's unrelated to the clearml version and is instead about the connectivity of the pipeline controller task to the API server.

When I run this pipeline controller locally (also using the same SSH tunnel approach for comms), the pipeline completes just fine. So it seems to be something specific about how it works inside the container vs. on my machine.
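A cheap way to probe that connectivity from inside the controller's container (a minimal sketch, assuming the stock APIClient; any authenticated call would do):

```python
# Minimal connectivity probe: a cheap authenticated call exercises the full
# round trip to the apiserver through whatever tunnel / docker network is
# in use, and fails loudly if the connection drops.
from clearml.backend_api.session.client import APIClient

client = APIClient()
print([w.id for w in client.workers.get_all()])
```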

  
  
Posted 5 months ago

Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.

  
  
Posted 5 months ago

Let me downgrade my install of clearml and try again.

  
  
Posted 5 months ago

Ugh. Again. It launched all these tasks and then just died. The logs go silent.
[screenshots]

  
  
Posted 5 months ago

Yeah, this problem happens on 1.15.1 as well as 1.16.2; prior runs were even on the same version. It feels like it happens absolutely randomly (but often).
It just happened again to me.

The pipeline is constructed from tasks; it basically does map/reduce: prepare data -> model training + evaluation -> backtesting performance summary.

It figures out how wide to fan out by parsing the date range supplied as an input parameter (roughly the shape sketched below). I've been running stuff like this for months, but only recently did things just start... vanishing like this.

Would appreciate any help. I really need this to be more robust to make the case for company-wide adoption.
[screenshot]
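Rough shape of the pipeline described above (a sketch with placeholder names and a hardcoded window list; the real code derives the windows from the date range):

```python
# Sketch of the map/reduce shape (all names are placeholders).
from clearml import PipelineController

pipe = PipelineController(name="backtest_pipeline", project="my_project", version="1.0")
pipe.add_parameter(name="date_range", default="2024-01-01:2024-06-30")

pipe.add_step(
    name="prepare_data",
    base_task_project="my_project",
    base_task_name="prepare_data",
)

windows = ["2024-01", "2024-02", "2024-03"]  # really parsed from date_range
train_steps = []
for w in windows:
    step_name = f"train_eval_{w}"
    pipe.add_step(
        name=step_name,
        parents=["prepare_data"],
        base_task_project="my_project",
        base_task_name="train_and_eval",
        parameter_override={"General/window": w},  # map: one clone per window
    )
    train_steps.append(step_name)

pipe.add_step(
    name="backtest_summary",
    parents=train_steps,  # reduce: the summary waits on every window
    base_task_project="my_project",
    base_task_name="backtest_summary",
)

pipe.start(queue="default")
```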

  
  
Posted 5 months ago

Are you running this locally, or are you enqueueing the task (controller)?

  
  
Posted 5 months ago

Ah, a clue! It came right below that, but I guess out of order...
That ID is the pipeline that failed.
[screenshot]

  
  
Posted 5 months ago

It's odd... I really don't see any tasks dying except the controller one.

  
  
Posted 5 months ago

Enqueuing: pipe.start("default"). But I think it's picking up on my local clearml install instead of what I told it to use.

My tasks have this in them... what's the equivalent for pipeline controllers?
[screenshot]
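If that screenshot is the usual Task.add_requirements pin (an assumption on my part), the same call should apply to the controller as long as it runs before the PipelineController is constructed - a sketch, unverified:

```python
# Sketch (unverified assumption): pin the controller's clearml version the
# same way the step tasks do, before the controller object is created.
from clearml import PipelineController, Task

Task.add_requirements("clearml", "1.15.1")  # must run before PipelineController(...)
pipe = PipelineController(name="backtest_pipeline", project="my_project", version="1.0")
# ... add steps ...
pipe.start(queue="default")
```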

  
  
Posted 5 months ago

N/A (still shows as running despite Abort being sent)

  
  
Posted 5 months ago

I have tried other queues; they're all running the same container.
So far the only reliable thing is pipe.start_locally() (see the sketch below).
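i.e. keeping the controller process on my machine while the steps still go through the queue (a minimal sketch):

```python
# Sketch: run the controller locally but keep executing the steps on the
# remote workers (run_pipeline_steps_locally=False is the default).
pipe.start_locally(run_pipeline_steps_locally=False)
```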

  
  
Posted 5 months ago

thank you

  
  
Posted 5 months ago