Answered

I don't exactly know how to ask for help on this... nor do I have a reproducible minimal example...
I downgraded back to 1.15.1 from 1.16.2 and have the same issue there.
I have a pipeline that's repeatedly failing to complete. It correctly marks things as cached, and then just doesn't execute the last step. The task stays "Running" forever, but then disappears - the worker just has a process that dies. CPU/RAM aren't running out or anything like that. Prior runs of this pipeline worked just fine (less was cached then, too).

Has anyone seen this kind of "disappearing / zombie task" state before? It's very perplexing to me.
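
For context, the pipeline is shaped roughly like this (a simplified sketch with illustrative names, assuming the standard PipelineController function-step API - not the actual code):

    from clearml import PipelineController

    def prepare_data():
        return 100  # placeholder work

    def backtest(rows):
        return rows * 2  # placeholder work

    pipe = PipelineController(name="backtest-pipe", project="demo", version="1.0")
    pipe.add_function_step(
        name="prepare_data",
        function=prepare_data,
        function_return=["rows"],
        cache_executed_step=True,
    )
    pipe.add_function_step(
        name="backtest",
        function=backtest,
        function_kwargs={"rows": "${prepare_data.rows}"},
        parents=["prepare_data"],
        cache_executed_step=True,  # steps like this get marked cached, then the run hangs
    )
    pipe.start(queue="default")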

  
  
Posted one month ago

Answers 43


When I run the pipe locally, I'm using the same connect.sh script as the workers do in order to poll the apiserver via the SSH tunnel.

  
  
Posted one month ago

Yeah, it just shows what I see in the Console, but then it immediately goes back to polling for more work (so... instead of running backtest, it exits - no completion message).

  
  
Posted one month ago

Damn, I can't believe it. It disappeared again despite the task's clearml version being 1.15.1.
I'm going to try running the pipeline locally.

  
  
Posted one month ago

do you have any STATUS REASON under the INFO section of the controller task?

  
  
Posted one month ago

Trying to run the experiment that kept failing right now, watching the logs (they go by fast)... will try to spot anything anomalous.

  
  
Posted one month ago

(the "magic" of the env detection is nice but man... it has its surprises)

  
  
Posted one month ago

can you share the logs of the controller?

  
  
Posted one month ago

I will ask internally about this

  
  
Posted one month ago

Did you take a look at my connect.sh script? I don't think it's the problem, since only the controller task is affected.

Is there some sort of culling procedure that kills tasks, by any chance? The lack of logs makes me think it's something like that.

I can also try different agent versions.

  
  
Posted one month ago

It happens consistently with this one task that really should be all cache hits.
I disabled the cache on the final step and it seems to run now.
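
(Concretely, the change was just flipping the caching flag on that one step - a sketch, assuming the same add_function_step call as in the pipeline sketch above:)

    def final_step():
        pass  # placeholder for the real last step

    pipe.add_function_step(
        name="final_step",
        function=final_step,
        cache_executed_step=False,  # was True; with caching on, the controller hung here
    )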

  
  
Posted one month ago

N/A (still shows as running despite Abort being sent)

  
  
Posted one month ago

let me downgrade my install of clearml and try again.

  
  
Posted one month ago

Ugh, again. It launched all these tasks and then just died; the logs go silent.
(screenshots)

  
  
Posted one month ago

Enqueuing with pipe.start("default"), but I think it's picking up my local clearml install instead of what I told it to use.

My tasks have this in them (see screenshot)... what's the equivalent for pipeline controllers?
(screenshot)
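
(For regular tasks, one way to pin the package is roughly this - a sketch using Task.add_requirements, which has to run before Task.init; the version pin is illustrative and may differ from what the screenshot shows:)

    from clearml import Task

    # Queue the pin before Task.init so it lands in the task's installed packages.
    Task.add_requirements("clearml", "1.15.1")  # illustrative pin
    task = Task.init(project_name="demo", task_name="step")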

  
  
Posted one month ago

(screenshot)

  
  
Posted one month ago

I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
(link) - in that one I tried function-steps instead, but it's a similar architecture for the pipeline (the point of the example was to show how to build a dynamic pipeline).
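
(The dynamic part looks roughly like this - an illustrative sketch, not the linked script: steps are added in a loop from config instead of being hard-coded.)

    from clearml import PipelineController

    def run_backtest(symbol):
        return f"backtested {symbol}"  # placeholder work

    pipe = PipelineController(name="dynamic-pipe", project="demo", version="1.0")
    for symbol in ["AAPL", "MSFT", "GOOG"]:  # placeholder config driving the steps
        pipe.add_function_step(
            name=f"backtest_{symbol}",
            function=run_backtest,
            function_kwargs={"symbol": symbol},
            cache_executed_step=True,
        )
    pipe.start(queue="default")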

  
  
Posted one month ago

Do you have the logs of the agent that is supposed to run your pipeline? Maybe there is a clue there. I would also suggest trying to enqueue the pipeline to some other queue, and maybe even running the agent on your own machine (if you don't already) to see what happens.
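
(If it helps: the agent can be run locally in the foreground with something like clearml-agent daemon --queue default --foreground, so all of its logs stay in your terminal - exact flags may vary by agent version.)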

  
  
Posted one month ago

But maybe here's a clue: after hanging like that for a while... it seems like the agent restarts (the container it runs in does not).
(screenshot)

  
  
Posted one month ago

It's happening pretty reliably, but the logs are just not informative - they just stop midway.
(screenshot)

  
  
Posted one month ago

The default queue is served by (containerized + custom entrypoint) venv workers (agent-services just wasn't working well for me, so I gave up on it).

  
  
Posted one month ago

Hoping this really is a 1.16.2 issue, fingers crossed. At this point more pipes are failing than not.

  
  
Posted one month ago

Damn, it just happened again... the "queued" steps in the viz are actually complete. The pipeline task disappeared again without completing, with the logs cut off mid-stream.

  
  
Posted one month ago

The worker thinks it's in venv mode but is containerized.
The apiserver is a docker compose stack.

I'll check the logs next time I see it.

Currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. Fingers crossed.
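
(Probably via something like docker compose logs --tail=200 apiserver for the compose stack - assuming the service is named apiserver.)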

  
  
Posted one month ago

The workers connect to the clearml server via SSH tunnels, so they all talk to "localhost" despite being deployed in different places. Each task creates artifacts and metrics that are used downstream.
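
(Conceptually, every worker/client ends up pointed at forwarded localhost endpoints - a sketch using Task.set_credentials with the default clearml-server ports; the key/secret are placeholders:)

    from clearml import Task

    # All hosts are local tunnel endpoints forwarded to the real server;
    # ports are the clearml-server defaults, credentials are placeholders.
    Task.set_credentials(
        api_host="http://localhost:8008",
        web_host="http://localhost:8080",
        files_host="http://localhost:8081",
        key="WORKER_KEY",
        secret="WORKER_SECRET",
    )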

  
  
Posted one month ago

Hi @<1689446563463565312:profile|SmallTurkey79> , when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?

  
  
Posted one month ago

(link) - here's how I'm establishing worker-server (and client-server) comms, FWIW.

  
  
Posted one month ago

clearml-server-1.15.1, clearml-1.16.2
  
  
Posted one month ago

Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.

  
  
Posted one month ago

clearml_agent v1.8.1
  
  
Posted one month ago

thank you

  
  
Posted one month ago