hoping this really is a 1.16.2 issue. fingers crossed. at this point more pipes are failing than not.
(the "magic" of the env detection is nice but man... it has its surprises)
the workers connect to the clearml server via ssh-tunnels, so they all talk to "localhost" despite being deployed in different places. each task creates artifacts and metrics that are used downstream
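roughly, the client/worker side just points at the tunnel endpoints, something like this (a simplified sketch, not my exact config: the ports are the ClearML defaults and the key/secret are placeholders):
```python
from clearml import Task

# sketch only: assumes the autossh tunnels forward the server's default ports to localhost
# (same effect as pointing api_server/web_server/files_server in clearml.conf at the tunnels)
Task.set_credentials(
    api_host="http://localhost:8008",    # api server through the tunnel
    web_host="http://localhost:8080",    # web server through the tunnel
    files_host="http://localhost:8081",  # fileserver through the tunnel
    key="<access_key>",
    secret="<secret_key>",
)
```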
I think I've narrowed this down to the ssh connection approach.
regarding the container that runs the pipeline:
- when I made it stop using autossh tunnels and instead put it on the same machine as the clearml server + used docker network host mode, suddenly the problematic pipeline started completing.
it's just so odd that the pipeline controller task is the only one with an issue. the modeling / data-creation tasks all seem to complete consistently just fine.
so yeah, best guess now is that it's unrelated to the clearml version but rather to the connectivity of the pipeline controller task to the api server.
when I run this pipeline controller locally (also using the same ssh tunnel approach for comms), the pipeline completes just fine. so it's something specific about how it's working inside the container vs on my machine, it seems.
but maybe here's a clue. after hanging like that for a while... it seems like the agent restarts (the container it runs in does not)
here's how I'm establishing worker-server (and client-server) comms fwiw
Hi @<1689446563463565312:profile|SmallTurkey79> , when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?
worker thinks it's in venv mode but is containerized.
apiserver is a docker compose stack.
I'll check the logs next time I see it.
currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. fingers crossed.
that's the final screenshot. it just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.
it happens consistently with this one task that really should be all cache.
I disabled cache in the final step and it seems to run now.
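(by "disabled cache" I mean something along these lines on the last step — a sketch with placeholder step/task names:)
```python
# sketch: caching turned off for the final step; names here are placeholders
pipe.add_step(
    name="backtest_summary",
    parents=["train_and_eval"],
    base_task_project="forecasting",
    base_task_name="backtest summary",
    cache_executed_step=False,  # this step had caching enabled before
)
```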
would it be on the pipeline task itself then, since that's what's disappearing?
that's likely the case
it's pretty reliably happening, but the logs are just not informative. it just stops midway
Hi @<1689446563463565312:profile|SmallTurkey79>!
> Prior runs of this pipeline worked just fine
What SDK version were you using for the prior runs? Does this still happen if you revert to that version?
Can you provide a script that imitates what you are doing?
In the pipeline you are running, are you creating new tasks/pipelines/datasets?
I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
that one tried function-steps out instead, but it's a similar architecture for the pipeline (the point of the example was to show how to do a dynamic pipeline)
do you have the logs of the agent that is supposed to run your pipeline? Maybe there is a clue there. I would also suggest trying to enqueue the pipeline to some other queue, or maybe even running the agent on your own machine if you don't already, and seeing what happens
ugh. again. it launched all these tasks and then just died. logs go silent.
I will do some experiment comparisons and see if there are package diffs. thanks for the tip.
enqueuing. pipe.start("default")
but I think it's picking up on my local clearml install instead of what I told it to use.
my tasks have this in them... what's the equivalent for pipeline controllers?
ah, a clue! it came right below that but I guess out of order...
that id is the pipeline that failed
it's odd... I really don't see any tasks dying except the controller one
default queue is served with (containerized + custom entrypoint) venv workers (agent services just wasn't working great for me, gave up)
yeah this problem seems to happen on 1.15.1 and 1.16.2 as well, prior runs were on the same version even. It just feels like it happens absolutely randomly (but often).
just happened again to me.
The pipeline is constructed from tasks; it basically does map/reduce: prepare data -> model training + evaluation -> backtesting performance summary.
It figures out how wide to go by parsing the date range supplied as input parameter. Been running stuff like this for months but only recently did things just start... vanishing like this.
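for context, the controller is built roughly along these lines — heavily simplified, and the project/task names plus the date-splitting helper here are made up:
```python
from clearml import PipelineController

def month_windows(start, end):
    # placeholder for the real date-range parsing; one entry per parallel window
    return [f"{start}..{end}"]

pipe = PipelineController(name="backtest-pipeline", project="forecasting", version="1.0")

date_range = "2023-01-01:2024-01-01"  # in the real run this comes in as an input parameter
windows = month_windows(*date_range.split(":"))

train_steps = []
for i, window in enumerate(windows):
    # "map": one data-prep + one train/eval step per window
    pipe.add_step(
        name=f"prepare_data_{i}",
        base_task_project="forecasting",
        base_task_name="prepare data",
        parameter_override={"General/window": window},
    )
    pipe.add_step(
        name=f"train_eval_{i}",
        parents=[f"prepare_data_{i}"],
        base_task_project="forecasting",
        base_task_name="train and evaluate",
    )
    train_steps.append(f"train_eval_{i}")

# "reduce": one summary step that waits on all the train/eval steps
pipe.add_step(
    name="backtest_summary",
    parents=train_steps,
    base_task_project="forecasting",
    base_task_name="backtest summary",
)

pipe.start("default")
```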
Would appreciate any help. Really need this to be more robust to make the case for company-wide adoption.
let me downgrade my install of clearml and try again.
I have tried other queues, they're all running the same container.
so far the only thing reliable is pipe.start_locally()
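i.e. running the controller in the local process instead of enqueuing it, something like this (the steps still go out to the agents unless told otherwise):
```python
# sketch: the controller runs in the current process; steps are still enqueued to agents
pipe.start_locally(run_pipeline_steps_locally=False)
```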