do you have the logs of the agent that is supposed to run your pipeline? Maybe there is a clue there. I would also suggest trying to enqueue the pipeline to some other queue, and maybe even running the agent on your own machine (if you don't already) to see what happens
Here's how I'm establishing worker-server (and client-server) comms, fwiw:
the worker thinks it's in venv mode but is containerized.
the apiserver is a docker compose stack.
I'll check the logs next time I see it.
Currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. Fingers crossed.
When I run the pipeline locally, I'm using the same connect.sh script as the workers in order to poll the apiserver via the SSH tunnel.
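(For reference, a minimal sketch of what a connect.sh along these lines might look like — the user, host, and keepalive settings are placeholders, only the default ClearML ports are real:)

```bash
#!/usr/bin/env bash
# Hypothetical connect.sh: hold open a persistent SSH tunnel so the agent
# and SDK can reach the ClearML server's default ports through the tunnel:
#   8008 = apiserver, 8080 = webserver, 8081 = fileserver
# -M 0 disables autossh's monitor port; SSH keepalives detect dead tunnels.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -L 8008:localhost:8008 \
  -L 8080:localhost:8080 \
  -L 8081:localhost:8081 \
  user@clearml-server-host
```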
Hi @<1689446563463565312:profile|SmallTurkey79>!
> Prior runs of this pipeline worked just fine
What SDK version were you using for the prior runs? Does this still happen if you revert to that version?
Can you provide a script that imitates what you are doing?
In the pipeline you are running, are you creating new tasks/pipelines/datasets?
That's the final screenshot. It just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.
Did you take a look at my connect.sh script? I don't think it's the culprit, since only the controller task has the problem.
Is there some sort of culling procedure that kills tasks, by any chance? The lack of logs makes me think it's something like that.
I can also try different agent versions.
I have tried other queues; they're all running the same container.
So far the only reliable thing is pipe.start_locally().
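(For anyone following along, the distinction in play, roughly — the pipeline/project/queue names here are placeholders, not the actual setup:)

```python
from clearml import PipelineController

# Placeholder pipeline; the real one adds modeling / data-creation steps.
pipe = PipelineController(name="my-pipeline", project="debugging", version="1.0.0")
# pipe.add_function_step(...)

# The path that works: the controller logic runs in the current process.
# Steps are still enqueued to agents unless run_pipeline_steps_locally=True.
pipe.start_locally()

# The failing path: enqueue the controller itself so a remote agent
# (reached through the SSH tunnel) executes it:
# pipe.start(queue="services")
```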
I think I've narrowed this down to the SSH connection approach.
Regarding the container that runs the pipeline:
- when I made it stop using autossh tunnels, and instead put it on the same machine as the ClearML server and used docker host network mode, the problematic pipeline suddenly started completing.
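(A sketch of the working arrangement, assuming the controller's container joins the same compose stack as the server — the service name, image, and credentials handling are assumptions:)

```yaml
# Hypothetical compose service for the container running the pipeline
# controller, co-located with the ClearML server. With host networking
# it reaches the apiserver on localhost:8008 directly, no autossh hop.
services:
  agent:
    image: allegroai/clearml-agent-services:latest
    network_mode: host
    environment:
      CLEARML_API_HOST: http://localhost:8008
      CLEARML_WEB_HOST: http://localhost:8080
      CLEARML_FILES_HOST: http://localhost:8081
```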
It's just so odd that the pipeline controller task is the only one with an issue; the modeling / data-creation tasks all seem to complete consistently just fine.
So yeah, my best guess now is that it's unrelated to the ClearML version and is instead about the connectivity between the pipeline controller task and the apiserver.
When I run this pipeline controller locally (also using the same SSH tunnel approach for comms), the pipeline completes just fine. So it's something specific to how it works inside the container vs. on my machine, it seems.
Ugh, again. It launched all these tasks and then just died; the logs go silent.
are you running this locally or are you enqueueing the task (controller)?
It happens consistently with this one task that really should be entirely cache hits.
I disabled caching on the final step, and it seems to run now.
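(For the record, roughly where that switch lives — the step and function names are placeholders:)

```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="debugging", version="1.0.0")

def final_step_fn(data):
    # Placeholder for the real final-step logic.
    return data

# cache_executed_step=True lets ClearML reuse a previously executed task's
# outputs instead of re-running the step; False forces a fresh execution.
pipe.add_function_step(
    name="final_step",
    function=final_step_fn,
    function_kwargs=dict(data="${prior_step.data}"),
    cache_executed_step=False,  # was True; disabling it unblocked the run
)
```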