the worker thinks it's in venv mode, but it's containerized.
the apiserver is a docker compose stack
I'll check the logs next time I see it.
currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. fingers crossed.
yeah, it just shows what I see in the Console, but then immediately goes back to polling for more work (so instead of running the backtest, it exits with no completion message)
damn, I can't believe it. It disappeared again, even though the task's clearml version is 1.15.1.
I'm going to try running the pipeline locally.
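For reference, these are the two launch modes I'm toggling between. A minimal sketch; the pipeline/project names are placeholders:

```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="debug", version="1.0")
# ... add steps with pipe.add_step(...) / pipe.add_function_step(...) ...

# Enqueued mode: the controller task goes to a queue and an agent picks it up.
# pipe.start(queue="services")

# Local mode: the controller runs in this process; steps can still be
# dispatched to agents (or run locally too, if the flag is True).
pipe.start_locally(run_pipeline_steps_locally=False)
```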
but maybe here's a clue: after hanging like that for a while, it seems like the agent restarts (the container it runs in does not)
Hi SmallTurkey79, when this happens, do you see anything in the API server logs? How is the agent running: on top of K8s or bare metal? Docker mode or venv?
are you running this locally or are you enqueueing the task (controller)?
do you have any STATUS REASON under the INFO section of the controller task?
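If it's easier, you can also pull that field programmatically. A sketch; the task id is a placeholder, and to the best of my knowledge `data.status_reason` / `data.status_message` are the fields the UI's INFO tab surfaces:

```python
from clearml import Task

controller = Task.get_task(task_id="<controller-task-id>")
print(controller.get_status())          # e.g. "failed", "aborted", "stopped"
print(controller.data.status_reason)    # STATUS REASON shown in the UI
print(controller.data.status_message)   # STATUS MESSAGE, often more detail
```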
I think I've narrowed this down to the SSH connection approach.
regarding the container that runs the pipeline:
- when I stopped using autossh tunnels, put it on the same machine as the clearml server, and used docker network host mode, the problematic pipeline suddenly started completing.
it's just so odd that the pipeline controller task is the only one with an issue; the modeling / data-creation tasks all seem to complete consistently just fine.
so yeah, my best guess now is that it's unrelated to the clearml version and is instead about the connectivity of the pipeline controller task to the api server.
when I run this pipeline controller locally (also using the same SSH tunnel approach for comms), the pipeline completes just fine. so it's something specific about how it works inside the container vs. on my machine, it seems.
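To rule out basic reachability, I'm probing the apiserver from inside the controller's container. A quick sketch, assuming the tunnel exposes the apiserver on localhost:8008 (adjust to your setup); `debug.ping` is the apiserver's health-check endpoint:

```python
import requests

# ClearML apiserver health check, through the SSH tunnel.
resp = requests.get("http://localhost:8008/debug.ping", timeout=5)
print(resp.status_code, resp.text)
```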
Hi SmallTurkey79 ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.
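One quick way to check that is to dump the installed package versions at controller start-up and diff them between a run that completed and one that disappeared. A sketch, assuming Python 3.8+:

```python
import importlib.metadata

# Print every installed distribution and its version, sorted by name.
for dist in sorted(
    importlib.metadata.distributions(),
    key=lambda d: (d.metadata["Name"] or "").lower(),
):
    print(dist.metadata["Name"], dist.version)
```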
ah, a clue! it came right below that, but I guess out of order...
that id is the pipeline that failed
damn, it just happened again... "queued" steps in the viz are actually complete. the pipeline task disappeared again without completing, its logs cut off mid-stream.
the workers connect to the clearml server via SSH tunnels, so they all talk to "localhost" despite being deployed in different places. each task creates artifacts and metrics that are used downstream.
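The hand-off between steps looks roughly like this (a sketch with hypothetical names; the upstream task id would normally come from the pipeline wiring):

```python
from clearml import Task

# Upstream step: publish an artifact for downstream consumers.
task = Task.current_task()
task.upload_artifact(name="dataset", artifact_object="data.parquet")

# Downstream step: fetch the artifact from the upstream task.
upstream = Task.get_task(task_id="<upstream-task-id>")
local_path = upstream.artifacts["dataset"].get_local_copy()
```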