did you take a look at my connect.sh script? I don't think it's the problem, since only the controller task is affected.
Is there some sort of culling procedure that kills tasks by any chance? the lack of logs makes me think it's something like that.
I can also try different agent versions.
N/A (still shows as running despite Abort being sent)
(the "magic" of the env detection is nice but man... it has its surprises)
yeah, locally it did run. I then ran another via the UI, spawned from the successful one; it showed cached steps and then refused to run the bottom one, disappearing again. No status message, no status reason. (not running... actually dead)
it's happening pretty reliably, but the logs are just not informative. it just stops midway
I have tried other queues, they're all running the same container.
so far the only thing reliable is pipe.start_locally()
ugh. again. it launched all these tasks and then just died. logs go silent.
enqueuing. pipe.start("default")
but I think it's picking up on my local clearml install instead of what I told it to use.
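for reference, this is roughly how I'm launching it in the two cases, heavily simplified (placeholder names, not the real pipeline):
```python
from clearml import PipelineController


def make_data():
    # stand-in for a real step function
    return "dataset-id"


pipe = PipelineController(
    name="dynamic-pipeline",   # placeholder
    project="backtests",       # placeholder
    version="1.0.0",
)
pipe.add_function_step(
    name="make_data",
    function=make_data,
    function_return=["dataset_id"],
)

# the only path that has been reliable: controller runs in the current process
pipe.start_locally(run_pipeline_steps_locally=False)

# the path that keeps dying mid-run: the controller task gets enqueued
# and picked up by an agent polling the "default" queue
# pipe.start("default")
```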
my tasks have this in them... what's the equivalent for pipeline controllers?
Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.
odd bc I thought I was controlling this... maybe I'm wrong and the env is mis-set.
yeah, it just shows what I see in the Console, but then immediately goes back to polling for more work (so... instead of running backtest, it exits, no completion message)
I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
None where I tried function-steps out instead, but it's a similar architecture for the pipeline (the point of the example was to show how to do a dynamic pipeline)
damn. I can't believe it. It disappeared again despite the task's clearml version being 1.15.1.
I'm going to try running the pipeline locally.
when I run the pipe locally, I'm using the same connect.sh script as the workers do, in order to poll the apiserver via the ssh tunnel.
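on the SDK side, the tunnel just means everything points at localhost-forwarded ports, roughly like this (a sketch, not my actual connect.sh; 8008/8080/8081 are the default server ports and the credentials are placeholders):
```python
from clearml import Task

# sketch only: the autossh tunnel forwards the server's ports to localhost,
# so the controller / workers reach the apiserver through it
# (default ClearML ports assumed; key/secret are placeholders)
Task.set_credentials(
    api_host="http://localhost:8008",    # apiserver through the tunnel
    web_host="http://localhost:8080",    # webserver through the tunnel
    files_host="http://localhost:8081",  # fileserver through the tunnel
    key="<access-key>",
    secret="<secret-key>",
)
```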
it happens consistently with this one task that really should be fully cached.
I disabled cache in the final step and it seems to run now.
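the change was basically this flag on that last step (a sketch, assuming the add_function_step API; names are placeholders):
```python
from clearml import PipelineController


def run_backtest():
    # stand-in for the real final step
    return "report"


pipe = PipelineController(name="dynamic-pipeline", project="backtests", version="1.0.0")
pipe.add_function_step(
    name="final_backtest",
    function=run_backtest,
    cache_executed_step=False,  # was True; with caching on, the controller kept disappearing
)
```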
are you running this locally or are you enqueueing the task (controller)?
Hi @<1689446563463565312:profile|SmallTurkey79> , when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?
damn, it just happened again... "queued" steps in the viz are actually complete. the pipeline task disappeared again without completing, and the logs cut off mid-stream.
trying to run the experiment that kept failing right now, watching the logs (they go by fast)... I'll try to spot anything anomalous
do you have any STATUS REASON under the INFO section of the controller task?
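e.g. something like this should show the same fields from the SDK (task id is a placeholder; assuming the standard Task API):
```python
from clearml import Task

# placeholder id of the controller task that vanished
controller = Task.get_task(task_id="<controller-task-id>")

print(controller.get_status())        # e.g. "aborted", "failed", "stopped"
# should mirror the STATUS REASON field shown under the task's INFO tab
print(controller.data.status_reason)
```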
but maybe here's a clue. after hanging like that for a while... it seems like the agent restarts (the container it runs in does not)
that's the final screenshot. it just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.
I think i've narrowed this down to the ssh connection approach.
regarding the container that runs the pipeline:
- when I made it stop using autossh tunnels and instead put it on the same machine as the clearml server + used docker network host mode, the problematic pipeline suddenly started completing.
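roughly what that amounts to on the controller side, as a sketch (assuming PipelineController's docker / docker_args arguments; the image is a placeholder):
```python
from clearml import PipelineController

# sketch of the setup that finally completes: the controller's container runs
# on the same host as the clearml server and uses host networking, so no
# autossh tunnel sits between it and the apiserver
pipe = PipelineController(
    name="dynamic-pipeline",
    project="backtests",
    version="1.0.0",
    docker="python:3.10",           # placeholder image
    docker_args="--network=host",   # host networking for the controller's container
)
```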
it's just so odd that the pipeline controller task is the only one with an issue. the modeling / data-creation tasks all seem to complete consistently just fine.
so yeah, my best guess now is that it's unrelated to the clearml version and instead comes down to the connectivity of the pipeline controller task to the api server.
when I run this pipeline controller locally (also using the same ssh tunnel approach for comms), the pipeline completes just fine. so it seems to be something specific about how it works inside the container vs. on my machine.
ah, a clue! it came right below that, but I guess out of order...
that id is the pipeline that failed
it's odd... I really don't see any tasks dying except the controller one