when I run the pipe locally, I'm using the same connect.sh script as the workers do in order to poll the apiserver via the ssh tunnel.
yeah, it just shows what I see in the Console, but then immediately goes back to polling for more work (so instead of running the backtest, it exits with no completion message)
damn. I can't believe it. It disappeared again, despite the task's clearml version being pinned to 1.15.1.
I'm going to try running the pipeline locally.
do you have any STATUS REASON under the INFO section of the controller task?
trying to run the experiment that kept failing right now, watching the logs (they go by fast)... will try to spot anything anomalous
(the "magic" of the env detection is nice but man... it has its surprises)
did you take a look at my connect.sh script? I don't think it's the problem, since only the controller task is affected.
Is there some sort of culling procedure that kills tasks, by any chance? The lack of logs makes me think it's something like that.
I can also try different agent versions.
it happens consistently with this one task, which really should be fully cached.
I disabled cache in the final step and it seems to run now.
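for reference, roughly the change (a minimal sketch assuming the add_function_step-style API; the step and function names here are made up, the relevant bit is cache_executed_step):
```
from clearml.automation import PipelineController

pipe = PipelineController(name="my-pipeline", project="example", version="1.0")

def run_backtest(model_id: str) -> dict:
    # placeholder for the real final-step logic
    return {"model_id": model_id, "status": "done"}

pipe.add_function_step(
    name="final_step",            # hypothetical name for the last step
    function=run_backtest,
    function_kwargs={"model_id": "abc123"},
    cache_executed_step=False,    # previously True; turning this off is the workaround
)
```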
N/A (still shows as running despite Abort being sent)
let me downgrade my install of clearml and try again.
ugh. again. it launched all these tasks and then just died. logs go silent.
enqueuing: pipe.start("default")
but I think it's picking up my local clearml install instead of the one I told it to use.
my tasks have this in them... what's the equivalent for pipeline controllers?
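what I've been trying for the controller (a sketch only, not sure it's the blessed way; Task.add_requirements has to run before the controller task is created):
```
from clearml import Task
from clearml.automation import PipelineController

# pin the clearml version the remotely-executed controller task should install
Task.add_requirements("clearml", "1.15.1")

pipe = PipelineController(name="my-pipeline", project="example", version="1.0")
# ... add steps here ...
pipe.start("default")  # enqueue the controller to the "default" queue
```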
I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
[link] where I tried out function-steps instead, but it's a similar architecture for the pipeline (the point of the example was to show how to build a dynamic pipeline)
do you have the logs of the agent that is supposed to run your pipeline? Maybe there is a clue there. I would also suggest trying to enqueue the pipeline to some other queue, or even running the agent on your own machine if you do not already, and see what happens
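for example, something along these lines on your own box (a sketch; the "debug" queue name is just an example, and the daemon is wrapped in Python here only for illustration):
```
import subprocess

# run a foreground agent on this machine serving a scratch queue,
# so everything it prints stays in the terminal
subprocess.run(
    ["clearml-agent", "daemon", "--queue", "debug", "--foreground"],
    check=True,
)
```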
but maybe here's a clue. after hanging like that for a while... it seems like the agent restarts (the container it runs in does not)
it's happening pretty reliably, but the logs are just not informative; they just stop midway
the default queue is served by venv workers (containerized, with a custom entrypoint); agent services just wasn't working great for me, so I gave up on it
hoping this really is a 1.16.2 issue. fingers crossed. at this point more pipes are failing than not.
damn, it just happened again... steps shown as "queued" in the viz are actually complete. the pipeline task disappeared again without completing, logs cut off mid-stream.
the worker thinks it's in venv mode but is containerized.
the apiserver is a docker compose stack.
I'll check the logs next time I see it.
currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. fingers crossed.
the workers connect to the clearml server via ssh-tunnels, so they all talk to "localhost" despite being deployed in different places. each task creates artifacts and metrics that are used downstream
Hi @<1689446563463565312:profile|SmallTurkey79> , when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?
[link] here's how I'm establishing worker-server (and client-server) comms, fwiw:
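the script itself is bash, but roughly it does this (a Python rendering, sketch only; the host is a placeholder and the ports are the stock clearml-server ones):
```
import subprocess

# forward the server's api/web/files ports to localhost so that
# clearml.conf on every worker can simply point at http://localhost:...
forwards = [
    "8008:localhost:8008",  # api server
    "8080:localhost:8080",  # web server
    "8081:localhost:8081",  # files server
]

cmd = ["ssh", "-N"]  # -N: no remote command, tunnel only
for spec in forwards:
    cmd += ["-L", spec]
cmd.append("user@clearml-server-host")  # placeholder address

# keep the tunnel up in the background while the worker polls its queue
tunnel = subprocess.Popen(cmd)
```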
Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.