did you take a look at my connect.sh script? I don't think the script itself is the issue, since only the controller task is affected.
Is there some sort of culling procedure that kills tasks, by any chance? The lack of logs makes me think it's something like that.
I can also try different agent versions.
I have tried other queues; they're all running the same container.
so far the only thing that works reliably is pipe.start_locally()
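For context, a minimal sketch of the difference (the project and step names here are placeholders, not my real pipeline):

```python
from clearml import PipelineController

# Minimal placeholder pipeline; project/task names are hypothetical.
pipe = PipelineController(name="example-pipeline", project="examples", version="1.0")
pipe.add_step(
    name="prepare_data",
    base_task_project="examples",
    base_task_name="prepare data",
)

# Works reliably: the controller logic runs in the current process,
# so it uses this machine's connection to the API server.
pipe.start_locally(run_pipeline_steps_locally=False)

# Problematic for me: the controller is enqueued and runs inside the
# agent's container, where it eventually goes silent.
# pipe.start(queue="services")
```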
I think I've narrowed this down to the SSH connection approach.
regarding the container that runs the pipeline:
- when I stopped using autossh tunnels and instead ran it on the same machine as the clearml server, using docker network host mode, the problematic pipeline suddenly started completing.
it's just so odd that the pipeline controller task is the only one with an issue. the modeling / data-creation tasks all seem to complete consistently just fine.
so yeah, my best guess now is that it's unrelated to the clearml version and is instead about the connectivity of the pipeline controller task to the api server.
when I run this pipeline controller locally (also using the same ssh tunnel approach for comms), the pipeline completes just fine. so it seems to be something specific about how it runs inside the container vs on my machine.
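one thing I can try next is a quick connectivity probe from inside the container (a sketch only: the URL is a placeholder for whatever api.api_server is set to in clearml.conf, and I'm assuming the server's debug.ping endpoint is reachable without auth):

```python
import requests

# Placeholder URL; substitute the api_server value from clearml.conf.
API_SERVER = "http://clearml-apiserver:8008"

# debug.ping is the API server's liveness endpoint; if this hangs or
# fails inside the container but not on my machine, it's the tunnel.
resp = requests.get(f"{API_SERVER}/debug.ping", timeout=5)
print(resp.status_code, resp.text)
```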
ugh. again. it launched all these tasks and then just died. logs go silent.
do you have any STATUS REASON under the INFO section of the controller task?
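If it's easier than digging through the UI, something like this should pull the same info via the SDK (a sketch; the controller task ID is a placeholder and the exact field names are from memory):

```python
from clearml import Task

# Placeholder ID of the failed pipeline controller task.
controller = Task.get_task(task_id="<controller_task_id>")

# These mirror the STATUS REASON / status message shown under the
# INFO section in the web UI.
print("status:", controller.get_status())
print("status_reason:", controller.data.status_reason)
print("status_message:", controller.data.status_message)
```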
Hi @<1689446563463565312:profile|SmallTurkey79>, when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?
ah, a clue! it came right below that, but I guess out of order...
that id is the pipeline that failed
(the "magic" of the env detection is nice but man... it has its surprises)
Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.
yeah, this problem seems to happen on 1.15.1 and 1.16.2 as well; prior runs were even on the same version. It feels like it happens completely at random (but often).
just happened again to me.
The pipeline is constructed from tasks; it basically does map/reduce: prepare data -> model training + evaluation -> backtesting performance summary.
It figures out how wide to fan out by parsing the date range supplied as an input parameter. I've been running pipelines like this for months, but only recently did things start... vanishing like this.
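Roughly this shape, if it helps to picture it (a sketch only: step names, projects, and the date-slicing helper are made up, and the real pipeline builds its steps from existing tasks):

```python
from datetime import date, timedelta
from clearml import PipelineController

def month_starts(start: date, end: date) -> list[date]:
    """Hypothetical helper: one fan-out branch per month in the range."""
    out, cur = [], date(start.year, start.month, 1)
    while cur <= end:
        out.append(cur)
        cur = (cur.replace(day=28) + timedelta(days=7)).replace(day=1)
    return out

pipe = PipelineController(name="backtest-pipeline", project="examples", version="1.0")

pipe.add_step(
    name="prepare_data",
    base_task_project="examples",
    base_task_name="prepare data",
)

# Map: fan out one train+eval branch per slice of the input date range.
branches = []
for day in month_starts(date(2024, 1, 1), date(2024, 6, 30)):
    step = f"train_eval_{day:%Y_%m}"
    branches.append(step)
    pipe.add_step(
        name=step,
        base_task_project="examples",
        base_task_name="model training + evaluation",
        parameter_override={"General/period_start": str(day)},
        parents=["prepare_data"],
    )

# Reduce: a single summary step depending on every branch.
pipe.add_step(
    name="backtest_summary",
    base_task_project="examples",
    base_task_name="backtesting performance summary",
    parents=branches,
)

pipe.start_locally()
```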
Would appreciate any help. Really need this to be more robust to make the case for company-wide adoption.