do you have any  STATUS REASON  under the  INFO  section of the controller task?
it's happening pretty reliably, but the logs are just not informative. it just stops midway
damn. I can't believe it. It disappeared again even though the task's clearml version is 1.15.1.
I'm going to try running the pipeline locally.
I really can't provide a script that matches exactly (though I do plan to publish something like this soon enough), but here's one that's quite close / similar in style:
None  where I tried function-steps out instead, but it's a similar architecture for the pipeline (the point of the example was to show how to do a dynamic pipeline)
do you have the logs of the agent that is supposed to run your pipeline? Maybe there is a clue there. I would also suggest trying to enqueue the pipeline to some other queue, and maybe even running the agent on your own machine (if you don't already) to see what happens
let me downgrade my install of clearml and try again.
None here's how I'm establishing worker-server (and client-server) comms fwiw
Hi @<1689446563463565312:profile|SmallTurkey79> , when this happens, do you see anything in the API server logs? How is the agent running, on top of K8s or bare metal? Docker mode or venv?
yeah, locally it did run. I then ran another one via the UI, spawned from the successful run; it showed the cached steps and then refused to run the bottom one, disappearing again. No status message, no status reason. (not running... actually dead)
the workers connect to the clearml server via ssh-tunnels, so they all talk to "localhost" despite being deployed in different places. each task creates artifacts and metrics that are used downstream
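roughly, the effect is equivalent to something like this in code (a simplified sketch: my real setup does it via connect.sh + clearml.conf, the hosts/ports here are just the ClearML server defaults, and the keys are placeholders):
```python
from clearml import Task

# Everything resolves to localhost because an SSH tunnel (opened by connect.sh)
# forwards the ClearML server ports to this machine. The ports below are the
# ClearML server defaults; the tunnel may map them differently in practice.
Task.set_credentials(
    api_host="http://localhost:8008",
    web_host="http://localhost:8080",
    files_host="http://localhost:8081",
    key="WORKER_ACCESS_KEY",     # placeholder
    secret="WORKER_SECRET_KEY",  # placeholder
)
```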
N/A (still shows as running despite Abort being sent)
ugh. again. it launched all these tasks and then just died. logs go silent.


it's odd... I really don't see any tasks dying except the controller one
Hi @<1689446563463565312:profile|SmallTurkey79> ! I will take a look at this and try to replicate the issue. In the meantime, I suggest you look into other dependencies you are using. Maybe some dependency got upgraded and the upgrade now triggers this behaviour in clearml.
the worker thinks it's in venv mode but it is containerized.
the apiserver is a docker compose stack.
I'll check the logs next time I see it.
currently rushing to ship a model out, so I've just been running smaller experiments slowly, hoping to avoid the situation. fingers crossed.
when I run the pipe locally, I'm using the same connect.sh script as the workers do, in order to poll the apiserver via the SSH tunnel.
it happens consistently with this one task that really should be entirely cached.
I disabled cache in the final step and it seems to run now.
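for reference, this is roughly how the cache gets toggled on a step (simplified sketch, placeholder project/task names; the flag is cache_executed_step on add_step):
```python
from clearml import PipelineController

# Minimal sketch: the final step of the pipeline with caching turned off.
pipe = PipelineController(name="backtest-pipeline", project="my_project", version="1.0")
pipe.add_step(
    name="backtest_summary",
    base_task_project="my_project",
    base_task_name="backtesting performance summary",
    cache_executed_step=False,  # was True before; disabling it is what got the step running again
)
```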
Hi @<1689446563463565312:profile|SmallTurkey79>! Regarding "Prior runs of this pipeline worked just fine": what SDK version were you using for the prior runs? Does this still happen if you revert to that version?
Can you provide a script that imitates what you are doing?
In the pipeline you are running, are you creating new tasks/pipelines/datasets?
yeah, it just shows what I see in the Console, but then it immediately goes back to polling for more work (so... instead of running the backtest, it just exits, with no completion message)
hoping this really is a 1.16.2 issue. fingers crossed. at this point more pipes are failing than not.
are you running this locally or are you enqueueing the task (controller)?
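(i.e., roughly which of these patterns; sketch with placeholder names:)
```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="my_project", version="1.0")
# ... pipe.add_step(...) calls go here ...

# Local: the controller logic runs in this process
# pipe.start_locally()                                 # controller local, steps still go to agents
# pipe.start_locally(run_pipeline_steps_locally=True)  # everything local, handy for debugging

# Enqueued: the controller task itself is pushed to a queue for an agent to run
pipe.start(queue="default")
```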
yeah, this problem seems to happen on both 1.15.1 and 1.16.2; the prior runs were even on the same version. It just feels like it happens absolutely randomly (but often).
just happened again to me.
The pipeline is constructed from tasks; it basically does map/reduce: prepare data -> model training + evaluation -> backtesting performance summary.
It figures out how wide to fan out by parsing the date range supplied as an input parameter. I've been running stuff like this for months, but only recently did things just start... vanishing like this.
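the shape of it is roughly this (heavily simplified sketch: project/task names are placeholders, and in the real pipeline the fan-out width comes from the date range parameters):
```python
from clearml import PipelineController

pipe = PipelineController(name="backtest-pipeline", project="my_project", version="1.0")
pipe.add_parameter(name="start_date", default="2024-01-01")
pipe.add_parameter(name="end_date", default="2024-03-31")

pipe.add_step(
    name="prepare_data",
    base_task_project="my_project",
    base_task_name="prepare data",
)

# Fan out: one training + evaluation step per window. Here the windows are
# hard-coded; the real pipeline derives them from start_date/end_date.
windows = ["2024-01", "2024-02", "2024-03"]
train_steps = []
for window in windows:
    step_name = f"train_eval_{window}"
    pipe.add_step(
        name=step_name,
        parents=["prepare_data"],
        base_task_project="my_project",
        base_task_name="model training + evaluation",
        parameter_override={"General/window": window},
    )
    train_steps.append(step_name)

# Fan in: a single backtesting summary that depends on every training step.
pipe.add_step(
    name="backtest_summary",
    parents=train_steps,
    base_task_project="my_project",
    base_task_name="backtesting performance summary",
)

pipe.start(queue="default")
```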
Would appreciate any help. Really need this to be more robust to make the case for company-wide adoption.
that's the final screenshot. it just shows a bunch of normal "launching ..." steps, and then stops all of a sudden.
(the "magic" of the env detection is nice but man... it has its surprises)
enqueuing: pipe.start("default"). but I think it's picking up my local clearml install instead of the one I told it to use.
my tasks have this in them... what's the equivalent for pipeline controllers?
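roughly this kind of thing (simplified sketch, placeholder names; assuming the relevant bit is the requirements pin):
```python
from clearml import Task

# Sketch of what the step tasks do: pin the clearml version before Task.init.
Task.add_requirements("clearml", "1.15.1")
task = Task.init(project_name="my_project", task_name="prepare data")

# Open question: what's the equivalent knob so the pipeline *controller* task
# gets the same pin when it runs remotely?
```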
did you take a look at my connect.sh script? I don't think that's the issue, since only the controller task has the problem.
Is there some sort of culling procedure that kills tasks, by any chance? The lack of logs makes me think it's something like that.
I can also try different agent versions.
