I will try to fix that. But what is the purpose of the 'k8s_scheduler' queue?
No problem SmugDolphin23 and thank you. I am really quite stuck with this 😄
Thank you for the reply SmugDolphin23
Is there any possible workaround at the moment?
So it seems it starts on the queue I specify and then gets moved to the k8s_scheduler queue.
So the experiment starts with the status "Running" and then, once moved to the k8s_scheduler queue, it stays in "Pending".
SuccessfulKoala55 So this is the intended behavior? That you always have to select the queue from "Advanced configuration" in the pipeline run window, even though set_default_execution_queue is set to the "default" queue?
Besides, tasks will always have "k8s_scheduler" as the queue in the info tab, so looking back at a task you will not be able to tell which queue it was actually assigned to.
ok, i'll try to fix the connection issue. Thank you for the help 🙂
yes that is possible but I do use istio for the clearml server components. I can move the agents to a separate namespace. I will try that
Hello CostlyOstrich36 I solved it by using a .sh script locally when I want to create/update the trigger. The .sh script chains 2 Python scripts: the first deletes the existing running trigger task, and the second recreates the trigger task with the updated code.
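For reference, the first script is roughly this (project and trigger names below are just placeholders, and this is only a sketch of the approach):
```python
from clearml import Task

TRIGGER_PROJECT = "Automation"    # placeholder project name
TRIGGER_NAME = "publish-trigger"  # placeholder trigger task name

# find previous instances of the trigger task
old_tasks = Task.get_tasks(project_name=TRIGGER_PROJECT, task_name=TRIGGER_NAME)
for t in old_tasks:
    if t.status in ("in_progress", "queued"):
        t.mark_stopped()  # abort the running/queued trigger first
    t.delete()            # then remove it

# the second script simply rebuilds the TriggerScheduler with the updated
# trigger logic and starts it again (e.g. on the services queue)
```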
It just seems strange to me that you could have 2 triggers that do different things but use the same name. Nothing that can't be worked around but for automa...
Actually it does not, because the pod's logs show .
TimelyMouse69 The pipeline task(s) end up in a sub project called ".pipelines" no matter how I configure the PipelineController project name and target project. This .pipelines project is not visible from the "PROJECTS" section of the UI. You can only get to it from the PIPELINES view by clicking on "Full details" on a step.
Please see attached images
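For reference, this is roughly how I am configuring the controller (project/step names are placeholders):
```python
from clearml import PipelineController


def step_one(x: int = 1) -> int:  # trivial placeholder step
    return x + 1


pipe = PipelineController(
    name="my-pipeline",              # placeholder
    project="Project MLOps",         # where I want everything to live
    version="1.0.0",
    target_project="Project MLOps",  # I expected the step tasks to land here as well
)
pipe.set_default_execution_queue("default")
pipe.add_function_step(name="step_1", function=step_one)
pipe.start()

# ...but the step tasks still end up under the hidden ".pipelines" sub-project
```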
Ah I did not think to look for that option in the user's settings. That should do it. Thank you for the help 🙂
What I would like to be able to do is basically get rid of the ".pipelines" project that gets created automatically
But the pre_execute_callback from pipe.add_function_step needs to be fixed: it does run before the task is executed, but the Node does not have any attributes set besides the name.
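For reference, this is the kind of callback I mean (the signature is how I understand it from the docs; the prints are just for debugging):
```python
from clearml import PipelineController


def my_step(x: int = 1) -> int:  # trivial placeholder step
    return x + 1


# as I understand it, the callback gets the controller, the node about to be
# launched, and the step parameters; returning False should skip the step
def pre_callback(pipeline: PipelineController, node: PipelineController.Node, parameters: dict) -> bool:
    print(node.name)  # this is set
    print(node.job)   # this comes back empty, which is my problem
    return True


pipe = PipelineController(name="debug-pipeline", project="Project MLOps", version="1.0.0")
pipe.set_default_execution_queue("default")
pipe.add_function_step(
    name="step_1",
    function=my_step,
    pre_execute_callback=pre_callback,
)
pipe.start()
```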
Hi WackyRabbit7. Take a look at https://clear.ml/docs/latest/docs/references/sdk/task#taskget_task
I believe it describes your use case as an example.
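Something along these lines (the project and task names here are just for illustration):
```python
from clearml import Task

# fetch a specific task by project + name (or pass task_id=... directly)
task = Task.get_task(project_name="Project MLOps", task_name="Exp 2")
print(task.id, task.status)
```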
Hi SmugDolphin23. I have tried to access node.job with a pre_execute_callback, but the node object does not have the job attribute set, as you can see above.
For a bit more context: let's say I have 2 experiments in "Project MLOps" called "Exp 1" and "Exp 2". When I publish "Exp 2" I want this trigger to pick up that event and start another task in some other project. But this task would need some information about "Exp 2", like its name, id or maybe config object etc.
Does the trigger pass any context to the task which will be executed?
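For context, this is the kind of thing I am hoping for (assuming I read the docs right and schedule_function is called with the id of the triggering task; the names below are placeholders):
```python
from clearml import Task
from clearml.automation import TriggerScheduler


# assuming the trigger calls this with the id of the task that fired it,
# I could pull the published experiment's name / config from there
def on_task_published(task_id: str):
    published = Task.get_task(task_id=task_id)
    print("triggered by:", published.name, published.id)
    # ...clone / enqueue the downstream task here and pass it this info


trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_task_trigger(
    name="exp-published-trigger",     # placeholder
    schedule_function=on_task_published,
    trigger_project="Project MLOps",
    trigger_on_status=["published"],
)
trigger.start_remotely(queue="services")
```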
If I right-click on the initial pipeline Draft and hit "Run" from there, the new run wizard is populated with the default parameter values and uses the queue from set_default_execution_queue under "Advanced configuration".
This is what I tried and it does not work, because plot is no longer a DataFrame object, it is now a Styler. The error comes from the fact that logger.report_table wants to do fillna on the DataFrame object. I can't seem to find a way to have the hyperlinks embedded in the DataFrame object. Any suggestions?
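Concretely, this is roughly what I tried (the column values and links are made up):
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="Project MLOps", task_name="report-table-links")  # placeholder names
logger = task.get_logger()

df = pd.DataFrame({
    "experiment": ["Exp 1", "Exp 2"],
    "url": ["https://app.clear.ml/...", "https://app.clear.ml/..."],  # made-up links
})

# turning the url column into clickable links returns a Styler, not a DataFrame...
plot = df.style.format({"url": lambda u: f'<a href="{u}">{u}</a>'})

# ...and this call then fails, because report_table tries to run fillna on what it gets
logger.report_table(title="results", series="links", iteration=0, table_plot=plot)
```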
Here is what I see as the ideal scenario:
If a worker pod running a task dies for any reason, ClearML should mark the task as failed / aborted as soon as possible, i.e. improve the feedback loop. Tasks running as services should be re-enqueued automatically if the pod they run on dies because of OOM, node eviction, node replacement, pod replacement due to autoscaling, etc. You could argue the same for tasks which are not services: restart them if their pod dies for the above reasons.
I am trying to run with scale-from-zero k8s nodes for maximum cost savings, so a node should only be online if ClearML is actually running a task. Waiting for the 2-hour timeout when running on expensive GPU instances, for example, is quite wasteful because the pipeline controller pod will keep the node online.
Not sure, I have not tried it myself. Give it a go and see how it behaves.
Alright. I will keep it in mind. Thank you for the confirmation 🙂
That would match what add_dataset_trigger and add_model_trigger already have, so it would be good.
Now, for example, the pod was killed because I had to replace the node. The task is stuck in "Running". Aborting from the UI says "experiment aborted successfully", but the state does not change.