I have an issue with the queues. I am running the ClearML server + agents in Kubernetes, and because of that there is a preconfigured internal queue called "k8s_scheduler".
I have defined another queue called "default", where I enqueue my tasks. As far as I understand, the k8s glue pulls each task from its original queue (in this case "default") and places it in the "k8s_scheduler" queue, from which the spawned worker pod consumes it.
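For reference, this is roughly how I enqueue tasks (project and task names here are placeholders, not my real ones); this is a sketch assuming a configured ClearML client, so it won't run without a live server:

```python
from clearml import Task

# Clone an existing task template (placeholder names) ...
template = Task.get_task(project_name="my_project", task_name="my_task")
cloned = Task.clone(source_task=template, name="my_task clone")

# ... and enqueue the clone on the "default" queue. The k8s glue then
# moves it to "k8s_scheduler" for the spawned pod to pick up - and
# "k8s_scheduler" is what ends up recorded in the task's info tab.
Task.enqueue(cloned, queue_name="default")
```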
The problem is that all my tasks show "k8s_scheduler" as the queue in the info tab (see screenshot). This makes it difficult to look back at a task and tell which queue it was actually assigned to.
This behavior also creates a problem when launching pipelines from the UI. Here is what I do: I create a pipeline in the "draft" state with "set_default_execution_queue" set to the "default" queue. Then I go to the UI and click "NEW RUN". At this stage, the "Advanced configuration" shows the queue is indeed set to "default". I run the pipeline and it does get placed in the "default" queue as expected. Once it completes, I check the task that the pipeline created, and its info shows "k8s_scheduler" as the queue, just like the other tasks.
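My pipeline setup looks roughly like this (project, pipeline, and step names are placeholders; this is a sketch assuming a configured ClearML client and an existing base task, so it won't run standalone):

```python
from clearml import PipelineController

pipe = PipelineController(
    name="my_pipeline",      # placeholder name
    project="my_project",    # placeholder project
    version="1.0.0",
)

# This is the setting in question: steps should default to "default"
pipe.set_default_execution_queue("default")

pipe.add_step(
    name="step_one",
    base_task_project="my_project",      # placeholder
    base_task_name="step one template",  # placeholder
)
```

With this, the first "NEW RUN" from the UI correctly shows "default" under "Advanced configuration".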
Now I want to create a second pipeline run from the UI with different parameters. What I see is that ClearML populates the "NEW RUN" wizard from the last successful pipeline task, and since that task's info has the queue set to "k8s_scheduler", opening "Advanced configuration" on this new run shows the queue set to "k8s_scheduler" this time. So on every subsequent run of a pipeline you have to remember to open "Advanced configuration" and set the queue back to the appropriate one. In other words, the "set_default_execution_queue" property does not seem to have any effect after the pipeline has had one successful run.