DangerousDragonfly8
Moderator
13 Questions, 40 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
Badges (1): 38 × Eureka!
0 Votes 19 Answers 2K Views
3 years ago
0 Votes 5 Answers 2K Views
Hello. I have a question regarding pipeline parameters. Is it possible to reference pipeline parameters in other fields of the https://clear.ml/docs/latest/d...
2 years ago
0 Votes 1 Answer 2K Views
Hello. When I use https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#update_weights_package with an S3 upload_uri, ClearML uploads a file cal...
2 years ago
0 Votes 1 Answer 2K Views
2 years ago
0 Votes 3 Answers 2K Views
Hello! When I use the TriggerScheduler class with the add_task_trigger function configured to watch for trigger_on_status=['published'] and a specific trigge...
3 years ago
0 Votes 0 Answers 3K Views
2 years ago
0 Votes 7 Answers 2K Views
Hello. Is it possible to show hidden projects in the UI? Like the ".pipelines" created by pipelines? Alternatively is it possible to place the pipeline tasks...
2 years ago
0 Votes 4 Answers 2K Views
When using a TriggerScheduler with a add_task_trigger and schedule_function , how would I go about updating the trigger and function without a new task/exper...
3 years ago
0 Votes 6 Answers 2K Views
2 years ago
0 Votes 5 Answers 2K Views
Hi. I will try again since I got no answers previously 😁
2 years ago
0 Votes 7 Answers 3K Views
2 years ago
0 Votes 4 Answers 2K Views
Does anyone know if it is possible to add a plot from a stylized pandas data frame? I can easily log a pandas data frame with logger.report_table but can I l...
3 years ago
0 Votes 10 Answers 2K Views
Can I use automation.TriggerScheduler() with add_task_trigger to trigger when a task is archived? I know that when a task is archived it gets "system_tags": ...
3 years ago
0 Hi. I Will Try Again Since I Got No Answers Previously

Alright. I will keep it in mind. Thank you for the confirmation 🙂

2 years ago
0 Hello. Is It Possible To Show Hidden Projects In The Ui? Like The ".Pipelines" Created By Pipelines? Alternatively Is It Possible To Place The Pipeline Tasks In A Project Which Is Not Hidden?

TimelyMouse69 The pipeline task(s) end up in a sub project called ".pipelines" no matter how I configure the PipelineController project name and target project. This .pipelines project is not visible from the "PROJECTS" section of the UI. You can only get to it from the PIPELINES view by clicking on "Full details" on a step.

Please see attached images

2 years ago
0 When Using A

AgitatedDove14
I do believe triggers should be unique somehow, because I find them way too easy to mishandle, especially when used with a schedule_function defined in the same script. Updating that function requires deleting the existing trigger task first and recreating it. If you don't do it like that, you just end up with 2 trigger tasks with the same name, which I assume will respond to the same event(s) but do something slightly different in response. I assume it might work like this...

3 years ago
0 Hello. I Am Running Clearml Server And Agents In K8S Using The Helm Charts. The Clearml Server Came Preconfigured With The 2 Queues: 'Default' And 'K8S_Scheduler'. I Have Created One More Queue 'Services' And Deployed 1 Agent For The 'Default' Queue And

So it seems the task starts on the queue I specify and then gets moved to the k8s_scheduler queue.

The experiment starts with the status "Running", and once moved to the k8s_scheduler queue it stays in "Pending".

3 years ago
0 Hello. I Am Running Clearml Server And Agents In K8S Using The Helm Charts. The Clearml Server Came Preconfigured With The 2 Queues: 'Default' And 'K8S_Scheduler'. I Have Created One More Queue 'Services' And Deployed 1 Agent For The 'Default' Queue And

JuicyFox94 since I have you: the connection issue might be caused by the Istio proxy. To disable the Istio sidecar injection I need to add an annotation to the pod.
https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-agent/templates/agentk8sglue-configmap.yaml#L8

Unfortunately there does not seem to be any field for that in the values file.

3 years ago
0 Hello! When I Use The

AgitatedDove14 Thank you for the info. I will try it out.

3 years ago
0 Hello. I Have An Issue In Regards To A Task That I Run As A Service ( Should Always Run). I Run The Clearml Server And Agents In Kubernetes. I Think This Is A Design Problem With The Way Clearml Agents Run On Kubernetes. The K8S Glue Will Launch A Worker

I am trying to run with scale-from-zero k8s nodes for maximum cost savings, so a node should only be online if ClearML is actually running a task. Waiting for the 2-hour timeout when running on expensive GPU instances, for example, is quite wasteful because the pipeline controller pod will keep the node online.

2 years ago
0 Hello! When I Use The

For a bit more context: let's say I have 2 experiments in "Project MLOps" called "Exp 1" and "Exp 2". When I publish "Exp 2" I want this trigger to pick up that event and start another task in some other project. But that task would need some information about "Exp 2", like its name, ID or maybe its config object, etc.

Does the trigger pass any context to the task which will be executed?
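
Not from the thread itself, but a rough sketch of how the context could be picked up when using schedule_function, assuming (as in the ClearML trigger example) the callback receives the triggering task's ID; the template task ID, project names, queue name and parameter key below are hypothetical:

    from clearml import Task
    from clearml.automation import TriggerScheduler

    FOLLOWUP_TEMPLATE_TASK_ID = "<template-task-id>"  # hypothetical

    def on_published(task_id):
        # Assumption: the callback receives the ID of the task that fired the
        # trigger, so its name/config can be read here and passed on.
        source = Task.get_task(task_id=task_id)
        followup = Task.clone(source_task=FOLLOWUP_TEMPLATE_TASK_ID,
                              name="post-publish for {}".format(source.name))
        followup.set_parameters({"General/source_task_id": task_id})  # hypothetical key
        Task.enqueue(followup, queue_name="default")

    trigger = TriggerScheduler(pooling_frequency_minutes=3)
    trigger.add_task_trigger(
        name="on-publish-trigger",
        trigger_project="Project MLOps",
        trigger_on_status=["published"],
        schedule_function=on_published,
    )
    trigger.start()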

3 years ago
0 Hello. I Have A Question Regarding Pipeline Parameters. Is It Possible To Reference Pipeline Parameters In Other Fields Of The

Hi SmugDolphin23. I have tried to access node.job with a pre_execute_callback, but the node object does not have the job attribute set, as you can see above.
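
For context, a minimal sketch of what was tried, assuming the documented (pipeline, node, parameters) callback signature; the pipeline name, project and step body are placeholders:

    from clearml import PipelineController

    def step_body():
        # Placeholder step body, only for illustration.
        return 42

    def pre_execute(pipeline, node, parameters):
        # This is the situation described above: node.job is not set yet at this
        # point, so the step's underlying Task cannot be reached through it.
        print("node.name:", node.name)
        print("node.job:", node.job)            # observed to be None here
        print("parameter overrides:", parameters)
        return True                             # returning False would skip the step

    pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
    pipe.add_function_step(
        name="step_one",
        function=step_body,
        function_return=["answer"],
        pre_execute_callback=pre_execute,
    )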

2 years ago
0 Can I Use

The alternative I can think of is to implement a ClearML Monitor.
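
For reference, a minimal polling sketch of that alternative (a hand-rolled monitor loop rather than the built-in Monitor class); the project name, handler and the 'system_tags' filter key are assumptions:

    import time
    from clearml import Task

    def handle_archived(task):
        # Hypothetical handler for a newly archived task.
        print("archived:", task.id, task.name)

    seen = set()
    while True:
        # Assumption: task_filter is passed through to the backend query and
        # filtering on 'system_tags' matches archived tasks.
        tasks = Task.get_tasks(project_name="Project MLOps",
                               task_filter={"system_tags": ["archived"]})
        for t in tasks:
            if t.id not in seen:
                seen.add(t.id)
                handle_archived(t)
        time.sleep(60)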

3 years ago
0 Does Anyone Know If It Is Possible To Add A Plot From A Stylized Pandas Data Frame? I Can Easily Log A Pandas Data Frame With

This is what I tried and it does not work, because plot is no longer a data frame object, it is now a Styler. The error comes from the fact that logger.report_table wants to do fillna on the data frame object. I can't seem to find a way to have the hyperlinks embedded in the data frame object. Any suggestions?
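
A possible workaround (not confirmed in the thread): keep passing the plain DataFrame to report_table, and separately render the Styler to HTML and log it as media, which preserves the hyperlinks. The project/task names and link format below are only illustrative:

    import pandas as pd
    from clearml import Task

    task = Task.init(project_name="examples", task_name="styled table example")
    logger = task.get_logger()

    df = pd.DataFrame({"name": ["Exp 1", "Exp 2"],
                       "url": ["https://example.com/1", "https://example.com/2"]})

    # report_table expects a plain DataFrame, not a Styler.
    logger.report_table("experiments", "plain", iteration=0, table_plot=df)

    # Render the Styler (with embedded links) to HTML and report it as media.
    styled = df.style.format({"url": lambda u: '<a href="{0}">{0}</a>'.format(u)})
    with open("styled_table.html", "w") as f:
        f.write(styled.to_html())  # Styler.to_html needs a reasonably recent pandas
    logger.report_media("experiments", "styled", iteration=0, local_path="styled_table.html")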

3 years ago
0 Hello. I Have A Question Regarding Pipeline Parameters. Is It Possible To Reference Pipeline Parameters In Other Fields Of The

Thank you for the reply SmugDolphin23

Is there any possible workaround at the moment?

2 years ago
0 Hello I Have An Issue With The Queues. I Am Running Clearml Server + Agents In Kubernetes. Because Of That There Is A Default Internal Queue Preconfigured Called "K8S_Scheduler". I Have Defined Another Queue Called "Default" Where I Enqueue The Tasks. A

If I right-click on the initial pipeline Draft and hit "Run" from there, the new run wizard is populated with the default parameter values and uses "set_default_execution_queue" as the queue under "Advanced configuration".
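
For reference, this is the controller setting being referred to, in a minimal sketch (pipeline and project names are placeholders):

    from clearml import PipelineController

    pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
    # The queue the run wizard should pick up by default comes from this setting.
    pipe.set_default_execution_queue("default")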

2 years ago
0 When Using A

Hello CostlyOstrich36 I solved it by using a .sh script locally when I want to create/update the trigger. The shell script chains 2 Python scripts together: the first one takes care of deleting the existing running trigger task, and the second one recreates the trigger task with the updated code.

It just seems strange to me that you could have 2 triggers that do different things but use the same name. Nothing that can't be worked around, but for automa...
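
A rough sketch of the delete part of that workflow (project and trigger task names are hypothetical; the status strings may need adjusting):

    from clearml import Task

    # First script: find the currently running trigger task by name and
    # stop/delete it, so the second script can recreate it with the updated code.
    existing = Task.get_tasks(project_name="DevOps", task_name="on-publish-trigger")
    for t in existing:
        if t.status in ("queued", "in_progress"):
            t.mark_stopped()
        t.delete()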

3 years ago
0 Hello. I Have An Issue In Regards To A Task That I Run As A Service ( Should Always Run). I Run The Clearml Server And Agents In Kubernetes. I Think This Is A Design Problem With The Way Clearml Agents Run On Kubernetes. The K8S Glue Will Launch A Worker

Now, for example, the pod was killed because I had to replace the node. The task is stuck in "Running". Aborting from the UI says "experiment aborted successfully", but the state does not change.

2 years ago
0 Hello. I Have An Issue In Regards To A Task That I Run As A Service ( Should Always Run). I Run The Clearml Server And Agents In Kubernetes. I Think This Is A Design Problem With The Way Clearml Agents Run On Kubernetes. The K8S Glue Will Launch A Worker

Here is what I see as the ideal scenario:
If a worker pod running a task dies for any reason, ClearML should mark the task as failed/aborted as soon as possible. Basically, improve the feedback loop. Tasks running as services should be re-enqueued automatically if the pod they run on dies because of OOM, node eviction, node replacement, pod replacement due to autoscaling, etc. You could argue the same for tasks which are not services: restart them if their pod dies for the above reasons.
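
This is not something ClearML does out of the box according to the thread; as a stop-gap, an external watchdog along these lines could re-enqueue service tasks, with hypothetical task IDs and queue name (it would not catch the stuck-in-"Running" case described above without resetting the task first):

    import time
    from clearml import Task

    SERVICE_TASK_IDS = ["<service-task-id>"]  # hypothetical IDs of always-on tasks

    while True:
        for task_id in SERVICE_TASK_IDS:
            t = Task.get_task(task_id=task_id)
            # If the worker pod died and the task ended up stopped/failed,
            # reset it and put it back on the queue so an agent picks it up again.
            if t.status in ("stopped", "failed"):
                t.reset()
                Task.enqueue(t, queue_name="default")
        time.sleep(300)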

2 years ago
0 Hi. I Will Try Again Since I Got No Answers Previously

SuccessfulKoala55 So this is the intended behavior? To always have to select the queue from "Advanced configuration" on the pipeline run window even though the "set_default_execution_queue" is set to the "default" queue?

Besides that, tasks will always show "k8s_scheduler" as the queue in the info tab, so looking back at a task you will not be able to tell which queue it was originally assigned to.

2 years ago