AgitatedDove14
Moderator
49 Questions, 8122 Answers
Active since 10 January 2023
Last activity one year ago
Clearml Pipelines Can Be Built From Tasks, Functions, And Decorated Functions, According To The Examples In

In terms of creating dynamic pipelines and cyclic graphs, the decorator approach seems the most powerful to me.

Yes, that is correct; the decorator approach is the most powerful one, I agree.
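
For reference, a minimal sketch of the decorator approach (assuming the PipelineDecorator API; all names here are illustrative). Each component becomes its own step, and because steps are created by calling plain Python functions, the graph can be built dynamically at runtime:

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["data"], cache=True)
def load_data(n):
    # each component runs as its own Task, potentially on a remote machine
    return list(range(n))

@PipelineDecorator.component(return_values=["total"])
def reduce_data(data):
    return sum(data)

@PipelineDecorator.pipeline(name="decorated pipeline", project="examples", version="1.0.0")
def run_pipeline(n=10):
    # steps can be created dynamically here, e.g. inside loops or conditionals
    data = load_data(n)
    total = reduce_data(data)
    print(total)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug run; drop this line to enqueue for real
    run_pipeline(n=10)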

2 years ago
Hi

DilapidatedDucks58

all our workers went down after starting the slack bot, is it expected?

Oh dear... I can't see any connection... What is the last log you have there?

5 years ago
Btw: There Seems To Be No Support For Videos In Tensorboard/Experiment View (E.G.

ReassuredTiger98 do you know if tensorboard (not tensorboardX) also supports GIFs there?

4 years ago
Also, For Selecting A Subset Of Experiments To Compare, It Looks Like Neptune Currently Has A More Advanced Solution (

You can already sort and filter experiments based on any hyperparameter or metric that the experiment reports, so there is no need for a custom query language. Also, any filtered/sorted table can be shared exactly as it is, so you can create leaderboards and share specific filters. You can also use the search bar to filter based on experiment name / comment. Tags will be added soon as well 🙂

Example of custom columns is here (the screen grab is a bit old, now there is als...

5 years ago
Hi, Is It Possible To Sync Experiments Using S3 Or GS? I'd Love To Have A Look At Some Documentation. We Want To Sync The Training While They Are Running [Not Just When They Are Finished] Thanks,

StaleButterfly40 just making sure I understand: are we trying to solve the "import offline zip file/folder" issue, where we create multiple Tasks (i.e. a Task per import)? Or are you suggesting the actual Task (the one running in offline mode) needs support for continuing a previous execution?
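
For context, the "import offline zip file/folder" flow looks roughly like this (a sketch assuming the offline-mode API; the path is illustrative):

from clearml import Task

# On the machine without server access:
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="offline run")
# ... training runs here; everything is recorded into a local session zip ...
task.close()

# Later, on a machine that can reach the server:
Task.import_offline_session("/path/to/offline_session.zip")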

3 years ago
In Pipelinev2, Is It Possible To Register Artifacts To The Pipeline Task? I See There Is A Private Variable

If this is the case I would do:

# Add the collector steps (i.e. the 10 Tasks)
pipe.add_task(
    ...,
    post_execute_callback=Collector.collect_me,
)

pipe.start()
pipe.wait()
Collector.process_results(pipe)

wdyt?
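
For completeness, a hedged sketch of what such a collector could look like (Collector, collect_me, and process_results are hypothetical user-side helpers, not ClearML API; this assumes post_execute_callback receives the controller and the completed node):

class Collector:
    # hypothetical user-side helper, not part of the ClearML API
    results = []

    @classmethod
    def collect_me(cls, pipeline, node):
        # post_execute_callback: the controller calls this after a step completes
        if node.job:
            cls.results.append(node.job.task.artifacts)

    @classmethod
    def process_results(cls, pipeline):
        for artifacts in cls.results:
            print(artifacts)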

3 years ago
Hi, I'M Having Problems With The Installed Packages When Creating An Experiment. The Installed Packages Used To Be A List With The Versions Of All The Installed Packages In The Venv. However, Now I Get The Following:

I'm assuming these are the only packages that are imported directly (i.e. pandas requires other packages, but the code imports pandas, so that is what gets listed).
The way ClearML detects packages: it first tries to determine whether this is a "standalone" script; if it is, only the imports in the main script are logged. If it decides the script is not standalone, it analyzes the entire repository.
Make sense?
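
If you want to override that behavior, a minimal sketch (assuming a recent clearml version; both calls must happen before Task.init, and the two options are alternatives):

from clearml import Task

# Option A: log the full pip freeze of the venv instead of the analyzed imports
Task.force_requirements_env_freeze(force=True)

# Option B: manually add a package the import analysis missed
Task.add_requirements("pandas")  # optionally pin a version: Task.add_requirements("pandas", "1.5.3")

task = Task.init(project_name="examples", task_name="package detection")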

3 years ago
Hi, I'M Using The Autoscaler And Getting The Error

Hi CloudySwallow27

This error occurs randomly during training (in other words training does successfully start).

What's the clearml-agent version you are using, and the clearml version?

3 years ago
[Clearml With Pytorch-Based Distributed Training] Hi Everyone! Is The Combination Of Clearml With

Hi ScantChimpanzee51
How are you launching the code?
Basically the easiest way is to do it with the example you just mentioned.
Can this issue be reproduced?

2 years ago
Sometimes I Notice That At The End Of An Experiment Clearml Keeps Hanging (Something With Repository Detection?) And The Script Does Not End. Do More People See This? Especially In Our Continuous Integration Pipeline This Gives Problems Because Tests Are G

Thanks SolidSealion72!

Also, I found out that adding "pool.join()" after pool.close() seems to solve the issue in the minimal example.

This is interesting; I'm pretty sure it has something to do with the subprocess not "closing" properly (or too fast, or something like that).
Let me see if I can reproduce it.
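
For reference, a minimal sketch of the pattern that resolved the hang (plain multiprocessing; nothing here is ClearML-specific):

from multiprocessing import Pool

def work(x):
    return x * x

if __name__ == "__main__":
    pool = Pool(processes=4)
    results = pool.map(work, range(10))
    pool.close()
    pool.join()  # wait for the workers to exit so the process can terminate cleanly
    print(results)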

3 years ago
Hi. Is It Possible To Run Clearml Pipelines Using Yaml Manifests, Kubeflow Style?

The imports inside the functions are there because each function becomes a standalone job running on a remote machine; it is not the entire pipeline code that runs there. This is also how the packages to install on the remote machine get picked up automatically. Make sense?
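
A minimal sketch of such a function step (assuming the PipelineController API; the names and CSV URL are illustrative). Note that pandas is imported inside the function, so the remote job knows to install it:

from clearml import PipelineController

def prepare_data(source_url):
    # imports live inside the function: this body is what runs remotely
    import pandas as pd
    return pd.read_csv(source_url)

pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")
pipe.add_parameter("source_url", default="https://example.com/data.csv")
pipe.add_function_step(
    name="prepare_data",
    function=prepare_data,
    function_kwargs=dict(source_url="${pipeline.source_url}"),
    function_return=["data"],
)
pipe.start_locally(run_pipeline_steps_locally=True)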

2 years ago
Hi Everyone, I Was Looking Into Clearml Integration With Nvidia For Transfer Learning. Does Clearml Have Plans To Integrate With The New Tao? Looks Like Nvidia Is Focusing Tao As A Low Code Transfer Learning Tool With Everything Done In Command Line, Whic

Hmm interesting, I guess once you are able to connect it with ClearML you can just clone / modify / enqueue and let users train models directly from the UI on any hardware. Is that the plan?

3 years ago
I’M Getting These Errors When Using Agent In Docker Mode

LazyTurkey38 notice the assumption is that the docker entry-point ends with bash, and only then does the agent take charge. I'm assuming this is not the case, hence the agent spins up the docker, then the docker just ends. Could that be?

4 years ago
So, I Did A Slew Of Pretrainings, Then Finetuned Those Pretrained Models. Is There A Way To Go Backwards From The Finetuning Task Id To The Pretraining Task Id? What I Tried Was:

SmallDeer34 the function Task.get_models() incorrectly returned the input model "name" instead of the object itself. I'll make sure we push a fix.

I found a different solution (hardcoding the parent tasks by hand),

I have to wonder, how does that solve the issue?
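
Once that fix lands, walking back from the fine-tuning Task to the pretraining Task should look roughly like this (a sketch assuming the models property exposes model objects along with their creating task id; the task id is a placeholder):

from clearml import Task

finetune_task = Task.get_task(task_id="finetune_task_id_here")
# input models recorded on the task point back to the task that created them
for model in finetune_task.models["input"]:
    print(model.name, "->", model.task)  # model.task: id of the task that created the model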

3 years ago
Question About Pipeline And Long-Waiting Tasks: Say I Want To Generate A Dataset. The Workflow I Have Requires

Any recommended way to make a task/pipeline “pause” until some external condition is met?

RoughTiger69 I would set up a trigger on the Dataset (i.e. a new version):
https://github.com/allegroai/clearml/blob/df3d3b269acd2df0f31bfe804eb54ddc84d807c0/examples/scheduler/trigger_example.py#L44
wdyt?
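
Along the lines of that example, a minimal sketch (assuming the TriggerScheduler API; the project name and callback are illustrative):

from clearml.automation import TriggerScheduler

def on_new_dataset_version(task_id):
    # launch / enqueue the waiting task or pipeline here
    print("new dataset version, dataset task id:", task_id)

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_dataset_trigger(
    schedule_function=on_new_dataset_version,
    name="dataset-version-trigger",
    trigger_project="dataset examples",  # illustrative project name
)
trigger.start()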

3 years ago
Hey Everyone! Is It Possible To Trigger A Pipeline Run Via Api? We Have A Repo That Builds An Image For Serving To Clearml Server But We'Ve Wrapped It Inside A Fastapi Application So It Can Be Called From Another Web Service.

Is there any way to make that increment from last run?

from clearml import Task

pipeline_task = Task.clone(source_task="pipeline_id_here", name="new execution run here")
Task.enqueue(pipeline_task, queue_name="services")

wdyt?
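
As for incrementing from the last run: one hedged approach is to count the previous runs and number the clone accordingly (a sketch assuming Task.get_tasks name filtering; the project and naming scheme are illustrative):

from clearml import Task

# count previous runs with the same base name to derive the next index
previous = Task.get_tasks(project_name="pipeline project", task_name="^execution run")
pipeline_task = Task.clone(source_task="pipeline_id_here", name=f"execution run {len(previous) + 1}")
Task.enqueue(pipeline_task, queue_name="services")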

one year ago
Is Trains Adaptable For Federated Learning Scenarios?

Hi LazyLeopard18,
So long story short, yes it does.
Longer version: to really accomplish full federated learning, with control over data at the "compute points", you need some data abstraction layer. Without a data abstraction layer, federated learning is just averaging derivatives from different locations, which can easily be done with any distributed learning framework, such as Horovod, PyTorch distributed, or TF distributed.
If what you are after is, can I launch multiple experiments with the sam...

5 years ago
Hello Everyone! I'D Like To Mount Some Data On Trains Agent Into Docker Container Directory That Contains Cloned Source Code From Repo, Like This:

Hi ProudMosquito87
So you mean to mount your data folder onto the docker so that the code can access it, correct?
If that is the case, is there a specific reason not to use an absolute path (e.g. /mnt/data/mine)?
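
For reference, in current clearml (the successor to trains) a bind-mount can be requested per task, roughly like this (a sketch assuming the set_base_docker signature of recent versions; the image and paths are illustrative):

from clearml import Task

task = Task.init(project_name="examples", task_name="train with mounted data")
# ask the agent to add a bind-mount when it spins up the container
task.set_base_docker(
    docker_image="python:3.10",
    docker_arguments="-v /mnt/data/mine:/data",
)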

5 years ago
Can I Run A Random Task From A Queue? Like This

so it would be better just to use the original code files and the same conda env. if possible…

Hmm, you can actually run your code in "agent mode", assuming you have everything else set up.
This basically means you set a few environment variables prior to launching the code:

export CLEARML_TASK_ID=<The_task_id_to_run>
export CLEARML_LOG_TASK_TO_BACKEND=1
export CLEARML_SIMULATE_REMOTE_TASK=1
python my_script_here.py

3 years ago