AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Do You Have Any Base Image Recommendation To Install Clearml Python Library? I'm Getting Error With Pip On Python:3.9.11-Alpine Image.

Actually no, it is not. Alpine is not a good baseline; it is very slim and missing a ton of stuff.
I would use bullseye or slim (depending on how many aux things you need in the container)
https://hub.docker.com/_/python/tags?page=1&name=bullseye
https://hub.docker.com/_/python/tags?page=1&name=slim-bullseye

2 years ago
0 I Have A Pipeline With Tasks A->B->C. I Want To Be Able To Trigger It Manually, And Skip A Regardless Of Its Cache Status. I Want To Pass B A Value That Represents A’s Output If Needed. What’s A Good Way To Achieve This (Can Be Ui-Based, Or Pipeline-Gymnas

Decorators are good 🙂
Something along the lines of
` @PipelineDecorator.pipeline(...)
def pipeline(skip_a=False):
    if not skip_a:
        a = step_a()
    else:
        # somehow get a previous A?
        # let's call it cached A
        a = "replace with real"

    step_b(a)
    ... `
Is this the gist?
If it is, this looks like, "how can I control whether A is cached or not", is that correct?
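If that is the direction, here is a minimal, hedged sketch of how the else branch could reuse a previous run's output, assuming step A uploaded its result as an artifact named "data" on an earlier task (the task ID, artifact name, and project/step names are all hypothetical):
` from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["a"])
def step_a():
    return 42  # stand-in for A's real output

@PipelineDecorator.component(return_values=["b"])
def step_b(a):
    return a + 1

@PipelineDecorator.pipeline(name="a_b_c", project="examples", version="0.1")
def pipeline(skip_a=False, previous_a_task_id=""):
    if not skip_a:
        a = step_a()
    else:
        # reuse a previous run's A by pulling it from that task's artifact
        # ("data" is a hypothetical artifact name)
        a = Task.get_task(task_id=previous_a_task_id).artifacts["data"].get()
    step_b(a)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug the whole pipeline in-process
    pipeline(skip_a=True, previous_a_task_id="<previous step A task id>") `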

3 years ago
0 Hi, Kudos For The 0.15 Guys! I Am Having An Issue Related To Git Auth: I Have An Issue With Trains-Agent (0.15): It Does Not Use Git Creds While Trying To Clone A Private Repo:

unless the domain is different?

Imagine that you are working with both github and bitbucket, for example; if you are using git-ssh then git will know which of the domains to send the key to. Currently there is a single user/pass entry, so all domains will get the same credentials. But I think this is a rare use case.
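For reference, that single entry is the git user/pass pair in the agent section of clearml.conf (values below are placeholders), something like:
` agent {
    # one credentials pair, used for every git domain the agent clones from
    git_user: "my-git-username"
    git_pass: "my-personal-access-token"
} `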

5 years ago
0 Hi, I'm Trying To Use

SoggyBeetle95 the question is, where does clearml store these arguments, and the answer is: on the Task object (from there the agent will take them and apply them to the docker execution). Now since all users see all the tasks, they also see these arguments. Wdyt?

3 years ago
0 Hi! I Have A Freshly Deployed Clearml Instance. In The Docs I Found A Phrase

Hi @<1547390415320125440:profile|SilkySparrow85>

because it is trying to send a debug-sample to fileserver!

Yes, you should always configure the "files server" to point to your minio S3, basically:

files_server: " "

But do not forget to also configure the credentials here:
https://github.com/allegroai/clearml/blob/40c6db9d95016382c721546d42...
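As a rough sketch (endpoint, bucket, and keys are placeholders; adjust to your minio deployment), the relevant clearml.conf pieces would look roughly like this:
` api {
    # send uploaded files (debug samples, artifacts) to the minio bucket
    files_server: "s3://my-minio-host:9000/clearml-bucket"
}
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "my-minio-host:9000"
                    key: "minio-access-key"
                    secret: "minio-secret-key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
} `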

2 years ago
0 I Am Using Opennmt-Tf (2.18.1) And Clearml (1.1.2) For Training And Testing My Translation Models. I Am Wanting To Register The Incremental Bleu Scores And Final Test Data With Clearml (For Plotting, Comparison, Etc.), But It Is Not Working. I Cannot Fi

From the docs I think what's going on is that https://opennmt.net/OpenNMT-tf/package/opennmt.Runner.html#opennmt.Runner.train is spinning up a new subprocess, and the training itself happens in the subprocess.
If this is the case, that would explain the lack of automagic, as the subprocess is missing the "Task.init" call.
wdyt, could that be the case?
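If the subprocess does turn out to be the problem, one hedged workaround is to keep Task.init in the main process and report the BLEU scores explicitly through the Logger (project/task names and values below are placeholders):
` from clearml import Task

task = Task.init(project_name="translation", task_name="opennmt training")
logger = task.get_logger()

# hypothetical incremental BLEU scores reported manually per evaluation step
for step, bleu in [(1000, 18.2), (2000, 21.7), (3000, 24.1)]:
    logger.report_scalar(title="BLEU", series="validation", value=bleu, iteration=step)

# final test-set score reported at the last iteration
logger.report_scalar(title="BLEU", series="test", value=25.3, iteration=3000) `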

3 years ago
0 Is There A Reason Why All Clearml.Task Methods Regarding Requirements (E.G. Pip Requirements) Are Class Methods? Are Requirements Not Stored In A Task?

Too late for what?

To update the task.requirements before it actually creates it (the requirements are created in a background thread)
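In other words, the class methods exist so they can be called before the task is created. A minimal sketch (the package name is just an example):
` from clearml import Task

# must run before Task.init, otherwise the background requirements
# collection may have already stored the task's packages
Task.add_requirements("scikit-learn")

task = Task.init(project_name="examples", task_name="custom requirements") `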

4 years ago
0 I'M Running A Simple Experiment (One Training Task, Nothing Else) And I'M Getting A Puzzling Message. Any Help Deciphering That Is Appreciated. I'M Pasting Part Of The Warnings Below:

Actually scikit implies joblib 🙂 (so you should use scikit, anyhow I'll make sure we add joblib as it is more explicit)

3 years ago
0 Hey All, I'M Testing The Usage Of

Would that go under arguments?

yes 🙂

Also what is the base path where the git repo is cloned? So if my repo is called myProject.git, what would the full path be?

For example https://github.com/<user>/myProject.git
btw: how come you do not have this field auto-populated from running the code locally or from using the clearml-task CLI?

2 years ago
0 Hi, I Have A Future Roadmap Question On Clearml-Datasets. The Current Implementation Works Well For Small Datasets But It's Rather Ineffective For Very Large Datasets. For Example, Let's Say I Have 10 Million Images Just For The Training Dataset, And My T

SubstantialElk6 I just realized 3 weeks passed, wow!
So the good news is we have some new examples:
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_functions.py
The bad news is the documentation was postponed a bit, as we are still massaging the interface (the community is constantly pushing great ideas and use cases, and they are just too good to miss out on 🙂)...

3 years ago
0 Hi, I Faced With A Silly Error, When I Run The Python Script With Task = Trains.Init(Project_Name='My Project', Task_Name='My Task'). The Task Goes To The Trains Server, But In The Trains Server, In Installed Packages Part One Of The Line

I think it fails because it tries to install trains twice. Could you remove the trains package and test? I'm also curious, how do you have both installed?!

5 years ago
0 I’m Getting These Errors When Using Agent In Docker Mode

might it be related to the docker socket not being mounted to the agent daemon running inside a docker container?

Oh yes, if the daemon is running inside a docker container then you need both --privileged and mounting of the docker socket to get it to work

4 years ago
0 Hi, I Try To Write An Article On Medium About Clearml And Face A Problem With Plotly Figures. Displaying The Figure Locally In A Browser Works Fine, But On The Clearml Server (I Use The Free Tier Service) The Plot Is Empty And Has The Title 'Unkn

WickedGoat98 Same for me, let me ask the UI guys, I think this is a UI bug.
Also maybe before you post the article we could release a fix to both, what do you think?
EDIT:
Never mind 🙂 I just saw the medium link, very cool!!!

4 years ago
0 My Nth Question For The Day

Is there a way to do this all elegantly?

Oh yes there is, this is how TaskB's code will look:

` from clearml import Task
import torch

task = Task.init(..., 'task b')
param = {'TaskA': "TaskA's ID HERE"}
task.connect(param)
taska_model = Task.get_task(param['TaskA']).models['output'][-1]
model = torch.load(taska_model.get_local_copy())

# train ...

torch.save(model, 'model_b') `
I might have missed something there, but generally speaking this will let you:
Select TaskA as a parameter of TaskB's training process; it will register automagically Task A's...

4 years ago
0 Hi, Together With

JitteryCoyote63 while it's running, could you give me a few details on the setup, maybe I can reproduce it.
Is it using pytorch distributed ?
Are all models uploaded to S3 ?
etc.

5 years ago
0 Hi I Came Across Some Inconsistency In The Iteration Reporting In The Clearml With Pytorch-Lightning When Calling Trainer.Fit Multiple Times, Before I Dive In I Wondered If There Is A Known Issue Related To This?

but the debug samples and monitored performance metric show a different count

Hmm, could you expand on what you are getting, and what you are expecting to get?

4 years ago
0 [Clearml Task Querying] How Would I Find Tasks That Have The Same Code With Different Inputs/Parameters? I’m Interested In “Diff”ing The Inputs To/Outputs From A Task To Do Pipeline “Caching” In A More Intelligent Way (For My Use Case) Than Clearml Does B

Hi ReassuredOwl55

How would I find Tasks that have the same code with different inputs/parameters?

Assuming you have the git repo, you can do:
Task.query_tasks(..., task_filter={'_all_': dict(fields=['script.repository'], pattern='github.com/user/repo')})
wdyt?
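A rough end-to-end sketch of the idea (project name and repository pattern are placeholders): first query the task IDs that share the repository, then "diff" their inputs by comparing the connected parameters:
` from clearml import Task

task_ids = Task.query_tasks(
    project_name="examples",
    task_filter={"_all_": dict(fields=["script.repository"], pattern="github.com/user/repo")},
)

for task_id in task_ids:
    t = Task.get_task(task_id=task_id)
    print(t.name, t.get_parameters())  # dict of the task's input parameters `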

2 years ago
0 How Can I Integrate Trains-Server To Aws Ec2 Api

Hi AstonishingSwan80 , what do you mean by "ec2 API"?

5 years ago
0 "5451Af93E0Bf68A4Ab09F654B222Ccae": { "1B790A3Da2E8D6Cd939Cf271694Fe81B": { "Metric": ":Monitor:Gpu", "Variant": "Gpu_0_Utilization", "Value": 0.0, "Min_Value": 0.0,

Is gpu_0_utilization also in % then?

Correct 🙂

I was trying to find what are those min and max values for the above metrics.

Oh that makes sense. Notice that you can get the values over time, so you can track the usage over the experiment's lifetime (you can of course see it in the Scalars tab of the experiment)
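For example, a small sketch of pulling the utilization values programmatically (the task ID is a placeholder; the metric/variant names follow the ':monitor:gpu' / 'gpu_0_utilization' keys shown above):
` from clearml import Task

task = Task.get_task(task_id="<task id>")
scalars = task.get_reported_scalars()  # {title: {series: {"x": [...], "y": [...]}}}

gpu_util = scalars[":monitor:gpu"]["gpu_0_utilization"]
print(min(gpu_util["y"]), max(gpu_util["y"]))  # utilization range in % over the run `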

2 years ago
0 Regarding The New Version 1.1.2, I Have Noticed Type Hints Are Now Included In The Script Generated By

. However, despite having imported the required types from the typing library in the script where the function decorated with PipelineDecorator.component is defined, later in the generated script the typing library is not imported outside the scope of the function

Actually the typing part is not passed to the "created step", because there are no global imports, for example:
` def step(a: pd.DataFrame):
    import pandas as pd
    ...
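One possible workaround (my assumption, not something confirmed in the thread) is to avoid evaluating the annotation at definition time, e.g. with a string or postponed annotation, so the generated step does not need a module-level pandas import:
` from __future__ import annotations  # or annotate with the string "pd.DataFrame"

def step(a: "pd.DataFrame"):
    import pandas as pd  # imported inside the step, as the component expects
    return a.describe() `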

3 years ago
0 Hello. I Have Several Questions Regarding The Pipeline Components Of Clearml. I Have Read The Docs, But I Still Don't Have A Clear Picture Of The Interplay Between Them. As I Know A Little Bit Better Luigi And Kedro, I Will Try To Explain How Are They Rel

Hi ShinyWhale52
Luigi's approach is basically an extension of a functional DAG, where each node is a single function. Let's think of Kedro as an extension of this approach.
With both, the assumption is that a node is a single function (sometimes it really is) and we just want to create a meta execution path (i.e. the execution DAG, quite similar to TF v1).
ClearML pipelines are a different story (in a way).
The main difference is that with ClearML each node is a Task, not a function. That mean...

4 years ago
0 Hi All, I Am Starting To Use Clearml-Agent. Run It With

Okay we have something 🙂
To your clearml.conf add:
agent.docker_preprocess_bash_script = [
    "su root",
    "cp -f /root/*.conf ~/",
]
Let's see if that works

4 years ago
0 When Running In

the use case I have is to allow people from my team to run their workloads on a set of servers without stepping over each other...

So does that mean CPU-only workloads?
Also, are we worried about fairness? (i.e. someone "taking" all the CPU for themselves)

5 years ago
0 Hey, Great Product! I've Installed Trains Agent On A Python3 Venv, But When I Run A Script On The Worker, It Calls Python2 Instead Of Python 3. How To Change It?

Hi VivaciousWalrus99
Could you attach the log of the run ?
By default it will use the python it is running with.
Any chance the original experiment was executed with python2 ?

4 years ago
0 Does K8S Glue Support Running Service Agent? Slightly Confused Here

Do you mean to spin up a pod with the agent inside it (daemon in services mode)?
Or to connect the services queue to the k8s cluster (i.e. define a pod template that uses CPU with not a lot of RAM)?

4 years ago