AgitatedDove14
Moderator · 48 Questions, 8049 Answers
Active since 10 January 2023 · Last activity 6 months ago
Reputation: 0 · Badges: 25 × Eureka!
0 Hi, I Tried To Provide Docker Image From Pipeline Controller Task To Step Task. Before pipe.add_step(), I Created The Task:

This is definitely a bug, in the super class it should have the same condition (the issue is checking if you are trying to change the "main" task)
Thanks ApprehensiveFox95
I'll make sure we push a fix 🙂

3 years ago
0 Hey - I'm Trying To Compare Voxel Versus ClearML In Image Data Exploration.

Yeah, I think using Voxel for forensics makes sense. What's your use case?

one year ago
0 When I Run An Experiment (Self Hosted), I Only See Scalars For GPU And System Performance. How Do I See Additional Scalars? I Have

BoredHedgehog47 you need to make sure "<path here>/train.py" also calls Task.init (again, no need to worry about calling it twice with a different project/name)
The Task.init call will make sure the auto-connect works.
BTW: if you use os.fork, there is no need for the extra Task.init; the main difference is that Popen starts a whole new process, and we need to make sure the newly created process is auto-connected as well (i.e. by calling Task.init)
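For reference, a minimal sketch of this layout (the script path and project/task names are placeholders):

    # train.py -- the script launched via subprocess.Popen
    from clearml import Task

    # Calling Task.init here is what attaches the new process to ClearML;
    # it is safe even if the parent process already called Task.init
    task = Task.init(project_name="examples", task_name="training")
    # ... training code; argparse/framework auto-connect is now active ...

    # parent script -- spawning train.py as a brand new process
    import subprocess
    subprocess.Popen(["python", "train.py"]).wait()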

one year ago
0 Please Tell Me, Is The Limit Of 10 Copies For Comparison, Is It Ideological Or Can It Be Changed Somehow?

I want to be able to compare scalars of more than 10 experiments, otherwise there is no strong need yet

Makes sense. In the next version, not the one that will be released next week, but the one after with reports (shhh, don't tell anyone 🙂), they tell me this is solved 🎊

one year ago
0 If I Have A Task And A Dataset Is Being Created In A Task, How Can I Get A “Link” That This Dataset Is Created In This Task, Similar To How Model Has The Task Where It Came From

Seems like passing the Task object is not working as expected (I'll make sure it is fixed).
Try:
dataset._task.set_parent(Task.current_task().id)
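For context, a hedged sketch of where this workaround could sit in a script (project/dataset names and the data path are placeholders; note that dataset._task is a private attribute, so this may change between versions):

    from clearml import Dataset, Task

    task = Task.init(project_name="examples", task_name="create dataset")

    dataset = Dataset.create(dataset_project="examples", dataset_name="my dataset")
    dataset.add_files("./data")

    # Workaround: explicitly mark the current Task as the dataset's parent
    dataset._task.set_parent(Task.current_task().id)

    dataset.upload()
    dataset.finalize()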

3 years ago
0 Did Someone Here Already Try The

if I use automatic code analysis it will not find all packages because of importlib.

But you can manually add them with Task.add_requirements, no?
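A small sketch of that approach (the package names are placeholders); note that Task.add_requirements has to be called before Task.init:

    from clearml import Task

    # Packages loaded dynamically via importlib are invisible to the automatic
    # code analysis, so list them explicitly before Task.init
    Task.add_requirements("some_plugin_package")
    Task.add_requirements("another_package", "1.2.3")  # optionally pin a version

    task = Task.init(project_name="examples", task_name="dynamic imports")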

3 years ago
0 Hi

Thanks MagnificentSeaurchin79 , yes that makes it clear.
If that is the case, I think building a container is the easiest solution 🙂
(BTW: You could also build a wheel; if you have a setup.py, then running bdist_wheel once will build the wheel, and then you install that wheel)
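As a rough illustration of the wheel route (a minimal, assumed setup.py; real package metadata will differ):

    # setup.py -- minimal example for packaging a local package as a wheel
    from setuptools import setup, find_packages

    setup(
        name="my_local_package",
        version="0.1.0",
        packages=find_packages(),
    )

    # Build once with:   python setup.py bdist_wheel
    # Then install with: pip install dist/my_local_package-0.1.0-py3-none-any.whl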

3 years ago
0 Found This Placeholder Project On Pypi:

If you mean like Canary? Then yes, but only on the KFServing backend (coming soon), since the engines themselves do not support it (this is basically a "routing" feature)

3 years ago
0 Hi Guys, I Am Having Some Problems About Parsing Arguments To A Task When Running On Agent. I Have A Task A That Creates Another Task B Via Subprocess. Some Arguments Of A And B Share The Same Names. The Problem Is That They're Visible To B Even Though I'

Hi @<1578193384537853952:profile|MoodyOx45>

I have a task A that creates another task B via subprocess.

So the thing about the agent: when it runs the code, there is only One Task to rule them all. Basically, any fork/spawn of a subprocess will automatically be logged as the parent Task.
I think what you want is to build a pipeline from those Tasks? Or create a Task and enqueue it manually directly from Task A?

(btw: you can forcefully cause the subprocess to create its own Task b...
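If the second option is the goal (Task A explicitly creating and enqueuing Task B), a hedged sketch could look like this (the repo URL, script name, and queue name are placeholders):

    from clearml import Task

    # Task A (the currently running task)
    task_a = Task.init(project_name="examples", task_name="task A")

    # Create Task B as a standalone task from a script in a repository,
    # then enqueue it so an agent picks it up, instead of running it as a
    # subprocess that would be logged under Task A
    task_b = Task.create(
        project_name="examples",
        task_name="task B",
        repo="https://github.com/your-org/your-repo.git",
        script="task_b.py",
    )
    Task.enqueue(task_b, queue_name="default")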

9 months ago
0 Hello, Does ClearML Have A Feature Like WandB's Reports? E.G.

Full markdown editing on the project, so you can create your own reports and share them (you can also put links to the experiments themselves inside the markdown). Notice this is not per-experiment reporting (we kind of assumed maintaining a per-experiment report is not realistic)

3 years ago
0 Prev, I Worked With ClearML (1 Year Back) And Back Then, We Configured Seldon Core For The Deployment And ClearML For The Training. Now There Is clearml-serving; Does It And Can It Fulfill A Similar Objective?

Hi DeliciousBluewhale87
This is the latest clearml-serving (stable release at GTC at the end of the month)
https://github.com/allegroai/clearml-serving/tree/dev

Generally speaking, clearml-serving is a control plane, preprocessing, and ML inference, with Nvidia Triton for DL inference (fully transparent).
It allows you to spin up an entire fully dynamic & scalable serving stack on top of a k8s cluster. Once you spin up the base containers, you can configure them live with a CLI; this includes adding new en...

2 years ago
0 Hi, Is There Any Way To Get Experiment Debug Images Programmatically?

That said, it might be a different backend; I'll test with the demo server

4 years ago
0 Hi Again. As I Am Running My Experiment From Server Using Agent, I Am Failing On The Point Where The Arguments Of Argparse Are Processed. When Is The Agent Task Registered? I Am Getting None For Task.current_task() At The Beginning Of My Script.

(i.e. importing the trains package is enough to patch the argparser; the arguments are only logged when you call Task.init, before that they are just stored in memory)
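In other words, something along these lines (project/task names are placeholders):

    import argparse
    from clearml import Task  # importing is enough to patch argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.001)
    args = parser.parse_args()  # values are captured in memory at this point

    # Only here are the captured arguments actually logged to the Task;
    # before this call, Task.current_task() returns None
    task = Task.init(project_name="examples", task_name="argparse example")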

4 years ago
0 Hi, Can You Explain Me What

Hi SharpHedgehog60
Task type is another way to declare the type of processing the Task performs.
Later you can filter based on the Task type (like you would with a Tag).
For example Datasets are always of a Type "data processing"
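A hedged sketch of declaring and then filtering on the Task type (the "type" key in task_filter follows the backend API and is an assumption here):

    from clearml import Task

    # Declare the type when creating the Task ...
    task = Task.init(
        project_name="examples",
        task_name="prepare data",
        task_type=Task.TaskTypes.data_processing,
    )

    # ... and later filter on it, just like you would filter on a tag
    data_tasks = Task.get_tasks(
        project_name="examples",
        task_filter={"type": ["data_processing"]},
    )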

3 years ago
0 Hi, Guys! I'm Trying To Connect ClearML To My Task And Getting Strange Error: After

DepressedChimpanzee34
I might have an idea, based on the log: you are getting LazyCompletionHelp instead of str
Could it be you installed hydra bash completion?
https://github.com/facebookresearch/hydra/blob/3f74e8fced2ae62f2098b701e7fdabc1eed3cbb6/hydra/_internal/utils.py#L483

3 years ago
0 When We Train The Models, We Often Choose Checkpoint Based On The Validation Accuracy, But Test Set Accuracy (Or Specific Class Validation Accuracy) Is Not Necessarily The Best For This Checkpoint. Right Now There Are Options To Add Columns With Max And L

Hi DilapidatedDucks58

eg, we want max validation accuracy and all other metric values for the corresponding epoch

Is this the equivalent of nested sort ?
Wouldn't you get the requested behavior if you add all metric columns but sort based on the "accuracy" column ?

3 years ago
0 Hi, Together With

To be honest, I'm not sure I have a good explanation on why... (unless in some scenarios an exception was thrown and caught silently and caused it)

4 years ago
0 Hi, Together With

I started a last one to confirm!

You mean a second run, just to make sure ?

4 years ago
0 Hi, Together With

JitteryCoyote63 passed ?

4 years ago
0 Hey ClearML Team, We Created An Account, Set Up Our Data Pipeline, And Now We Can't Get Back In. Nothing Is In The Project. Can Someone From Support Reach Out To Help?

@<1545216077846286336:profile|DistraughtSquirrel81> shoot an email to "support@clear.ml" and provide all the information you can on the "lost account" (i.e. the one you had the data on); this means the email account that created it (or your colleagues' emails), and any other information that might help to locate it.

one year ago
0 [Pipeline] Hey, Is It Possible To Specify The Output Uri For Pipelines And Their Components Using Pipeline Decorators? I Would Like To Store Pipeline Artifacts And Component Artifacts On S3.

It also seems that PipelineDecorator.upload_artifact is not compatible with caching, sadly

Both use the exact same mechanism for uploading artifacts (i.e. including caching for downloaded artifacts). In terms of caching pipeline components, this is on a component level (i.e. same code/task and same arguments equals a cache hit).
What exactly are you getting? How is it that PipelineDecorator.upload_artifact uploads to a different storage? Is that reproducible?
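For reference, a minimal sketch of the pattern being discussed (function, artifact, and pipeline names are placeholders, and the upload_artifact signature here mirrors Task.upload_artifact, so treat it as an assumption):

    from clearml import PipelineDecorator

    @PipelineDecorator.component(cache=True, return_values=["result"])
    def produce_stats():
        # Upload an artifact from inside a cached component; on a cache hit the
        # whole component, including this upload, is skipped and the previously
        # registered outputs are reused
        PipelineDecorator.upload_artifact(name="stats", artifact_object={"rows": 100})
        return 42

    @PipelineDecorator.pipeline(name="example pipeline", project="examples", version="1.0")
    def run_pipeline():
        produce_stats()

    if __name__ == "__main__":
        PipelineDecorator.run_locally()  # run components locally for this sketch
        run_pipeline()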

one year ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

Could you post what you see under "installed packages" in the UI ?

3 years ago
0 Is There A Way To

SSH is used to access the actual container; all other communication is tunneled on top of it. What exactly is the reason to bind to 0.0.0.0? Maybe it could be a flag that you set, but I'm not sure what the scenario is and what we are solving. Thoughts?

3 years ago