AgitatedDove14
Moderator

Hi, Can We Search Tasks Using Wildcard In The Webapp. Say I Have Task Names

Yes. Actually, the first step would be a toggle button for regexp in the search; the second would be an even more advanced search.
May I suggest you post it on the UI suggestion issue https://github.com/allegroai/trains/issues/81 ?

3 years ago
When I Run An Experiment (Self Hosted), I Only See Scalars For GPU And System Performance. How Do I See Additional Scalars? I Have

BoredHedgehog47 you need to make sure "<path here>/train.py" also calls Task.init (again, no need to worry about calling it twice with a different project/name).
The Task.init call will make sure the auto-connect works.
BTW: if you use os.fork, then there is no need for the Task.init call; the main difference is that Popen starts a whole new process, and we need to make sure the newly created process is auto-connected as well (i.e. calls Task.init).
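A minimal sketch of the spawned script, assuming it is launched via subprocess.Popen (project/task names are placeholders):

    # train.py -- started in a fresh process via subprocess.Popen
    from clearml import Task

    # Calling Task.init here auto-connects the newly created process;
    # as noted above, calling it "again" in the child process is safe
    task = Task.init(project_name="examples", task_name="training")
    # ... training code; scalars / argparse / TB are now auto-logged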

one year ago
Hello, I'm Using A Virtual Environment Inside My JupyterHub Server Along With ClearML. Whenever I Create Any Task The "Uncommitted Changes" Are The Contents Of

It is deployed on an on-premise, secured network that has no access to the outside world.

Is it password protected or something of that nature?

Perhaps we could find a different solution or workaround, rather than solving a technical issue.

Solving it means allowing the Python code to ask the JupyterLab server for the notebook file.

However, once working with ClearML and using a venv (and not the default python kernel),

Are you saying on your specific setup (i.e. OpenShif...

one year ago
When I Run An Experiment (Self Hosted), I Only See Scalars For GPU And System Performance. How Do I See Additional Scalars? I Have

Maybe before everything else, can you share some background on the rationale for starting a new subprocess?

one year ago
Hi There

set a parameter in that task and enqueue it

how do you do that?

4 years ago
Hi, Trying To Understand Clearml-Session. I Have An Agent Running On A Machine Monitoring A Queue Then I Ran Clearml-Session --Queue Myqueu --Docker Torch-Image. The Clearml Session Ended Up Tunneling Into The Physical Machine That My Agent Is Running

Hi, I was expecting to see the container rather than the actual physical machine.

It is the container; it should tunnel directly into it (or that's how it should be).
SSH port 10022

3 years ago
Hi There

When you clone the Task, it might be before it is done syncing git / packages.
Also, since you are using 0.16, you have to have a section name (Args or General, etc.).
How will Task B use the parameters? (argparse / connecting a dict?)
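A sketch of the clone-and-override flow, assuming argparse-connected values under the "Args" section (project, task, and queue names are placeholders):

    from clearml import Task

    # Clone the template task, override a parameter, and enqueue the clone
    template = Task.get_task(project_name="examples", task_name="task-a")
    cloned = Task.clone(source_task=template, name="task-b")
    cloned.set_parameters({"Args/learning_rate": 0.001})  # note the section prefix
    Task.enqueue(cloned, queue_name="default")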

4 years ago
Hi There

You can use task._wait_for_repo_detection() to wait until the repository & packages are detected.
(If this is something users need, we should probably make it a "public function".)
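For example (a sketch; the leading underscore marks this as an internal method, so it may change):

    from clearml import Task

    task = Task.init(project_name="examples", task_name="demo")
    # Block until the git repository & installed packages are detected,
    # so that cloning immediately afterwards does not miss them
    task._wait_for_repo_detection()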

4 years ago
Hi There

Hmmm, that is odd... based on the reply "'Task' object has no attribute 'hyperparams'", I would assume the API version is lower than 2.9. But you specifically said you see Session.api_version == 2.9; is that correct?

4 years ago
Hi There

JitteryCoyote63 is it the same issue?

4 years ago
Hi There

JitteryCoyote63 try to add the prefix to the parameter name, e.g. instead of "artifact_name" use "Args/artifact_name"
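In code, the prefixed form would look something like this sketch (the task ID and value are placeholders):

    from clearml import Task

    task = Task.get_task(task_id="<task-id>")
    # "Args" is the section name, "artifact_name" the parameter within it
    task.set_parameter("Args/artifact_name", "my_artifact")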

4 years ago
Hi There

Also, we added Task.update_task, a nicer way to change the script section 🙂
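A sketch of using it to edit the script section, assuming the dict layout below (the task ID and branch name are placeholders):

    from clearml import Task

    task = Task.get_task(task_id="<task-id>")
    # Update the execution "script" section of the (draft) task
    task.update_task(task_data={"script": {"branch": "new-branch"}})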

4 years ago
Hi There

JitteryCoyote63 do you have an idea on how I can reproduce it?

4 years ago
Hi, We Have Been Using ClearML In Our Development Environment To Train Our Models And Benchmarking Them. I Was Wondering What Is ClearML's Role In Transition To Production. Two Specific Points, Deployment, And Automated Retraining Pipeline.

Hi SubstantialElk6

Generically, we would 'export' the preprocessing steps, set up an inference server, and then pipe data through the above to get results. How should we achieve this with ClearML?

We are working on integrating the OpenVINO serving and NVIDIA Triton serving engines into ClearML (they will both be available soon)

Automated retraining

In cases of data drift, retraining of models would be necessary. Generically, we pass newly labelled data to fine...

3 years ago
Hello All, I Have A Question Regarding Showing Of Debug Samples Within An On-Prem ClearML Instance. I Am Logging Debug Images Via Tensorboard (Via

I am logging debug images via Tensorboard (via the add_image function), however apparently these debug images are not collected within the fileserver,

ZanyPig66 what do you mean by "not collected to the file server"? Are you saying the TB add_image is not automatically uploading the images? Or that you cannot access the files on your files server?
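For context, a minimal sketch of the auto-upload path under discussion, assuming torch's SummaryWriter (names are placeholders):

    import numpy as np
    from torch.utils.tensorboard import SummaryWriter
    from clearml import Task

    # Task.init hooks TensorBoard, so add_image calls are captured
    # and uploaded as debug samples to the configured files server
    task = Task.init(project_name="examples", task_name="tb-debug-images")
    writer = SummaryWriter()
    img = np.random.randint(0, 255, (3, 64, 64), dtype=np.uint8)  # CHW layout
    writer.add_image("random_sample", img, global_step=0)
    writer.close()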

one year ago
I Have A Local Folder A, And A Dataset B. A:

As a result, I need to do something which copies the files (e.g. cp -r or StorageManager.upload_folder('b', 'a')), but this is expensive

You are saying the copy is just wasteful (but you do have the files locally)?

2 years ago
Hi All! Let's Say I Have Two Functions Decorated With

GiganticTurtle0 in the PipelineDecorator.component, did you pass helper_functions=[...] with references to all the sub-components?
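For reference, a sketch of packaging a helper with a component via helper_functions (function names are placeholders):

    from clearml import PipelineDecorator

    def normalize(values):
        # Helper used inside the component below
        peak = max(values)
        return [v / peak for v in values]

    # helper_functions bundles the helper's code into the standalone
    # component script, so it is importable when run by a remote agent
    @PipelineDecorator.component(helper_functions=[normalize])
    def preprocess(values):
        return normalize(values)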

2 years ago
Hi All! Let's Say I Have Two Functions Decorated With

I think that listing them all would just clutter up the results tab for that pipeline task

Can you share a screenshot so we can better understand the clutter?
Also, "1000 components"?! And not using them? Could you expand on how/why?

2 years ago
Hi, Does Anyone Use MLflow / Weights & Biases /

Apparently it ignores it and replaces everything...

4 years ago
Hi All! Let's Say I Have Two Functions Decorated With

Only those components that are imported in the script where the pipeline is defined would be included in the DAG plot, is that right?

Actually, the way it works currently (and we might change it if there is a better way) is that every time you call PipelineDecorator.component, a new component is stored on the Pipeline Task, which is later translated into the DAG graph and table (the next version will have a very nice UI to display / edit them).
The idea is first to have a representation of the p...
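A minimal sketch of that flow, with placeholder names; each decorated call registers a component on the pipeline Task, and the calls inside the pipeline function define the DAG edges:

    from clearml import PipelineDecorator

    @PipelineDecorator.component()
    def step_one():
        return 42

    @PipelineDecorator.component()
    def step_two(value):
        return value * 2

    @PipelineDecorator.pipeline(name="demo-pipeline", project="examples", version="1.0")
    def run_pipeline():
        a = step_one()
        step_two(a)  # depends on step_one's output -> DAG edge

    if __name__ == "__main__":
        PipelineDecorator.run_locally()  # debug run, no agent needed
        run_pipeline()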

2 years ago
Is It Possible To Perform Debugging Operations With PyCharm Integration Using Remote Session?

is it possible to perform debugging operations with pycharm integration using remote session?

Sure, use clearml-session; it will open an SSH connection to the remote machine, and then you can use PyCharm.

2 years ago
Hello, I Am New To ClearML, I Would Like To Learn More About How ClearML Works On An HPC Cluster Where The Only Way To Get Computational Resources Is Via SLURM:

Hi UnsightlyLion90

from my understanding the agent does the job of SLURM,

That is kind of correct (they overlap in some ways 🙂)

Any guide of how to integrate both of them?

The easiest way is to just add the "Task.init()" call to your code, and use SLURM to schedule the job. This will make sure all jobs are fully logged (this also includes automatically uploading the models, artifact support, etc.).
Full SLURM support (i.e. similar to the k8s glue support), is currently ou...
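A sketch of the script side, assuming it is submitted through SLURM with something like sbatch (project/task names are placeholders):

    # train.py -- scheduled by SLURM, e.g. sbatch --wrap "python train.py"
    from clearml import Task

    # The single Task.init call is enough for the SLURM job to be fully
    # logged (including automatic model upload and artifact support)
    task = Task.init(project_name="hpc-experiments", task_name="slurm-training")
    # ... the rest of the training code stays unchanged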

3 years ago
Hi! Can Someone Show Me An Example Of How

Well, PipelineDecorator actually allows you to do the same thing, with the same abilities, that is clone / modify / enqueue.
(I mean, Pipeline with tasks is also great; I just want to clarify that they have the same capabilities in this respect.)

2 years ago
Hi! Can Someone Show Me An Example Of How

I was wondering what is the use of PipelineController.create_draft if you can't use it to clone and run tasks, as we have seen

I think the initial thought was to allow creating a pipeline from a pipeline programmatically. Then, once you have the "pipeline", you can manually enqueue it and modify it. Think of a pipeline constructing other pipelines in flight based on some logic, then launching them in parallel.
Make sense?
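A sketch of that pattern, building a controller programmatically and storing it as a draft (all names are placeholders):

    from clearml import PipelineController

    # Construct a pipeline in code, then store it as a draft Task
    # instead of launching it right away
    pipe = PipelineController(name="generated-pipeline", project="examples", version="1.0")
    pipe.add_step(name="stage_one",
                  base_task_project="examples",
                  base_task_name="task-a")
    pipe.create_draft()  # the draft can later be modified / enqueued (e.g. from the UI)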

2 years ago