AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Hi, Is There Any Way To Get Experiment Debug Images Programmatically?

Okay, verified, it won't work with the demo server. Give me a minute 🙂

4 years ago
0 Hi, Is There Any Way To Get Experiment Debug Images Programmatically?

That said, it might be a different backend, I'll test with the demo server

4 years ago
0 Hi, I Have Several Long Running Experiments Failing With

That makes total sense, this is exactly an OS scenario for signal 9 🙂

3 years ago
0 Hi. Inside A Notebook When I Cerate A New Clearml Task And Then Run Sklearn Gridsearchcv , Clearml Uploads A Lot Of Model. Is There A Way To Force Clearml Not To Upload These Models? Related Question Is What Are These Models Anyway? Their Name Only Contai

Is there a way to force clearml not to upload these models?

DistressedGoat23 is it uploading the models or just registering them? To disable both, set auto_connect_frameworks: https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk#automatic-logging

Their name only contain the task name and some unique id so how can i know to which exact training

You mean the models or the experiments being created?
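For reference, a minimal sketch of disabling the framework auto-logging via auto_connect_frameworks as suggested above; the project/task names are placeholders, and the dict keys follow the Task.init documentation:

from clearml import Task

# auto_connect_frameworks=False disables all framework auto-logging,
# so GridSearchCV fits will no longer register/upload models automatically.
task = Task.init(
    project_name="examples",            # placeholder project name
    task_name="gridsearch-no-models",   # placeholder task name
    auto_connect_frameworks=False,
)

# A finer-grained alternative is passing a dict, e.g. disabling only the
# scikit-learn / joblib bindings (key names per the Task.init docs):
# task = Task.init(..., auto_connect_frameworks={"scikit": False, "joblib": False})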

one year ago
0 Hello! Question About

Hi @<1547028116780617728:profile|TimelyRabbit96>

Trying to do model inference on a video, so first step in

Preprocess

class is to extract frames.

Basically this depends on the REST API; usually you would be sending a link to the data to be processed and returned synchronously.
What you should have is a custom endpoint doing the extraction, which sends the raw data into another endpoint doing the model inference, basically think "pipeline" endpoints:
[None](https://github.com/allegro...
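For illustration only, a rough sketch of the first, frame-extraction endpoint, assuming the custom-engine Preprocess interface from the clearml-serving examples; the request layout ({"video_url": ...}) and the use of OpenCV are assumptions of this sketch:

# Illustrative sketch only: method names follow the custom-engine examples in the
# clearml-serving repository; the request field and OpenCV usage are assumptions.
from typing import Any, Callable, Optional

import cv2          # assumes opencv-python is available in the serving container
import numpy as np


class Preprocess(object):
    def process(
        self,
        data: Any,
        state: dict,
        collect_custom_statistics_fn: Optional[Callable] = None,
    ) -> Any:
        # The client sends a link to the video rather than the raw bytes.
        video_url = data["video_url"]  # assumed request field
        capture = cv2.VideoCapture(video_url)
        frames = []
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            frames.append(frame)
        capture.release()
        # Return the batch of frames; a second, model-inference endpoint would
        # receive these, i.e. the "pipeline" of endpoints mentioned above.
        return np.stack(frames) if frames else []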

one year ago
0 Hi All! I Have A Question About Pipelines. My Pipeline Consists Of Several Steps:

GrotesqueDog77 one issue with this design: in order to run a sub-component, the call must be done from the parent component. Does that make sense?

def step_one(data):
    return data

def step_two(path):
    model = path  # placeholder, a real step would train or load a model here
    return model

def both_steps():
    path = step_one("stuff")
    return step_two(path)

def pipeline():
    both_steps()

Which would make both_steps a component, and step_one and step_two sub-components
wdyt?
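For reference, a hedged sketch of how that structure could look with the PipelineDecorator API; the pipeline/project names are placeholders:

from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(return_values=["model"], cache=False)
def both_steps(stuff="stuff"):
    # step_one and step_two stay plain functions (sub-components) called
    # from inside the parent component, as described above.
    def step_one(data):
        return data

    def step_two(path):
        model = path  # placeholder for real training/loading logic
        return model

    path = step_one(stuff)
    return step_two(path)


@PipelineDecorator.pipeline(name="example-pipeline", project="examples", version="0.1")
def pipeline():
    both_steps()


if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug the pipeline logic in the local process
    pipeline()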

one year ago
0 Hi All! I Have A Question About Pipelines. My Pipeline Consists Of Several Steps:

because step can be constructed with multiple sub-components but not all of them might be added to the UI graph

Just to make sure I fully understand: when we decorate with @sub_node we want that to also appear in the UI graph (and have its own Task / metrics etc.), correct?

one year ago
0 Hi All! I Have A Question About Pipelines. My Pipeline Consists Of Several Steps:

Yes, but I'm not sure that they need to have separate task

Hmm okay, I need to check if this can be easily done
(BTW, the downside of that is that you can only cache a component, not a sub-component)

one year ago
0 Hi All! I Have A Question About Pipelines. My Pipeline Consists Of Several Steps:

Sounds good to me, adding it to the to do list, probably should not be very complicated to add 🙂

one year ago
0 Hi, I'D Like To Know If There Is A Way To Include A Process Like Aws Autoscaler And Its Configurations Inside The Clearml Helm Chart. My Goal Is To Automatically Run The Aws Autoscaler Task On A Clearml-Agent Pod When I Deploy The Clearml Services On The

but I'd prefer to have a new instance deployed for each new experiment and that it also terminates when no new experiments are queued

I'm not objecting, just wondering about the rationale behind the decision 🙂
Back to the AWS autoscaler:
Basically, if you have the services-agent running on your cluster, it will just run the aws-autoscaler for you 🙂
The idea of the services-agent is to run logic/monitoring Tasks such as the aws autoscaler. Notice that services-mode means multiple jobs per...

3 years ago
0 Hi! How Can I Report A Bar Plot? The First Thing That Came To Mind Is Using Plot Histogram But It Supports Providing The Y-Axis Values, In My Case I Also Have X-Axis Values For The Bar Plot (Which Are Strings). How Can This Be Accomplished?

SmarmySeaurchin8
Something like this one:
import numpy as np

# logger is the task's logger, e.g. logger = task.get_logger() of an initialized Task
vector_series = np.random.randint(10, size=10).reshape(2, 5)
logger.report_vector(
    title='vector example', series='vector series', values=vector_series,
    iteration=0, labels=['A', 'B'], xaxis='X axis label', yaxis='Y axis label')

3 years ago
0 I Am Hosting Clearml Server And I Faced Issue With Closing Datasets. For Some Reason Closing Datasets Ends Up With The Word "Killed" For Datasets More Than 2.5Gb (See Screenshot) The Question Is What Is The Reason Of The Issue? How To Upload Datasets Size

Hi SmugLizard24

The question is what is the reason of the issue?

That is a good question, could it be out of memory? (trying to compress or send the file in one chunk?)

3 years ago
0 Hello, I Have Two Questions About Taskscheduler.

Hi ScaryBluewhale66

TaskScheduler I created. The status is still running. Any idea?

The TaskScheduler needs to actually run in order to trigger the jobs (think cron daemon).
Usually it will be executed on the clearml-agent services queue/machine.
Make sense?
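As a reference, a minimal sketch of a TaskScheduler that is then enqueued onto the services queue; the task ID and the schedule are placeholders:

from clearml.automation import TaskScheduler

# Create the scheduler object; it only fires triggers while it is actually running.
scheduler = TaskScheduler()

# Re-launch an existing task (placeholder ID) every day at 03:00 on the "default" queue.
scheduler.add_task(
    schedule_task_id="aabbccdd11223344",  # placeholder task ID
    queue="default",
    hour=3,
    minute=0,
)

# Run the scheduler itself as a long-lived job on the services queue
# (this is the "cron daemon" part mentioned above).
scheduler.start_remotely(queue="services")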

2 years ago
0 Hi Guys Right Now I Prepared My Experiment Located In This Notebook:

It’s the correct way to do it, right?

Yep 🙂 That said, this is not running as a service, so you will need to spin it up on your machine. You can definitely connect it with the free SaaS server, and spin up the serving on your machine with docker-compose.

2 years ago
0 Hi Guys Right Now I Prepared My Experiment Located In This Notebook:

Hi CheekyAnt38

However now I would like to evaluate directly my machine learning model via api requests, directly over clearml. It’s possible?

This basically means serving the model, is this what you mean?

2 years ago
0 Hey Since Hydra Does Not Work With

I see TrickyFox41, try the following:
--args overrides="param=value"
Notice this will change the Args/overrides argument that will be parsed by hydra to override its params

one year ago
0 How Can I Tell Clearml-Agent Not To Run Pip Install Unless My Requierments.Txt File Was Changed. It Seems To Run Pip Install Every Time I Run A Task Although Nothing Have Changed...

@<1577468638728818688:profile|DelightfulArcticwolf22>

How can I tell clearml-agent not to run pip install unless my requierments.txt file was changed.

The agent has a built-in cache, it will reuse the previous venv if nothing changed (the cache is local on the agent's machine).
Make sure this line is not commented out:
None
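For context, this refers to the venvs_cache section in the agent's clearml.conf; a sketch of what that section typically looks like (exact defaults may differ between versions), with the path line uncommented to enable the cache:

agent {
    venvs_cache: {
        # maximum number of cached venvs
        max_entries: 10
        # minimum required free space for a cache entry; disable by passing 0 or a negative value
        free_space_threshold_gb: 2.0
        # uncomment to enable virtual environment caching
        path: ~/.clearml/venvs-cache
    }
}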

one year ago
0 Quick Qn, When Using The Clearml-Task, How To Specify The Output_Uri.

yup, i updated this in my local clearml.conf... Or should be updating this elsewhere as well

On the agent's machine, you should update the default_output_uri. Make sense?
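For reference, a hedged sketch of the corresponding setting in the clearml.conf on the agent's machine; the bucket URL is a placeholder:

sdk {
    development {
        # artifacts/models from tasks executed by this agent will be uploaded here
        default_output_uri: "s3://my-bucket/clearml-artifacts"   # placeholder URL
    }
}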

3 years ago
0 Any Info On The Lifecycle Of Datasets Downloaded To $Home/.Clearml/Cache/Storage_Manager/Datasets Via Get_Local_Copy I Have A Task Running And I Was Watching The Above Path And Datasets Were Being Downloaded And Then They Are All Removed And For A Partic

Hmm, notice that it does store symlinks to parent data versions (to save on multiple copies of the same file). If you call get_mutable_local_copy() you will get a standalone copy
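A minimal example of the difference; the dataset ID and target folder are placeholders:

from clearml import Dataset

ds = Dataset.get(dataset_id="aabbccdd11223344")  # placeholder dataset ID

# Cached, read-only copy; may contain symlinks to parent-version files and
# is subject to the cache lifecycle described above.
cached_path = ds.get_local_copy()

# Standalone, fully materialized copy you own; not subject to cache eviction.
standalone_path = ds.get_mutable_local_copy(target_folder="/data/my_dataset_copy")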

3 years ago