AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Hi

It all depends on how we store the metadata on the performance. You could actually retrieve it from, say, the validation metric and deduce the epoch based on that.

4 years ago
0 Hi

I'd prefer to use config_dict, I think it's cleaner

I'm definitely with you

Good news:

new best_model is saved, add a tag best

Already supported (you just can't see the tag, but it is there :))

My question is, what do you think would be the easiest interface to tell (post/pre) store, tag/mark this model as best so far (btw, obviously if we know it's not good, why do we bother to store it in the first place...)

4 years ago
0 Hello, I Am Trying To Retrieve A Simple Dict Artifact Uploaded In A Previous Task With

JitteryCoyote63 okay... but let me explain a bit so you get a better intuition for next time 🙂
The Task.init call, when running remotely, assumes the Task object already exists in the backend, so it ignores whatever was in the code and uses the data stored on the trains-server, similar to what's happening with Task.connect and the argparser.
This gives you the option of adding/changing the "output_uri" for any Task regardless of the code. In the Execution tab, change the "Output Destina...

4 years ago
0 Hello, I Am Trying To Retrieve A Simple Dict Artifact Uploaded In A Previous Task With

JitteryCoyote63 with pleasure 🙂
BTW: the Ignite TrainsLogger will be fixed soon (I think it's on a branch already by SuccessfulKoala55) to fix the bug ElegantKangaroo44 found. Should be RC next week

4 years ago
0 Hey, I Hope This Is The Right Place To Ask. We're A Small Data Science Team That Wants To Log Everything About Our Ml Models. Looking Around On The Internet, Mostly Mlflow Is Being Recommended, But Occasionally The Name Trains Pop-Ups. According To You,

JitteryCoyote63

I agree that its name is not search-engine friendly,

LOL 😄
It was an internal joke the guys decided to call it "trains" cause you know it trains...
It was unstoppable, we should probably do a line of merchandise with AI 🚆 😉
Anyhow, this one definitely backfired...

4 years ago
0 Hi I'm Trying Out Pipeline Controller From Tasks. I Was Not Able To Understand Why My Code Results In Just One Task (The First One) In The Pipeline.

UpsetBlackbird87
pipeline.start() will launch the pipeline itself on a remote machine (a machine running the services agent).
This is why your pipeline is "stuck": it is not actually running.
When you call start_locally() the pipeline logic itself is running on your machine and the nodes are running on the workers.
Makes sense?

2 years ago
0 Hi There :) Can Anybody Tell Me What The Best Practice Is For Performing A Normalization In The Preprocess.Py Script Used By Clearml-Serving? Currently I Use A Sklearn Minmaxscaler Which Is Loaded And Applied Before And After The Data Is Sent To The Model

Hi @<1526371965655322624:profile|NuttyCamel41>

. I do that because I do not know how to get the pickle file into the docker container

What would the pickle file do?

and load the MinMaxScaler within the script, as the sklearn dependency is missing

what do you mean by that? are you getting an error when loading your model ?

one year ago
0 Hello. Am New To Clearml. I Wish To Know If There Are Clearml Support For Nvidia Tao (Formerly Known As Transfer Learning Toolkit) ? Thank You

My current experience is there is only print out in the console but no training graph

Yes, Nvidia TLT needs to actually use TensorBoard for ClearML to catch it and display it.
I think that in the latest version they added that. TimelyPenguin76 might know more

2 years ago
0 Hi, I Had A Task Successfully Completed. Then I Cloned It And Enqueued It Again Without Any Changes. But The Task Ends Up With An Error. Here's The Logs, Not Sure What Went Wrong.

Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/tmp/build/80754af9/attrs_1604765588209/work'

Seems like pip failed creating a folder.
Could it be you are out of space?
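One quick way to check the out-of-space hypothesis is to look at free space where pip stages its builds. A minimal stdlib sketch (the `/tmp` path is just an example; pip's actual build directory can differ):

```python
import shutil

# Inspect free space on the filesystem pip typically builds under.
# "/tmp" is an example path, not necessarily pip's build dir on your box.
usage = shutil.disk_usage("/tmp")
free_gib = usage.free / 1024**3
print(f"free on /tmp: {free_gib:.1f} GiB")
# Near-zero free space can surface as confusing "No such file or directory"
# errors during package installation.
```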

3 years ago
0 Is There A Way To Set Precedence On Package Managers? If We Set An Agent To Use

UnevenDolphin73 if the repo does not include a poetry file it will revert to pip

2 years ago
0 Is There A Way To Set Precedence On Package Managers? If We Set An Agent To Use

poetry

 stores git related data in ... you get an internal package we have with its version, but no git reference, i.e. 

internal_module==1.2.3

 instead of 

internal_module @H4dr1en

This seems like a bug with poetry (and I think I have run into this one), worth reporting it, no?

2 years ago
0 Is There A Way To Set Precedence On Package Managers? If We Set An Agent To Use

Local changes are applied before installing requirements, right?

correct

2 years ago
0 Is There A Functionality To See The Dependency Structure Of Datasets? Or Has Anyone Written A Script For This?

EnormousWorm79 you mean to get the DAG graph of the Dataset (like you see in the plots section)?

2 years ago
0 Hi I Was Running An Hyperparameter Optimization Task Using The Optuna Optimizer And Even Though The Hyperparameteroptimizer’S Argument Is Set To

Hi UpsetBlackbird87
This is an Optuna decision on how many concurrent tests to run simultaneously.
You limited it to 100, but remember Optuna does a Bayesian optimization process, where it decides on the best set of arguments based on the performance of the previous set; this means it will first try X trials, then decide on the next batch.
That said, you can add a pruner to Optuna specifying how it should start
https://optuna.readthedocs.io/en/v1.4.0/reference/pruners.html#optuna.pruners.Median...
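To build intuition for what a median pruner does, here is a toy sketch in plain Python (this is an illustration of the idea, not the Optuna API): a trial is stopped early when its intermediate score falls below the median of scores earlier trials reported at the same step.

```python
def should_prune(current_score: float, previous_scores: list[float]) -> bool:
    """Toy median-pruning rule: prune when current_score is below the
    median of scores reported by earlier trials at the same step."""
    if not previous_scores:
        return False  # nothing to compare against yet
    ordered = sorted(previous_scores)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        median = ordered[mid]
    else:
        median = (ordered[mid - 1] + ordered[mid]) / 2
    return current_score < median

# Scores earlier trials reported at this step (made-up numbers):
history = [0.60, 0.72, 0.68]
print(should_prune(0.50, history))  # weak trial -> pruned
print(should_prune(0.75, history))  # strong trial -> continues
```

In real Optuna you would pass e.g. `optuna.pruners.MedianPruner()` to the study instead of hand-rolling this logic.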

2 years ago
0 Hi, I Have A Question Regarding The Autoscaler. I Implemented A Custom Driver For Gcp And I Managed To Launch The Clearml.Automation.Auto_Scaler.Autoscaler Which Runs Smoothly (Kudos!!). I Can See Instance Being Created/Destroyed On Demand As Expected. Th

Hi @<1523715429694967808:profile|ThickCrow29>

clearml.automation.auto_scaler.AutoScaler which runs smoothly (kudos!!).

NICE!

The only thing I am missing is the in the clearml dashboard/orchestration --> Is there a way to make it

hmm kind of needs backend support for that 😞

For now, I can just see the log of the clearML task to monitor what’s happening
Or is this restricted to pro users?

Yeah the GCP and AWS autoscalers dashboards are paid tier feature. But...

10 months ago
0 Hello Guys, I Have A Strange Situation With A Pipeline Controller I'M Testing Atm. If I Run The Controller Directly In My Pycharm On Notebook It Connects Correctly To The K8S Cluster With Trains Installed. After This, If I Go Directly In The Ui, I Reset T

My bad, there is a mixture of terms.
"configuration object" is just a dictionary (or plain text) stored on the Task itself.
It has no file representation (well, you could get it dumped to a file, but it is actually stored as a blob of text on the Task itself, on the backend side)
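As a rough illustration of "a blob of text on the Task" (stdlib only; the config keys here are made up for the example, and the real backend storage format is an internal detail):

```python
import json

# Hypothetical configuration dict -- the keys are invented for illustration.
config = {"learning_rate": 0.001, "batch_size": 32, "optimizer": "adam"}

# Conceptually, the backend keeps the configuration object as serialized
# text attached to the Task, not as a file on disk:
blob = json.dumps(config)
restored = json.loads(blob)
print(blob)
```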

3 years ago
0 What Happens To File That Are Downloaded To A Remote_Execution Via Storagemanager? Are They Removed At The End Of The Run, Or Does It Continuously Increases Disk Space?

UnevenDolphin73

we'd like the remote task to be able to spawn new tasks,

Why is this an issue? this should work out of the box ?

2 years ago
0 I Have Code That Does Torch.Load(Path) And Deserializes A Model. I Am Performing This In Package A.B.C, And The Model's Module Is Available In A.B.C.Model Unfortunately, The Model Was Serialized With A Different Module Structure - It Was Originally Pla

Hi RoughTiger69

unfortunately, the model was serialized with a different module structure - it was originally placed in a (root) module called

model

....

Is this like a pickle issue?

Unfortunately, this doesn’t work inside clear.ml since there is some mechanism that overrides the import mechanism using

import_bind

.

__patched_import3

What error are you getting? (meaning why isn't it working)

2 years ago
0 I Have Code That Does Torch.Load(Path) And Deserializes A Model. I Am Performing This In Package A.B.C, And The Model's Module Is Available In A.B.C.Model Unfortunately, The Model Was Serialized With A Different Module Structure - It Was Originally Pla

it is a pickle issue
‘package model doesn’t exist’

Sounds like it, why do you think clearml has anything there ?
BTW:

import_bind

.

__patched_import3

this is just so that packages clearml auto-connects with are patched even if imported after Task.init was called.

2 years ago
0 Hi There, Are There Any Plans To Add Better Documentation/Examples To

hi ElegantCoyote26

but I can't see any documentation or examples about the updates done in version 1.0.0

So actually the docs are only for 1.0... https://clear.ml/docs/latest/docs/clearml_serving/clearml_serving

Hi there, are there any plans to add better documentation/example

Yes, this is work in progress, the first Item on the list is custom model serving example (kind of like this one https://github.com/allegroai/clearml-serving/tree/main/examples/pipeline )

about...

2 years ago
0 What Happens If The Task.Init Doesn't Happen In The Same Py File As The "Data Science" Stuff I Have A List Of Classes That Do The Coding And I Initialise The Task Outside Of Them. Something Like

I am actually saving a dictionary that contains the model as a value (+ training datasets)

How are you specifically doing that? pickle?
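If it is pickle, the round trip looks like this minimal stdlib sketch (the "model" here is a stand-in dict, not a real framework object, and the key names are invented for illustration):

```python
import pickle

# Stand-in for a real model object -- illustration only.
model = {"weights": [0.1, 0.2, 0.3]}

# A dictionary that bundles the model together with its training data.
bundle = {"model": model, "train_data": [1, 2, 3], "val_data": [4, 5]}

# Serialize the whole dictionary in one go...
payload = pickle.dumps(bundle)

# ...and load back the identical structure.
restored = pickle.loads(payload)
assert restored == bundle
```

Note that pickling a real model this way ties the payload to the module structure it was defined in, which is exactly how the "module doesn't exist" deserialization problems above arise.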

2 years ago
0 What Happens To File That Are Downloaded To A Remote_Execution Via Storagemanager? Are They Removed At The End Of The Run, Or Does It Continuously Increases Disk Space?

Honestly, this is all related to issue #340.

makes total sense.
But actually this is different from #340. The feature is to store the data on the Task, which means each Task in your "pipeline" will upload a new copy of the data. No?

I'd suggest some 

task.detach()

 method for remote execution maybe

That is a good idea, in theory it can also be used in local execution

2 years ago