AgitatedDove14
Moderator
48 Questions, 8048 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Hey Guys While Trying To Serve, Following:

Hi @<1544853695869489152:profile|NonchalantOx99>
I would assume the clearml-server configuration / access key is misconfigured in your copy of example.env

one year ago
0 Hi, I Have A Script Running Cross Validation, Basically It Calls 5 Times (5 Folds) Another Script That Does A Training And Evaluation. Is It Possible In Clearml To Have A Main Task (The Complete Cross Validation) And Subtasks (One For Each Fold)?

Nested in the UI is not possible I think?

Yes, but the next version will have nested projects, that's something 🙂

I mean, is it possible to start the subtask while the main task is still active?

You cannot call another Task.init while a main one is running.
But you can call Task.create and log into it, that said the autologging is not supported on the newly created Task.

Maybe the easiest solution is just to do the "sub-tasks" and close them. That means the main Task i...
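
For reference, a minimal sketch of the Task.create pattern described above (project/task names and the reported metric are placeholders; autologging does not apply to the created sub-tasks):

```
from clearml import Task

# Main task: the regular Task.init task for the whole cross-validation run
main_task = Task.init(project_name="cv-example", task_name="cross-validation")

for fold in range(5):
    # Task.create does not replace the current task, so it can coexist
    # with the running main task (no automagic logging on it, though)
    sub_task = Task.create(project_name="cv-example", task_name=f"fold-{fold}")
    sub_task.mark_started()  # move the draft task to "running"
    logger = sub_task.get_logger()
    logger.report_scalar("accuracy", "val", value=0.9, iteration=fold)  # placeholder value
    sub_task.mark_stopped()  # close the sub-task when the fold is done
```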

3 years ago
0 Hey! I Just Finished The Movie

The first pipeline step is calling init

GiddyPeacock64 Is this enough to track all the steps?
I guess my main question is: is every step in the pipeline an actual Task/Job, or is it a single small function?
Kubeflow is great for simple DAGs, but when you need to build more complex logic it is usually a bit limited
(for example, the visibility into what's going on inside each step is missing, so you cannot make a decision based on it).
WDYT?

3 years ago
0 Slack Admins Will Create A

Hi CheerfulGorilla72
see
Notice all posts on that channel are @ channel 🙂

2 years ago
0 Hi, Expanding On

Regarding the limit interface, let me check; I think this is being worked on (i.e. a nicer interface that should be pushed in the next few days). Let me get back to you on this one.

How will imposing an instance limit prevent or allow the --order-fairness feature, for example, which exists when running the clearml-agent version compared to the k8s_glue_example version?

A bit of background on how the glue works:
It pulls jobs from the clearml queue, then it prepares a k8s job, and launches the k8s jobs...

3 years ago
0 Hi, Love What You Guys Did With The New Datasets! I Need Some Help Though. I Assume There Will Be A No-Code Way To Do This, Maybe Not Now But In The Future. But Anyway, I Have Three Different Datasets, And I Want To Create A Merged Version Of All Three Of

but can it NOT use /tmp for this? I'm merging about 100GB

You mean to configure your temp folder for when squashing?
You can hack the following:
```
import tempfile

# redirect Python's default temp folder before squashing
tempfile.tempdir = "/my/new/temp"

# ... Dataset squash ...

# restore the default afterwards
tempfile.tempdir = None
```
But regardless, I think this is worth a GitHub issue with a feature request, to set the temp folder.

2 years ago
0 Hi!

Generally speaking, the agent will convert the repo URL to the auth scheme it is configured with: ssh->http if using user/pass, and http->ssh if using an SSH key.
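
For illustration, a clearml.conf sketch of the settings that drive this conversion (values are placeholders; http->ssh is controlled by force_git_ssh_protocol, ssh->http by configuring user/pass credentials):

```
agent {
    # http->ssh: rewrite git URLs to use the SSH scheme
    force_git_ssh_protocol: true

    # ssh->http: set user/pass (token) credentials instead
    # git_user: "my-git-user"
    # git_pass: "my-git-token"
}
```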

3 years ago
0 Is There Any Examples Of Mounting An Aws Efs Mount To A Self Hosted K8 Agent Deploy?

Curious what advantage it would be to use the StorageManager

Basically, if you set the clearml cache folder to the EFS, users can always do:
```
from clearml import StorageManager

local_file = StorageManager.get_local_copy(" ")
```
where local_file is stored on the persistent cache (EFS) and the cache is automatically cleaned based on the last accessed file.
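
Assuming the setup above, a minimal clearml.conf sketch pointing the cache at the EFS mount (the mount path is a placeholder):

```
sdk {
    storage {
        cache {
            # local cache directory -- put it on the shared EFS mount
            default_base_dir: "/mnt/efs/clearml-cache"
        }
    }
}
```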

one year ago
0 Hi Folks, I Have A Question Related To The Storage Of Artifacts, As It Is Not Entirely Clear To Me Where To Configure It. If I Read The Documentation

but DS in order for models to be uploaded, you still have to set: output_uri=True in the ...

No, if you set the default_output_uri, there is no need to pass output_uri=True in the Task.init() 🙂
It basically sets it for you, makes sense?
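
A minimal sketch of setting it in clearml.conf (the bucket URL is a placeholder):

```
sdk {
    development {
        # models/artifacts from Task.init are uploaded here by default
        default_output_uri: "s3://my-bucket/models"
    }
}
```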

2 years ago
0 Hi I'M Using Clearml Datasets. How Do I Tell From The Clearml Ui Which Datasets Version Am I Using?

Or is this a feature of hyperdatasets and I just mixed them up.

Ohh yes, this is it. Hyper Datasets are part of the UI (i.e. there is a Tab with the HyperDataset query); Dataset usage is currently listed on the Task. Makes sense?

3 years ago
0 Hi All, I Am Testing The New

named as venv_update (I believe it's still in beta). Do you think enabling this parameter significantly helps to build environments faster?

This is deprecated... it was a test to use a package that can update pip venvs, but it was never stable; we will remove it in the next version.

Yes, I guess. Since pipelines are designed to be executed remotely, it may be pointless to enable an output_uri parameter in the PipelineDecorator.componen...

3 years ago
0 Hello Clearml Ppl

Hi SmoggyGoat53
What do you mean by "feature store"? (These days the definition is quite broad, hence my question)

2 years ago
0 Hi, I Had A Task Successfully Completed. Then I Cloned It And Enqueued It Again Without Any Changes. But The Task Ends Up With An Error. Here'S The Logs, Not Sure What Went Wrong.

SubstantialElk6
Regarding cloning the executed Task:
In the pip requirements syntax, "@" is a hint that tells pip where to find the package if it is not preinstalled.
Usually when you find the @ /tmp/folder it means the package was preinstalled (usually preinstalled in the docker).
What is the exact scenario that caused it to appear (this was always the case, before v1 as well)?
For example, the zipp package is installed from PyPI by default and not from a local temp file.
Your fix b...
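
For illustration, a requirements-file sketch of the two forms (package version and path are placeholders):

```
# installed from PyPI by default
zipp==3.8.0

# "@" pins the install source; a local path like this usually means
# the package was already present in the environment / docker image
zipp @ file:///tmp/folder/zipp-3.8.0-py3-none-any.whl
```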

3 years ago
0 I Change

UnevenOstrich23

but interesting that the auto-reload config does not work as I expected.

Unfortunately the trains-agent does not support auto-reloading the config file yet. If you think this would be a great feature, please feel free to open a GitHub feature request issue 🙂

3 years ago
0 Hey! I Have My Custom Model, That Uses Models From Populars Frameworks Inside, Such As Lgbm, Catboost Etc. Also It Have Multiple Instances Of One Models Of One Framework.

EnviousPanda91 'connect' will log the object properties; the automagic logging is controlled in the Task.init call. Specifically, which framework produces metrics that are not logged? Your sample code manually reports some scalars/values, do you see these as well?
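
A minimal sketch of the two mechanisms (names and values are placeholders; auto_connect_frameworks accepts per-framework toggles):

```
from clearml import Task

# Automagic logging is controlled per framework in Task.init
task = Task.init(
    project_name="examples",
    task_name="custom-model",
    auto_connect_frameworks={"lightgbm": True, "catboost": True},
)

# connect() logs the object's properties as an editable configuration
params = {"num_leaves": 31, "depth": 6}
task.connect(params)
```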

2 years ago
0 Hi Everyone, Is It Possible To Show The Upload Progress Of Artificats? E.G. I Use

481.2130692792125 seconds

This is very slow.
It makes no sense; it cannot be the network (this is basically an HTTP POST, and I'm assuming both machines are on the same LAN, correct?)
My guess is the filesystem on the clearml-server... Are you having any other performance issues?
(I'm thinking HD degradation, which could lead to slow write speeds, which would affect the Elastic/Mongo as well)

2 years ago
0 In 1.0.3, I Am Able To Do

Could it be that clone has to be False? (I assume the reasoning is the cloning feature)

3 years ago
0 I Want To Retrieve The Logged Metrics To Be Able To Save The Best Model From My Training. This Is My Step:

Hi SteadyFox10, this one will get all the last metric scalars:
train_logger.get_last_scalar_metrics()
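
A minimal sketch of using it from another script (the task id is a placeholder, and it assumes train_logger above is the Task object):

```
from clearml import Task

task = Task.get_task(task_id="112233")
metrics = task.get_last_scalar_metrics()
# roughly: {title: {series: {"last": ..., "min": ..., "max": ...}}}
print(metrics)
```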

4 years ago
0 Hey

Hi ElegantKangaroo44,
This is basically the average number of experiments running, the number of projects, and the number of users. I think that's about it, nothing like Google Analytics stuff. It is mainly aimed at giving some idea of how large the usage is. Sounds reasonable?

4 years ago
0 After Trying To Execute A Task From The Queue The Agent Fails Installing The Environment:

ERROR: torch-1.12.0+cu102-cp38-cp38-linux_x86_64.whl is not a supported wheel on this platform
TartBear70 could it be you are running on a new Mac M1/M2?

Also quick question, any chance you can test with the latest RC?
pip3 install clearml-agent==1.3.1rc6

2 years ago
0 Hello Again! Also Wanted To Ask About

Hmm
CLEARML_CUSTOM_BUILD_OUTPUT
This might be an enterprise feature, I'm not aware of anything in the open source version

2 years ago
0 Hi! I Was Wondering If It Was Possible To Update A Finished Task. I Wanted To Add An Artifact

Hi MuddySquid7
You can only add reports (scalars, plots, etc.), though not to a published Task.
If you want to add an artifact, this should work:
```
prev_task = Task.get_task(task_id='112233')
prev_task.mark_started(force=True)
prev_task.reload()
prev_task.upload_artifact(..., wait_for_upload=True)
prev_task.mark_stopped(force=True)
```

3 years ago
0 Warning:Root:Could Not Delete Task Id=6Cd7F02Be36C4361965Adf9F027Bcda5, Task Id "6Cd7F02Be36C4361965Adf9F027Bcda5" Could Not Be Found 2021-07-15 20:58:48,046 - Clearml.Task - Error - Action Failed <400/101: Tasks.Get_By_Id/V1.0 (Invalid Task Id: Id=Ff308E

Hi GreasyPenguin14
It looks like you are trying to delete a Task that does not exist
Any chance the cleanup service is misconfigured (i.e. accessing the incorrect server)?

3 years ago
0 Hi All, I Am Trying To Execute Somewhat Custom Hpo Scheme With Clearml. I Would Want That A Single Running Python Script Will Be Able To Sample The Optimizer, Init A Task And Report The Result Multiple Times. I Didn'T Find Anything Similar In The Docs Or

the unclear part is how do I sample another point in the optimization space from the optimizer

Just so I'm clear on the issue: do you want multiple machines to access the internals of the optimizer class? Or do you just want a way to understand what the optimizer's sampling space is (i.e. the parameters and options per parameter)?

3 years ago