SmugDolphin23
Moderator
0 Questions, 433 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation

0
0 Hi clearmlers, I'm trying to create a dataset with tagged batches of data. I first create an empty dataset with dataset_name='name_dataset', and then create another tagged dataset with the first batch and with parent_datasets=['name_dataset']. It's …

Hi @<1668427950573228032:profile|TeenyShells80> , the parent_datasets should be a list of dataset IDs or clearml.Dataset objects, not dataset names. Maybe that is the issue
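A minimal sketch of the suggestion above, assuming the standard clearml SDK (all dataset/project names are placeholders): resolve the parent by name first, then pass the resulting object (or its .id) in parent_datasets.

```python
def create_child_dataset(parent_name: str, parent_project: str, batch_path: str):
    """Sketch: resolve the parent dataset by name, then pass the returned
    object (or its .id) via parent_datasets -- not the name string.
    Requires a configured ClearML server, so it is defined but not invoked."""
    from clearml import Dataset  # lazy import; requires the clearml SDK

    parent = Dataset.get(dataset_name=parent_name, dataset_project=parent_project)
    child = Dataset.create(
        dataset_name=parent_name + "_batch",  # placeholder child name
        dataset_project=parent_project,
        parent_datasets=[parent],             # Dataset object, or [parent.id]
    )
    child.add_files(batch_path)
    return child
```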

one year ago
0 Hi all, we have set nginx in front of ClearML and signed it with our own self-signed certs. I'm trying to modify the …

Have you tried copying the certificate to /usr/local/share/ca-certificates/?

one year ago
0 I've noticed a change from ClearML …

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! This is actually very weird. Does your pipeline fail when running the first step? What if you run the pipeline via "raw" python (i.e. by doing python3 your_script.py)?

one year ago
0 Hi everyone, I am trying out pipelines with functions. I have a requirements.txt in my folder root. When I run my pipeline, the pipeline starts successfully, but when it starts executing the first task, it fails with an error saying clearml not found. Logs show t…

@<1523701304709353472:profile|OddShrimp85> The way to do it is using the packages argument. Maybe clearml couldn't find the requirements file. What does os.path.exists('./requirements.txt') return?
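A quick stdlib way to test whether the requirements file is actually visible from the process's working directory (the temporary directory below is only there to make the example deterministic):

```python
import os
import tempfile

def requirements_present(path="./requirements.txt"):
    """Return True if the requirements file is reachable from this process.

    When a pipeline step runs remotely, its working directory may differ
    from your project root, so a relative path can silently point nowhere.
    """
    return os.path.exists(path)

# Demonstrate with a temporary directory so the check is reproducible.
with tempfile.TemporaryDirectory() as d:
    req = os.path.join(d, "requirements.txt")
    with open(req, "w") as f:
        f.write("clearml\n")
    print(requirements_present(req))  # True while the file exists
```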

2 years ago
0 Why is async_delete not working?

just append it to None : None in Task.init

one year ago
0 Hello! When I squash multiple datasets (e.g. …

Hi SmallGiraffe94 ! Dataset.squash doesn't set the IDs you specify in dataset_ids as parents. Also, note that the current behaviour of squash is to pull the files from all the datasets into a temp folder and re-upload them. How about creating a new dataset with id1, id2, id3 as parents instead: Dataset.create(..., parent_datasets=[id1, id2, id3])? Would this fit your use case?
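The alternative above can be sketched as follows (assuming the standard clearml SDK; the dataset/project names are placeholders). Children reference their parents' files, so nothing is re-uploaded the way squash does.

```python
def merge_datasets(parent_ids, project, name="merged_dataset"):
    """Sketch: instead of squashing (which pulls and re-uploads every file),
    create a new dataset whose parents are the existing datasets, so their
    files are inherited by reference. Requires a configured ClearML server,
    so it is defined but not invoked here."""
    from clearml import Dataset  # lazy import; requires the clearml SDK

    return Dataset.create(
        dataset_name=name,                 # placeholder name
        dataset_project=project,
        parent_datasets=list(parent_ids),  # e.g. [id1, id2, id3]
    )
```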

3 years ago
0 Hi, I'm running …

OutrageousSheep60 1.8.4rc1 is out. Can you please try it? pip install -U clearml==1.8.4rc1

2 years ago
0 Hello everyone! I can't connect ClearML with Yandex Storage S3. I have an error with keys and permissions (see the screenshots), but I can upload model weights to Yandex Storage S3 without ClearML. Maybe I have problems with my config? Could you help me, p…

Hi @<1675675705284759552:profile|NonsensicalAnt77> ! How are you uploading the model weights without using the SDK? Can you please share a code snippet (it might be useful in finding out why your config doesn't work)? Also, what is your clearml version?

one year ago
0 Hello everyone, while calling get_local_copy of the dataset from the fileserver, I get the path to the local copy, but the files are not downloaded and the folder is empty. Tell me what could be the problem. I don't get any additional errors or warnings.

Hi @<1524560082761682944:profile|MammothParrot39> ! A few thoughts:
You likely know this, but the files may be downloaded to something like /home/user/.clearml/cache/storage_manager/datasets/ds_e0833955ded140a69b4c9c9d8e84986c . Since .clearml is a hidden directory, a file explorer may not show it unless hidden files are enabled.

If that is not the issue: are you able to download some other datasets, such as our UrbanSounds example? I'm wondering if the problem only happens fo...
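A small stdlib helper to peek into the hidden cache directory mentioned above (the path is the usual default location; adjust if your sdk.storage.cache settings differ):

```python
import os

def list_dataset_cache(root="~/.clearml/cache/storage_manager/datasets"):
    """List locally cached dataset folders; returns [] if the cache is absent.

    The leading dot makes ~/.clearml hidden, so enable "show hidden files"
    in your file explorer (or use the terminal) to inspect it.
    """
    path = os.path.expanduser(root)
    if not os.path.isdir(path):
        return []
    return sorted(os.listdir(path))

print(list_dataset_cache())
```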

2 years ago
0 Hi everyone, I have a question about using …

Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! The rc is now out and installable via pip install clearml==1.14.1rc0

one year ago
0 Hi. I have a job that processes images and creates ~5 GB of processed image files (lots of small ones). At the end, it creates a …

PanickyMoth78 there is no env var for sdk.google.storage.pool_connections/pool_maxsize. We will likely add these env vars in a future release.
Yes, setting max_workers to 1 would not make a difference. The docs look a bit off, but it is specified that 1: if the upload destination is a cloud provider ('s3', 'gs', 'azure').
I'm now thinking that the memory issue might also be caused by the fact that we prepare the zips in the background. Maybe a higher max_workers wou...
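For reference, until such env vars exist, those pool sizes can be set in clearml.conf under the keys named above; a sketch, with illustrative values:

```
sdk {
    google.storage {
        pool_connections: 512
        pool_maxsize: 1024
    }
}
```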

2 years ago
0 Hi All

Hi @<1523701523954012160:profile|ShallowCormorant89> ! This is not really supported, but you could use continue_on_fail to make sure you get to your last step.

2 years ago
0 Hello everyone! I am using ClearML to manage my model training for a thesis project. I am currently at the stage of hyper-parameter tuning my YOLOv5u model and am testing out the …

Hi @<1691620883078057984:profile|ConfusedSeaanemone5> ! Those are the only 3 charts that the HPO constructs and reports. You could construct other charts/plots yourself and report them when a job completes using the job_completed_callback parameter.

one year ago
0 Hi! I'm running launch_multi_mode with pytorch-lightning

You could also try using gloo as the backend (it uses CPU) just to check that the subprocesses spawn properly.
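A hedged sketch of that check, assuming PyTorch is available (the rendezvous address/port are placeholders; a real launcher usually sets them):

```python
import os

def init_cpu_process_group(rank: int, world_size: int):
    """Sketch: bring up torch.distributed with the CPU-only "gloo" backend
    to verify that subprocesses spawn and rendezvous correctly, independently
    of any GPU/NCCL setup. Requires torch, so it is defined but not invoked."""
    import torch.distributed as dist  # lazy import; requires PyTorch

    # Rendezvous settings; placeholders for illustration only.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)
```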

one year ago
0 I’m trying to understand the execution flow of pipelines when translating from local to remote execution. I’ve defined a pipeline using the …

If the task is running remotely and the parameters are populated, then the local run parameters will not be used; instead, the parameters that are already on the task will be used. This is because we want to allow users to change these parameters in the UI if they want to, so the parameters in the code are ignored in favor of the ones in the UI.
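The precedence rule can be sketched in plain Python (a hypothetical helper, not part of the SDK):

```python
def effective_parameters(code_params, task_params, running_remotely):
    """Return the parameters a run would actually use: when executing
    remotely, the values already stored on the task (possibly edited in
    the UI) win; when executing locally, the values from the code win."""
    if running_remotely and task_params:
        return dict(task_params)
    return dict(code_params)

# Remote run: the value on the task overrides the code default.
print(effective_parameters({"lr": 0.1}, {"lr": 0.01}, running_remotely=True))  # {'lr': 0.01}
```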

one year ago
0 When I run an experiment (self-hosted), I only see scalars for GPU and system performance. How do I see additional scalars? I have …

Hi BoredHedgehog47 ! We tried to reproduce this but failed. What we tried is running the attached main.py, which spawns sub.py via Popen.
Can you please run main.py as well and tell us if you still encounter the bug? If not, is there anything else you can think of that could trigger this bug besides creating a subprocess?
Thank you!

3 years ago
0 I get these warnings whenever I run pipelines, and I have no idea what they mean or where they come from:

Hi @<1694157594333024256:profile|DisturbedParrot38> ! We weren't able to reproduce, but you could find the source of the warning by appending the following code at the top of your script:

import traceback
import warnings
import sys

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    # print the full stack so the warning's origin is visible
    log = file if hasattr(file, 'write') else sys.stderr
    traceback.print_stack(file=log)
    log.write(warnings.formatwarning(message, category, filename, lineno, line))

# route all warnings through the handler above
warnings.showwarning = warn_with_traceback
one year ago
0 Hi! I'm running launch_multi_mode with pytorch-lightning

Does it work when running this without ClearML? @<1578555761724755968:profile|GrievingKoala83>

one year ago