SmugDolphin23
Moderator
0 Questions, 433 Answers
Active since 10 January 2023
Last activity 2 years ago

Reputation

0
0 Hi. I Have A Job That Processes Images And Creates ~5 Gb Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

PanickyMoth78 There is no env var for sdk.google.storage.pool_connections/pool_maxsize . We will likely add these env vars in a future release.
Yes, setting max_workers to 1 would not make a difference. The docs look a bit off, but they do specify that max_workers defaults to 1 if the upload destination is a cloud provider ('s3', 'gs', 'azure') .
I'm thinking now that the memory issue might also be caused by the fact that we prepare the zips in the background. Maybe a higher max_workers wou...
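
For reference, a sketch of where those keys would live in clearml.conf (the values below are illustrative examples, not defaults):

sdk {
    google.storage {
        # hypothetical example values; tune for your workload
        pool_connections: 512
        pool_maxsize: 1024
    }
}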

2 years ago
0 Why Is Async_Delete Not Working?

You might want to prefix both the host in the configuration file and the uri in Task.init / StorageHelper.get with s3. See if the script above works once you do that.
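
For illustration, a sketch of what that might look like (host, port and bucket are placeholders):

from clearml import Task

# hypothetical endpoint/bucket; note the s3:// prefix on the URI
task = Task.init(
    project_name="my_project",
    task_name="my_task",
    output_uri="s3://my-minio-host:9000/my-bucket",
)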

one year ago
0 Why Is Async_Delete Not Working?

just append it to None : None in Task.init

one year ago
0 I’M Trying To Understand The Execution Flow Of Pipelines When Translating From Local To Remote Execution. I’Ve Defined A Pipeline Using The

If the task is running remotely and the parameters are populated, then the local run parameters will not be used; instead, the parameters that are already on the task will be used. This is because we want to allow users to change these parameters in the UI if they want to, so the parameters that are in the code are ignored in favor of the ones in the UI.
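
A minimal sketch of this behavior (parameter names are made up):

from clearml import Task

task = Task.init(project_name="examples", task_name="param demo")

# locally, these defaults are recorded on the task;
# when executed remotely, connect() replaces them with the
# values already stored on the task (e.g. edited in the UI)
params = {"learning_rate": 0.01, "batch_size": 32}
params = task.connect(params)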

one year ago
0 Hi, Bug Report. I Was Trying To Upload Data To S3 Via Clearml.Dataset Interface

Hi NonchalantGiraffe17 ! Thanks for reporting this. It would be easier for us to check if there is something wrong with ClearML if we knew the number and sizes of the files you are trying to upload (content is not relevant). Could you maybe provide those?

3 years ago
0 Hello, I Am Testing My Hidra/Omegaconf With Clearml And I Have A General Question. Why Is It Necessary To Indicate That I Want To Edit The Configuration (Setting

Hi @<1603198134261911552:profile|ColossalReindeer77> ! The usual workflow is that you modify the fields of your remote run in either the Hyperparameters section or the Configuration section, but not usually both (as in Hydra's case). When using CLI tools, people mostly modify the Hyperparameters section, so we chose to set allow_omegaconf_edit to False by default for parity.

2 years ago
0 I Am Using Clearml Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To Clearml. It Shows The Experiment Running (For Days) And It'S Running Fine On The Pc But No Scalers Or Debug Samples Are Shown. How Do We Troubleshoot T

@<1719524641879363584:profile|ThankfulClams64> you could try using the compare function in the UI to compare the experiments from the machine where scalars are not reported properly against experiments from a machine that reports them properly. I suggest then replicating the environment exactly on the problematic machine.

one year ago
0 Hi All, I Am Trying To Get All Pipeline Tasks Or Task Ids From A Specific Project. The Project In The Details Of One Of The Pipeline Tasks Is Defined As

Hi @<1679661969365274624:profile|UnevenSquirrel80> ! Pipeline projects are hidden. You can try to pass task_filter={"search_hidden": True, "_allow_extra_fields_": True} to the query_tasks function to fetch the tasks from hidden projects
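
A quick sketch of that call (the project name is a placeholder):

from clearml import Task

# fetch task IDs from a hidden (pipeline) project
task_ids = Task.query_tasks(
    project_name="my_pipeline_project",
    task_filter={"search_hidden": True, "_allow_extra_fields_": True},
)
print(task_ids)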

one year ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using Clearml Pipeline And I’M Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

@<1626028578648887296:profile|FreshFly37> I see that create_dataset doesn't have a repo set. Can you try setting it manually via the repo , repo_branch and repo_commit arguments in the add_function_step method?
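
A sketch of what that could look like (repo URL, branch and commit are placeholders):

from clearml import PipelineController

def create_dataset():
    # hypothetical step body
    return "dataset_id"

pipe = PipelineController(name="my_pipeline", project="my_project", version="1.0.0")

pipe.add_function_step(
    name="create_dataset",
    function=create_dataset,
    repo="https://github.com/user/my_repo.git",
    repo_branch="main",
    repo_commit="0123abc",
)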

one year ago
0 I Am Currently Training A Yolo Model Using The Yolov5 Framework Within A Container. I Am Using The --Project And --Name Flags During The Training Process, But Unfortunately, The Training Results Are Not Being Sent To The Server. Instead, They Are Being Fo

Hi @<1639074542859063296:profile|StunningSwallow12> !
This happens because the output_uri in Task.init is likely not set.
You could either set the env var CLEARML_DEFAULT_OUTPUT_URI to the file server you want the model to be uploaded to before running train.py, or set sdk.development.default_output_uri: true (or to the file server you want the model to be uploaded to) in your clearml.conf .
Also, you could call Task.init(output_uri=True) in your train.py scri...
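
For illustration, a minimal sketch of that in-code option (project/task names are placeholders):

from clearml import Task

# output_uri=True uploads models to the default file server;
# pass a URI string (e.g. an s3:// bucket) to target other storage
task = Task.init(
    project_name="yolo_training",
    task_name="train",
    output_uri=True,
)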

one year ago
0 Hi, I'M Running

Hi OutrageousSheep60 ! Regarding your questions:
No, it's not. We will have an RC that fixes that ASAP, hopefully by tomorrow.
You can use add_external_files , which you already do. If you wish to upload local files to the bucket, you can specify the output_url of the dataset to point to the bucket you wish to upload the data to. See the parameter here: https://clear.ml/docs/latest/docs/references/sdk/dataset/#upload . Note that you CAN mix external_files and regular files. We don't hav...
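
A sketch of mixing external and local files with a custom upload destination (bucket paths are placeholders):

from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
ds.add_external_files(source_url="s3://source-bucket/data/")  # linked, not uploaded
ds.add_files(path="local_data/")                              # uploaded on upload()
ds.upload(output_url="s3://destination-bucket/datasets")      # hypothetical bucket
ds.finalize()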

2 years ago
0 When I Run An Experiment (Self Hosted), I Only See Scalars For Gpu And System Performance. How Do I See Additional Scalars? I Have

Hi BoredHedgehog47 ! We tried to reproduce this, but failed. What we tried is running the attached main.py , which launches sub.py via Popen .
Can you please run main.py as well and tell us if you still encounter the bug? If not, is there anything else you can think of that could trigger this bug besides creating a subprocess?
Thank you!
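
The attached files aren't preserved in this archive; a minimal sketch of that kind of repro (contents assumed, not the original attachments):

# main.py - assumed shape of the repro: init a task, then spawn a subprocess
import subprocess
import sys

from clearml import Task

task = Task.init(project_name="repro", task_name="popen scalars")
subprocess.Popen([sys.executable, "sub.py"]).wait()

# sub.py - would report scalars from the child process, e.g.:
#   from clearml import Task
#   Task.current_task().get_logger().report_scalar(
#       "loss", "train", value=0.5, iteration=1)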

3 years ago
0 Hi, I Am Running A Script From A Git Repository. In The Repository There Is A Package That I Wrote And I Would Like The Script That I Am Running To Be Able To Import It, Thus I Need To Add The Package Path To Python Path. Repo Structure:

Hi @<1594863230964994048:profile|DangerousBee35> ! This looks like an ok solution, but I would make the package pip-installable and push it to another repo, then add that repo to a requirements file such that the agent can install it. Other than that, I can’t really think of another easy way to use your package

2 years ago
0 Hey All. Wanting To Log

Hi @<1674226153906245632:profile|PreciousCoral74> !

Sadly, Logger.report_matplotlib_figure(…) doesn't seem to log plots. Only the automatic integration appears to behave.

What do you mean by that? report_matplotlib_figure should work. See this example on how to use it: None .
If it still doesn't work for you, could you please share a code snippet that could help us track down...
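
A minimal sketch of reporting a Matplotlib figure manually (titles and series names are placeholders):

import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib demo")

fig = plt.figure()
plt.plot([1, 2, 3], [4, 5, 6])

# explicitly report the figure to the task's plots
task.get_logger().report_matplotlib_figure(
    title="My Plot", series="series A", iteration=0, figure=fig
)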

one year ago
0 Hi, Working With Clearml 1.6.4 What Is The Correct Way To List All The

Hi OutrageousSheep60 . The list_datasets function is currently broken and will be fixed in the next release

3 years ago
0 Hi All, I Have A Question Regarding

@<1634001100262608896:profile|LazyAlligator31> it looks like the args get passed to a python thread, so they should be specified the same way you would pass them to the args argument of a thread (i.e. a tuple of positional arguments): func_args=("something", "else") . It looks like passing kwargs is not directly supported, but you could build a partial :

from functools import partial
scheduler.add_task(schedule_function=partial(clone_enqueue, arg_1="something", arg_2="else")...
one year ago
0 I Have An Environment Error When Running Hpo:

Oh I see, glad you found the problem!

one year ago
0 Hi All

Hi @<1546303293918023680:profile|MiniatureRobin9> ! I think the UI is not aware of tags. Anyway, the repository will likely get checked out to your desired tag. Can you please tell us if that's the case?

2 years ago
0 Seems Like Clearml Tasks In Offline Mode Cannot Be Properly Closed, We Get

That is a clear bug to me. Can you please open a GH issue?

2 years ago
0 Hi Everyone

Hi @<1546303293918023680:profile|MiniatureRobin9> ! When it comes to pipelines built from functions/other tasks, this is not really supported. You could, however, cut the execution short when your step is run, by evaluating the return values from other steps.

Note that you should be able to skip steps if you are using pipelines from decorators, as sketched below.
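
A minimal sketch of conditionally skipping a step in a decorator-based pipeline (names and the condition are made up):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["ok"])
def check_data():
    return True

@PipelineDecorator.component()
def train():
    print("training...")

@PipelineDecorator.pipeline(name="conditional", project="examples", version="1.0.0")
def pipeline_logic():
    # hypothetical condition: skip the training step entirely
    if check_data():
        train()

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    pipeline_logic()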

2 years ago
0 I Configured S3 Storage In My Clearml.Conf File On A Worker Machine. Then I Run Experiment Which Produced A Small Artifact And It Doesn'T Appear In My Cloud Storage. What Am I Doing Wrong? How To Make Artifacts Appear On My S3 Storage? Below Is A Sample O

@<1526734383564722176:profile|BoredBat47> How would you connect with boto3 ? ClearML uses boto3 as well; what it basically does is get the key/secret/region from the conf file and then open a Session with those credentials. Have you tried deleting the region altogether from the conf file?
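
Roughly what that amounts to (a sketch; the credentials are placeholders):

import boto3

# the equivalent of what ClearML does with the values from clearml.conf
session = boto3.session.Session(
    aws_access_key_id="MY_KEY",          # placeholder
    aws_secret_access_key="MY_SECRET",   # placeholder
    region_name=None,                    # try omitting the region, as suggested
)
s3 = session.resource("s3")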

2 years ago
0 Hi! I'M Running Launch_Multi_Mode With Pytorch-Lightning

You could also try using gloo as the backend (it uses the CPU), just to check that the subprocesses spawn properly.
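
A minimal sketch of switching the backend (assumes the usual RANK/WORLD_SIZE env vars are set for env:// initialization):

import torch.distributed as dist

# "gloo" runs on CPU, which helps isolate GPU/NCCL issues
dist.init_process_group(backend="gloo", init_method="env://")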

one year ago