SmugDolphin23
Moderator
0 Questions, 415 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
0 Hi, Trying To Report A Matplotlib Figure With

@<1566596968673710080:profile|QuaintRobin7> not for now. Could you please open a GH issue about it? Maybe we can fit this in a future patch.

one year ago
0 Hi! I'm Running Launch_Multi_Mode With Pytorch-Lightning

@<1578555761724755968:profile|GrievingKoala83> what error are you getting when using gloo? Is it the same one?

5 months ago
0 Is There Any Way To Get Value Of Step Parameter/Function Kwarg? This Is From Documentation, But Didn't Manage To Get Value.

Hi @<1702492411105644544:profile|YummyGrasshopper29> ! Parameters can belong to different sections, so you need to prepend the section to some_parameter. You likely want ${step2.parameters.kwargs/some_parameter}.
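
As an illustration, a minimal controller-based sketch; the project, base task names, and the "General/input_value" target below are made up, only the ${step2.parameters.kwargs/some_parameter} reference comes from the answer above:

    from clearml import PipelineController

    pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")

    pipe.add_step(
        name="step2",
        base_task_project="examples",
        base_task_name="step 2 base task",
    )
    pipe.add_step(
        name="step3",
        parents=["step2"],
        base_task_project="examples",
        base_task_name="step 3 base task",
        # reference step2's parameter including its full section path ("kwargs/...")
        parameter_override={"General/input_value": "${step2.parameters.kwargs/some_parameter}"},
    )

    pipe.start_locally(run_pipeline_steps_locally=True)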

3 months ago
0 Hi. I Have A Job That Processes Images And Creates ~5 GB Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

PanickyMoth78 Something is definitely wrong here. The fix doesn't seem to be trivial either... we will prioritize this for the next version.

one year ago
0 https://clearml.slack.com/archives/CTK20V944/p1713357955958089

Hi @<1523701949617147904:profile|PricklyRaven28> ! Thank you for the example. We managed to reproduce. We will investigate further to figure out the issue

6 months ago
0 Hi, I Am Observing A Strange Behaviour When Loading A Dataset's Local Copy.

Hi @<1695969549783928832:profile|ObedientTurkey46> ! You could try increasing sdk.storage.cache.default_cache_manager_size to a very large number
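
For illustration, the override might look roughly like this in clearml.conf; the value and the exact nesting in your file are assumptions:

    # clearml.conf -- raise the local cache size limit (value is illustrative)
    sdk.storage.cache.default_cache_manager_size: 200000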

2 months ago
0 Hi, We Have Recently Upgraded To

OutrageousSheep60 that is correct, each dataset is in a different subproject. That is why bug 2 happens as well.

2 years ago
0 Hi! I'm Running Launch_Multi_Mode With Pytorch-Lightning

You could also try using gloo as the backend (it uses CPU), just to check that the subprocesses spawn properly.
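
A minimal standalone check along those lines, using plain torch.distributed with the CPU-only gloo backend; the address, port, and env-var defaults are illustrative, and in pytorch-lightning the backend is normally selected through its strategy settings instead:

    import os
    import torch.distributed as dist

    # rendezvous settings for the default env:// init method (illustrative values)
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")

    dist.init_process_group(
        backend="gloo",  # CPU-only backend, no NCCL/GPU involved
        rank=int(os.environ.get("RANK", 0)),
        world_size=int(os.environ.get("WORLD_SIZE", 1)),
    )
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} initialized")
    dist.destroy_process_group()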

5 months ago
0 Hello, I Am Testing My Hydra/Omegaconf With Clearml And I Have A General Question. Why Is It Necessary To Indicate That I Want To Edit The Configuration (Setting

Hi @<1603198134261911552:profile|ColossalReindeer77> ! The usual workflow is that you modify the fields of your remote run in either the Hyperparameters section or the Configuration section, but not usually both (as in Hydra's case). When using CLI tools, people mostly modify the Hyperparameters section, so we chose to set allow_omegaconf_edit to False by default for parity.

one year ago
0 Hi, I'm Trying To Upload Output Model Files (Like .pth) To Clearml Server. Assume My

@<1523721697604145152:profile|YummyWhale40> are you able to manually save models from SageMaker using OutputModel?
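
For reference, a rough sketch of registering an already-saved weights file manually with OutputModel; the project, task, and file names are placeholders:

    from clearml import Task, OutputModel

    task = Task.init(project_name="examples", task_name="manual-model-upload")

    # register a locally saved weights file as this task's output model
    output_model = OutputModel(task=task, framework="PyTorch")
    output_model.update_weights(weights_filename="model.pth")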

8 months ago
0 Hi. I Have A Job That Processes Images And Creates ~5 GB Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

Hi PanickyMoth78 ! This will likely not make it into 1.9.0 (this will be the next version we release, most likely before Christmas). We will try to get the fix out in 1.9.1

one year ago
0 Hi, I Am Switching From Wandb To Clearml In My Pytorch Ddp Training Script. With Wandb I Used To Have Worker Nr 1 Handle Logging To Wandb And Initiating The Connection. If I Simply Exchange Wandb Calls With Clearml Calls, Worker Nr 1, Which Handles The Co

That makes sense. You should generally have only one task, initialized in the master process. The other subprocesses will inherit this task, which should speed up the process.
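
A rough sketch of that layout; names are placeholders, and whether the spawned workers actually see the inherited task depends on the ClearML subprocess support described in the answer above:

    import torch.multiprocessing as mp
    from clearml import Task

    def worker(rank, world_size):
        # the subprocess is expected to inherit the task created in the master
        task = Task.current_task()
        if task is not None:
            task.get_logger().report_text(f"rank {rank}/{world_size} reusing parent task")

    if __name__ == "__main__":
        # single Task.init, in the master process only
        Task.init(project_name="examples", task_name="ddp-single-task")
        world_size = 2
        mp.spawn(worker, args=(world_size,), nprocs=world_size)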

8 months ago
0 Hi, I Have Noticed That Dataset Has Started Reporting My Dataset Head As A Txt File In "Debug Samples -> Metric: Tables". Can I Disable It? Thanks!

Hi HandsomeGiraffe70 ! You could try setting dataset.preview.tabular.table_count to 0 in your clearml.conf file
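
Illustratively, the clearml.conf entry might look like this; whether the key needs to sit under the sdk section depends on your file's layout:

    # clearml.conf -- disable the tabular dataset preview tables
    dataset.preview.tabular.table_count: 0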

2 years ago
0 Hi, Bug Report. I Was Trying To Upload Data To S3 Via Clearml.Dataset Interface

Hi NonchalantGiraffe17 ! Thanks for reporting this. It would be easier for us to check if there is something wrong with ClearML if we knew the number and sizes of the files you are trying to upload (content is not relevant). Could you maybe provide those?

2 years ago
0 Hi All, I've Been Experimenting Around With Automating The Data Sync. This Is Related To This Thread:

@<1545216070686609408:profile|EnthusiasticCow4>
This:

            parent = self.clearml_dataset = Dataset.get(
                dataset_name="[LTV] Dataset",
                dataset_project="[LTV] Lifetime Value Model",
            )
            # generate the local dataset
            dataset = Dataset.create(
                dataset_name=f"[LTV] Dataset",
                parent_datasets=[parent],
                dataset_project="[LTV] Lifetime Value Model",
            )

should l...

one year ago
0 Hi, Trying To Report A Matplotlib Figure With

Hi @<1566596968673710080:profile|QuaintRobin7> ! Sometimes, ClearML is not capable of transforming matplotlib plots to plotly, so we report the plot as an image to Debug Samples. Looks like report_interactive=True makes the plot unparsable.
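
If explicit reporting is an option, a hedged workaround is to report the figure yourself with the interactive conversion turned off; the title/series names are placeholders, and report_interactive is assumed to be available on your clearml version:

    import matplotlib.pyplot as plt
    from clearml import Task

    task = Task.init(project_name="examples", task_name="matplotlib-report")

    fig = plt.figure()
    plt.plot([1, 2, 3], [4, 5, 6])

    # skip the matplotlib -> plotly conversion and report the plot as-is
    task.get_logger().report_matplotlib_figure(
        title="my figure",
        series="example",
        figure=fig,
        iteration=0,
        report_interactive=False,
    )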

one year ago
0 Hi Everyone, I'm Using Torch.Distributed For Training On 2 Gpus. It Works, But Each Gpu Creates A New (Duplicated) Task, And I Prefer To Have Only One Clearml Experiment Running. I Looked Here

Hi @<1578918167965601792:profile|DistinctBeetle43> ! This is currently not possible. A different task will be created for each instance

one year ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

UnevenDolphin73 Yes, it makes sense. At the moment, this is not possible. When using use_current_task=True, the task gets attached to the dataset and moved under dataset_project/.datasets/dataset_name. Maybe we could make the task not disappear from its original project in the near future.
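
For context, a minimal sketch of the use_current_task=True flow being described; project, dataset, and file names are placeholders:

    from clearml import Task, Dataset

    task = Task.init(project_name="my_project", task_name="build-dataset")

    # the current task becomes the dataset's task and is moved under
    # "my_project/.datasets/my_dataset", as described above
    dataset = Dataset.create(
        dataset_name="my_dataset",
        dataset_project="my_project",
        use_current_task=True,
    )
    dataset.add_files("data/")
    dataset.upload()
    dataset.finalize()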

2 years ago
0 Hi Everyone, I'm Currently Trying To Add A Csv-File That Is Located In An S3-Bucket To An Existing Clearml Dataset Using The Following Code:

You could consider downgrading to something like 1.7.1 in the meantime, it should work with that version

one year ago
0 Hello! I Can't Seem To Be Able To Stop Clearml From Automatically Logging Model Files (Optimizer, Scheduler). It's A Useful Feature But I'd Like To Have Some Control Over It, So That The Disk Space In My File Storage Isn't Overused. I'm Using

Hi @<1523701345993887744:profile|SillySealion58> ! We allow finer-grained control over model uploads. Please refer to this GH thread for an example of how to achieve that.
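
One commonly used knob, which may or may not be the exact approach from the linked thread, is to narrow the pytorch auto-logging to specific filename patterns at Task.init time; the patterns below are illustrative:

    from clearml import Task

    # only files matching these patterns are auto-uploaded as models;
    # optimizer/scheduler checkpoints with other names are skipped
    task = Task.init(
        project_name="examples",
        task_name="limited-model-logging",
        auto_connect_frameworks={"pytorch": ["model*.pt", "model*.pth"]},
    )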

4 months ago
0 Hi! I'm Running Launch_Multi_Mode With Pytorch-Lightning

@<1578555761724755968:profile|GrievingKoala83> Looks like something inside NCCL now fails, which doesn't allow rank0 to start. Are you running this inside a docker container? What is the output of nvidia-smi inside this container?

5 months ago
0 Hi! I'm Running Launch_Multi_Mode With Pytorch-Lightning

Can you send the full logs of the rank0 and rank1 tasks?

5 months ago
0 Hi, I Know That If You Have A Child Dataset Of A Dataset With Zips, And If The Parent Has Been Cached Locally, The Files In The Zips Would Be Symlinked To The Parent's In

Hi @<1709015393701466112:profile|ScatteredPeacock14> ! I think you are right. We are going to look into fixing this

4 months ago
0 Are There Any Resources On How I Can Implement Hyperparameter Optimisation Using Ray Tune On Clearml?

Hi @<1581454875005292544:profile|SuccessfulOtter28> ! You could take a look at how the HPO was built using Optuna.
Basically: you should create a new class which inherits from SearchStrategy. This class should convert the clearml hyper_parameters to parameters that Ray Tune understands, then create a Tuner and run the Ray Tune hyperparameter optimization.
The function Tuner will optim...
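
A very rough skeleton of that idea; the import path, the constructor pass-through, and the set of methods to override are assumptions, with the Optuna-based implementation being the reference for the real signatures:

    from clearml.automation.optimization import SearchStrategy

    class RayTuneSearchStrategy(SearchStrategy):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # translate the ClearML hyper-parameter definitions passed in
            # into a Ray Tune search space here
            self._ray_search_space = {}

        def start(self):
            # build a ray.tune.Tuner over self._ray_search_space and launch
            # one ClearML job per suggested configuration, feeding the
            # objective metric back to Ray Tune
            raise NotImplementedError("sketch only")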

6 months ago
0 How Does One

Hi @<1654294828365647872:profile|GorgeousShrimp11> ! add_tags is an instance method, so you will need the controller instance to call it. To get the controller instance, you can do PipelineDecorator.get_current_pipeline() then call add_tags on the returned value. So: PipelineDecorator.get_current_pipeline().add_tags(tags=["tag1", "tag2"])
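
Put together, a small decorator-based sketch; the pipeline and component names are placeholders, and only the add_tags call is taken from the answer:

    from clearml import PipelineDecorator

    @PipelineDecorator.component(cache=False)
    def step_one():
        return 42

    @PipelineDecorator.pipeline(name="tagging-example", project="examples", version="1.0.0")
    def my_pipeline():
        # tag the running pipeline controller from inside the pipeline function
        PipelineDecorator.get_current_pipeline().add_tags(tags=["tag1", "tag2"])
        return step_one()

    if __name__ == "__main__":
        PipelineDecorator.run_locally()
        my_pipeline()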

7 months ago
0 Hi Guys, Are There Any Ways To Suppress Clearml's Console Messages? I'm Not Interested In Messages Like This, Especially About Uploading Models. I Tried Some Stuff With Loggers: logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s', level=

Hi @<1715900760333488128:profile|ScaryShrimp33> ! You can set the log level by setting the CLEARML_LOG_LEVEL env var before importing clearml. For example:

import os
os.environ["CLEARML_LOG_LEVEL"] = "ERROR"  # str(logging.CRITICAL), or any other level from the logging module, also works

Note that the ClearML Monitor warning is most likely logged to stdout, in which case this message can't really be suppressed, but model-upload-related messages should be.

4 months ago