SmugDolphin23
Moderator
0 Questions, 418 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
0 Hello All, I Want To Clarify Something. In The

No need, I think I will review it on Monday

10 months ago
0 Hello All, I Want To Clarify Something. In The

I think we should just have a new parameter

10 months ago
0 Hello All, I Want To Clarify Something. In The
With that said, can I run another thing by you related to this? What do you think about a PR that adds the functionality I originally assumed schedule_function was for? By this I mean: adding a new parameter (this wouldn't change anything about schedule_function or how .add_task() currently behaves) that also takes a function, but the function expects to get a task_id when called. This function is run at runtime (when the task scheduler would normally execute the scheduled task) and use ...
10 months ago
0 Hello All, I Want To Clarify Something. In The

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! That's correct. The job function will run in a separate thread on the machine you are running the scheduler from. That's it. You can create tasks from functions, though, using backend_interface.task.populate.CreateFromFunction.create_task_from_function
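For reference, a minimal sketch of creating a task from a function this way. CreateFromFunction is internal API, so the exact signature may differ between clearml versions, and the project/task names here are made up:

```
from clearml.backend_interface.task.populate import CreateFromFunction

def training_job(batch_size: int = 32) -> None:
    print(f"training with batch_size={batch_size}")

# Creates a draft task that wraps the function; it can then be enqueued
# for an agent instead of running inside the scheduler's thread
task = CreateFromFunction.create_task_from_function(
    training_job,
    project_name="examples",
    task_name="task-from-function",
)
```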

10 months ago
0 I'M A Bit Confused. It Seems Like Something Has Changed With How Clearml Handles Recording Datasets In Tasks. It Used To Be The Case That When I Would Create A Dataset Under A Task, Clearml Would Record The Id Of The Dataset In The Hyperparameters/Datase

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! Note that the Datasets section is created only if you get the dataset with an alias. Are you sure that number_of_datasets_on_remote != 0?
If so, can you provide a short snippet that would help us reproduce? The code you posted looks fine to me, not sure what the problem could be.
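For reference, "getting the dataset with an alias" looks roughly like this (project and dataset names here are placeholders):

```
from clearml import Dataset, Task

task = Task.init(project_name="examples", task_name="uses-dataset")

# Passing alias=... is what makes ClearML record the dataset ID in the
# task's Datasets configuration section
ds = Dataset.get(
    dataset_project="examples",
    dataset_name="my_dataset",
    alias="my_dataset",
)
local_path = ds.get_local_copy()
```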

one year ago
0 Hello All, I’M An Ml Engineer Looking To Transition Our Company To A New Mlops System. Many Of Our Projects Are Currently Built Around Hydra And I’M Attempting To See What I Would Need To Do To Integrate Clearml Into Our Workflow. I’M Fully Aware That You

Hi @<1545216070686609408:profile|EnthusiasticCow4> !

So you can inject new command line args that hydra will recognize.

This is true.

However, if you enable _allow_omegaconf_edit_: True, I think ClearML will "inject" the OmegaConf saved under the configuration object of the prior run, overwriting the overrides

This is also true.
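For context, a minimal sketch of the setup being discussed (file and config names are made up):

```
# train.py - Hydra entry point; ClearML patches Hydra once the task is initialized
import hydra
from omegaconf import DictConfig, OmegaConf
from clearml import Task

@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    Task.init(project_name="examples", task_name="hydra-example")
    # With _allow_omegaconf_edit_: False (the default), a remote rerun re-applies
    # the recorded command-line overrides; with True, the OmegaConf stored in the
    # task's CONFIGURATION section replaces the composed config, overriding them
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```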

one year ago
0 If I Ran A Hyperparemeter Sweep And I Wanted To Create A Graph Where The X-Axis Was One Of The Hyperparameters, Let'S Say The Momentum Term Of The Optimizer, And I Wanted To Plot That Vs The Min-Loss Over All Epochs, Is There A Good Way To Do This With Cl

@<1545216070686609408:profile|EnthusiasticCow4> yes, that's true. I would aggregate the tasks by tags (the steps will be tagged with opt: ID), then get the metrics to get the losses, and look into the tasks' config to get the term you wanted to optimize [None](https://clear.ml/docs/latest/docs/references/sdk/task/#get_last...
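A rough sketch of that aggregation; the tag, metric, and parameter names below are assumptions (the scrubbed links above pointed to the relevant SDK docs):

```
from clearml import Task

# Fetch the sweep's step tasks by tag (tag value is a placeholder)
tasks = Task.get_tasks(project_name="examples", tags=["opt: <OPTIMIZER_ID>"])

results = []
for t in tasks:
    metrics = t.get_last_scalar_metrics()   # {title: {series: {"last"/"min"/"max": value}}}
    params = t.get_parameters()             # flattened hyperparameter dict
    min_loss = metrics["validation"]["loss"]["min"]   # assumed metric names
    momentum = float(params["General/momentum"])      # assumed parameter name
    results.append((momentum, min_loss))
```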

one year ago
0 Hello :wave: ! I am trying to leverage the `retry_on_failure` with a `PipelineController` (using functions aka `add_function_step` ) to update my step parameters for the next retry. My understanding is that the step (init with `function_kwargs`) use a pic

Hi @<1558986821491232768:profile|FunnyAlligator17> ! There are a few things you should consider:

  • Artifacts are not necessarily pickles. The objects you upload as artifacts can be serialized in a variety of ways. Our artifacts manager handles both serialization and deserialization. Because of this, you should not pickle the objects yourself, but specify artifact_object as being the object itself.
  • To get the deserialized artifact, just call task.artifacts[name].get() (not get_local...
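A minimal sketch of both points (names are placeholders):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact-example")

params = {"retry": 1, "lr": 0.01}
# Pass the object itself; ClearML chooses the serialization, so no manual pickling
task.upload_artifact(name="step_params", artifact_object=params)

# Later (e.g. inside the retry_on_failure callback), a single call fetches
# and deserializes the artifact
restored = Task.get_task(task_id=task.id).artifacts["step_params"].get()
```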
one year ago
0 Hey, I Have Pipeline From Code, But Have Problem With Caching. Actually Clearml Didn'T Cache Already Executed Steps (Tried To Re-Run Pipeline From Web-Ui). Did I Miss Something?

@<1702492411105644544:profile|YummyGrasshopper29> you could try adding the directory you are starting the pipeline from to the Python path. Then you would run the pipeline like this:

 PYTHONPATH="${PYTHONPATH}:/path/to/pipeline_dir" python my_pipeline.py
5 months ago
0 Hi Community, I Might Have A Misunderstanding Of The Use Of Task.Connect Method. It Seems Like The Object I Connect Is Immutable, While It Should Be Mutable.

Hi @<1523703961872240640:profile|CrookedWalrus33> ! The way connect works by default is:
While running locally, all the values (and value changes) of a connected object are sent to the backend.
While running remotely (in your case here), all the values sent in the local run are fetched from the backend and the connected dictionary is populated with these values. The values are read-only; changing them will not have any effect.
To avoid this behaviour, you could use the `ignore_remote_override...
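A minimal sketch, assuming the full parameter name is ignore_remote_overrides (check that your clearml version supports it):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="connect-example")

config = {"lr": 0.01, "epochs": 10}
# Normally a remote run overwrites these values with the ones stored in the
# backend; ignore_remote_overrides=True keeps the local values mutable instead
config = task.connect(config, name="my_config", ignore_remote_overrides=True)
```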

10 months ago
0 Hi, I Am Trying To Use

Hi DrabOwl94 ! Looks like this is a bug. Strange that no one found it until now. Anyway, you can just add a --params-override at the end of the command line and it should work (along with --max-iteration-per-job <YOUR_INT> and --total-max-job <YOUR_INT>, as Optuna requires these). We will fix this one in the next patch.
Also, could you please open a Github issue? It should contain your command line and this error.
Thank you
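Assuming the command in question is the clearml-param-search CLI (the truncated title above doesn't say), the fixed invocation might look something like the following; the JSON shape for --params-override and all values are assumptions:

```
clearml-param-search ... \
    --params-override '{"name": "General/epochs", "value": 5}' \
    --max-iteration-per-job 100 \
    --total-max-job 10
```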

2 years ago
0 I Have An Issue Using Clearml Integrations With Kerastuner. I Have Followed The Guide In

Hi @<1581454875005292544:profile|SuccessfulOtter28> ! The logger is likely outdated. Can you please open a Github issue about it?

7 months ago
0 Hey All, Hope You'Re Having A Great Day, Having An Unexpected Behavior With A Training Task Of A Yolov5 Model On My Pipeline, I Specified A Task In My Training Component Like This:

FierceHamster54
initing the task before the execution of the file like in my snippet is not sufficient?

It is not, because os.system spawns a whole different process than the one you initialized your task in, so no patching is done on the framework you are using. Child processes need to call Task.init because of this, unless they were forked, in which case the patching is already done.

But the training.py already has a ClearML task created under the hood since its integratio...
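A minimal sketch of the fix (file and project names are made up):

```
# pipeline_component.py - parent process
import os
from clearml import Task

task = Task.init(project_name="examples", task_name="parent")
# os.system starts a brand new process, so the parent's framework
# patching does not carry over to it
os.system("python training.py")

# training.py - the child process must attach its own task
from clearml import Task

task = Task.init(project_name="examples", task_name="training")
# framework calls made from here on are patched and logged
```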

2 years ago
0 Hi! Pipelinecontroller Has Method:

Hi @<1523701240951738368:profile|RoundMosquito25> ! Yes, you should be able to do that

one year ago
0 Https://Clearml.Slack.Com/Archives/Ctk20V944/P1713357955958089

@<1523701949617147904:profile|PricklyRaven28> Can you please try clearml==1.16.2rc0 ? We have released a fix that will hopefully solve your problem

7 months ago
0 If I Ran A Hyperparemeter Sweep And I Wanted To Create A Graph Where The X-Axis Was One Of The Hyperparameters, Let'S Say The Momentum Term Of The Optimizer, And I Wanted To Plot That Vs The Min-Loss Over All Epochs, Is There A Good Way To Do This With Cl

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! Can't you just get the values of the hyperparameters and the losses, then plot them with something like matplotlib, and report the plot to ClearML?
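Something like this (the values and series names are placeholders):

```
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="sweep-summary")

# momentum / min-loss pairs gathered from the sweep tasks
momentums = [0.5, 0.9, 0.99]
min_losses = [0.31, 0.22, 0.27]

fig = plt.figure()
plt.scatter(momentums, min_losses)
plt.xlabel("momentum")
plt.ylabel("min loss over all epochs")

task.get_logger().report_matplotlib_figure(
    title="Sweep summary", series="momentum vs min loss", figure=fig, iteration=0
)
```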

one year ago
0 Hello, Im Having Huge Performance Issues On Large Clearml Datasets How Can I Link To Parent Dataset Without Parent Dataset Files. I Want To Create A Smaller Subset Of Parent Dataset, Like 5% Of It. To Achieve This, I Have To Call Remove_Files() To 60K It

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! Looks like remove_files doesn't support lists indeed. It does support paths with wildcards, though, if that helps.
As a workaround for now, I would remove all the files from the dataset and add back only the ones you need, or just create a new dataset.
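For the wildcard route, a minimal sketch (the dataset ID and paths are placeholders):

```
from clearml import Dataset

# New dataset version referencing the parent; removals only affect this version
ds = Dataset.create(
    dataset_project="examples",
    dataset_name="subset",
    parent_datasets=["<PARENT_DATASET_ID>"],
)
# One wildcard call instead of tens of thousands of individual removals
ds.remove_files(dataset_path="data/unneeded_*", recursive=True)
ds.upload()
ds.finalize()
```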

7 months ago