CostlyOstrich36
Moderator
0 Questions, 4213 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation

0
0 Why It Is Unable To Access The File If I Change It While Rerun

It looks like you're running on different machines and the file your code is looking for is not available on the other machine.

2 years ago
0 Hey, What Exactly Is The Use Of Redis? I Can See That My Redis Is Getting Fuller And Fuller Over Time. Can't Find The Architecture Of ClearML In Docs

Hi ScrawnyLion96 ,

I think it handles some data like worker stats. It's required for the server to run. What do you mean by Redis getting fuller and fuller?

3 years ago
0 Is There A Way To Serve A Model Via The Sdk Or Rest Api? I Want To Programmatically Serve The Model After Finishing Training It Via The Pipeline. Or Is It A Bad Practice To Do So, Hence Why It's Not Really Exposed?

Hi @<1567321739677929472:profile|StoutGorilla30> , this is a good question. I would assume that the CLI tool uses API calls under the hood. I think you can either look at the code and see what is being sent, or simply do CLI commands from the code.

2 years ago
0 Hi, I Am Trying To Pull Api Data From /Tasks.Get_All Endpoint

Is it your own server installation or are you using the SaaS?

3 years ago
0 Hello, I'M Using

Hi VirtuousFish83 ,

You can do it using the API directly; tasks.get_all is what you're looking for:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
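As a rough sketch (the field names follow the linked reference; the endpoint URL and any filters are placeholders you would substitute for your own), the request body for tasks.get_all might be assembled like this:

```python
import json

def build_get_all_payload(project_ids=None, status=None, page=0, page_size=50):
    """Assemble a JSON body for a POST to the tasks.get_all endpoint."""
    payload = {"page": page, "page_size": page_size}
    if project_ids:
        payload["project"] = project_ids   # list of project IDs to filter on
    if status:
        payload["status"] = status         # e.g. ["completed", "failed"]
    return payload

body = build_get_all_payload(status=["completed"])
# POST it with your HTTP client of choice, authenticated with your API credentials, e.g.:
# requests.post(f"{SERVER_URL}/tasks.get_all", json=body, auth=(ACCESS_KEY, SECRET_KEY))
print(json.dumps(body))
```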

3 years ago
0 If Possible, I Would Like To Altogether Prevent The Fileserver And Write Everything To S3 (Without Needing Every User To Change Their Config)

In the UI, check under the EXECUTION tab in the experiment view and scroll to the bottom - you will have a field called "OUTPUT". What is in there? Select an experiment that is giving you trouble.

3 years ago
0 Hi! We Use Clearml Self-Hosted On A K8S Cluster. Great Work Btw

Hi @<1649221394904387584:profile|RattySparrow90> , events and console logs are logged to elastic so they can be fetched. Debug samples are also events, so they are saved to elastic (a link to the debug sample is saved in the event itself).

It is suggested to keep a dedicated Elasticsearch instance for ClearML.

one year ago
0 Hello, Clearml.Task Has A Great Ability To Save Environment State Like Packages, Uncommitted Changes Etc. How Can I Get The Same Effect When Creating Clearml.Dataset? It Doesn't Save The Package List. When Looking At Dataset Page, I See The Dataset Upload Task Bu

Hi @<1590514584836378624:profile|AmiableSeaturtle81> , that's an interesting point. Please open a GitHub feature request for this. To circumvent this, you can add Task.init to that code as well.
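A minimal sketch of that workaround, assuming the clearml package is installed (project and dataset names here are purely illustrative):

```python
def create_dataset_with_env_capture(files_dir):
    """Create a dataset while also recording environment state via Task.init."""
    from clearml import Task, Dataset  # assumes `clearml` is installed

    # Task.init is what captures packages / uncommitted changes, etc.
    task = Task.init(project_name="datasets", task_name="dataset upload")
    ds = Dataset.create(dataset_name="my-dataset", dataset_project="datasets")
    ds.add_files(files_dir)
    ds.upload()
    ds.finalize()
    task.close()
    return ds.id
```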

one year ago
0 Hi All, I Was Trying To Reduce The Amount Of Logs Shown In The Console Produced By Tqdm, So I Set

Hi @<1570220858075516928:profile|SlipperySheep79> , you need to apply the same setting on the machine that is running the agent. clearml.conf files are local and apply the settings only on the machine they're sitting on. In the Scale/Enterprise licenses there are configuration vaults that take care of this.

2 years ago
0 Hi All, I'M Trying To Create Pipeline With Decorator. When I Run The Code To Push The Pipeline To Clearml Server, It Seems The Pipeline Is Pending In "Services" Queue. But My Worker Is Listening To "Default" Queue, So The Pipeline Is Not Executed. I Read

Hi PunyWoodpecker71 ,

It's best to run the pipeline controller in the services queue because the assumption is that the controller doesn't require much compute power, as opposed to steps, which can be resource intensive (depends on the pipeline, of course).

3 years ago
0 Just A Small Question

Could be. If it's not picking up what you expect, then it means something is misconfigured.

11 months ago
0 Hi, Is There A Way To Share Reports With Users That Are Not Part Of The Workspace?

Hi @<1643060831954407424:profile|ScrawnyMole16> , you can export your report to PDF and share it with your colleagues 🙂

one year ago
0 Given I Want To Run A Task In A Pipeline Using A Base Task Id. One Of My Steps Just Finds The Latest Results To Use. I Want The Task To Output The Id Of The Results And The Next Step To Use It. How Would I Go About Doing This? Is There A Way To Pass Just

Hi SmugTurtle78 , I think you can set it up as follows (or something similar):
pipe.add_step(
    name="stage_train",
    parents=["stage_process"],
    base_task_project="examples",
    base_task_name="Pipeline step 3 train model",
    parameter_override={"General/dataset_task_id": "${stage_process.id}"},
)

Note that in parameter_override I take a task ID from a previous step and insert it into the configuration/parameters of the current step. Is that what you're looking for?

3 years ago
0 Hi, Is There A Way To Enqueue The Dataset

Hi EnormousCormorant39 ,

"is there a way to enqueue the dataset add command on a worker"

Can you please elaborate a bit on this? Do you want to create some sort of trigger action to add files to a dataset?

3 years ago
0 Hi! Could Someone Clarify If There Is A Way To Get The Credentials Without Going To The Ui -> "Workspace", Clicking "Create New Credentials" And Using The Value Provided? Like An Api Call?

Hi @<1811208768843681792:profile|BraveGrasshopper38> , you can do anything programmatically that you can do via the webUI. I suggest opening dev tools (F12) and checking what is being sent in the network tab when you create credentials.
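As a hedged sketch of what that might look like once you've identified the endpoint (auth.create_credentials is what the webUI appears to call - verify this in your own network tab; the server URL below is a placeholder):

```python
def build_create_credentials_request(server_url):
    """Return (url, headers) for an authenticated POST that creates credentials.

    The request must be sent with a valid session/auth token, exactly as the
    webUI does; copy the auth headers you see in the browser's network tab.
    """
    url = f"{server_url.rstrip('/')}/auth.create_credentials"
    headers = {"Content-Type": "application/json"}
    return url, headers

url, headers = build_create_credentials_request("https://api.clear.ml")
print(url)
```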

3 months ago
0 Hi, I'M Trying To Use Pipelines In The Free Version And Encountered This: Is It Because I'M Using The Free Version Or Code Based?

Hi IrritableJellyfish76 , it looks like you need to create the services queue in the system. You can do it directly through the UI by going to Workers & Queues -> Queues -> New Queue
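If you prefer to create the queue programmatically instead of through the UI, something along these lines should work with the APIClient (a sketch only; assumes clearml is installed and configured against your server):

```python
def ensure_services_queue():
    """Create the 'services' queue if it does not already exist."""
    # APIClient wraps the same REST endpoints the webUI uses.
    from clearml.backend_api.session.client import APIClient  # assumes `clearml` is installed

    client = APIClient()
    if not client.queues.get_all(name="services"):
        client.queues.create(name="services")
```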

3 years ago
0 Hi

Hi @<1523701949617147904:profile|PricklyRaven28> , note that steps in a pipeline are special tasks with hidden system tag, I think you might want to enable that in your search
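For example, when fetching tasks through the SDK, a filter along these lines should include hidden tasks such as pipeline steps (a sketch; assumes the clearml package is installed and the filter key behaves as described):

```python
def get_pipeline_step_tasks(project_name):
    """Fetch tasks in a project, including hidden/system tasks like pipeline steps."""
    from clearml import Task  # assumes `clearml` is installed

    # search_hidden tells the backend query to also return hidden tasks
    return Task.get_tasks(
        project_name=project_name,
        task_filter={"search_hidden": True},
    )
```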

one year ago
0 Hi Everyone, I'M Having An Issue With Clearml Datasets And Would Like To Know If This Is Possible. I Have A Task That Is Executed Repeatedly. This Task May Require Data To Be Loaded And Updated From A Dataset. My Question Is: Is There A Way To Append To A

In that case you are correct. If you want to have a 'central' source of data then Datasets would be the suggested approach. Regarding your question on adding data, you would always have to create a new child version and append new data to the child.

Also, squashing the dataset might be relevant to you.
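A sketch of what squashing could look like via the SDK (names are illustrative; assumes clearml is installed):

```python
def squash_dataset_versions(dataset_ids, new_name):
    """Merge several dataset versions into a single new dataset."""
    from clearml import Dataset  # assumes `clearml` is installed

    # Dataset.squash flattens the given versions into one standalone dataset
    return Dataset.squash(
        dataset_name=new_name,
        dataset_ids=dataset_ids,  # e.g. IDs of a parent and its child versions
    )
```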

4 months ago