DeterminedOwl36
Moderator
6 Questions, 11 Answers
  Active since 10 January 2023
  Last activity 8 months ago

Reputation: 0
Badges: 1 (11 × Eureka!)
0 Votes · 11 Answers · 592 Views · one year ago
0 Votes · 5 Answers · 618 Views · one year ago
0 Votes · 3 Answers · 473 Views · 8 months ago
0 Votes · 0 Answers · 606 Views · one year ago
0 Votes · 3 Answers · 607 Views · one year ago
0 Votes · 1 Answer · 610 Views
Hello, I got this error when calling .get_local_copy() on an artifact of a task. The artifact contains nested folders with image files inside. What are po...
one year ago
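A minimal sketch of the retrieval flow described above, assuming clearml is installed and configured; the task ID and artifact name are placeholders. get_local_copy() hands back a local folder, and the nested image files can then be collected with a plain directory walk:

```python
import os


def list_artifact_files(local_dir):
    """Recursively collect file paths under an extracted artifact folder."""
    paths = []
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            paths.append(os.path.join(root, name))
    return sorted(paths)


def fetch_artifact_files(task_id, artifact_name):
    """Download an artifact and return the paths of the files inside it.

    Requires a configured clearml setup; imported lazily so the walker
    above works standalone.
    """
    from clearml import Task

    task = Task.get_task(task_id=task_id)
    local_dir = task.artifacts[artifact_name].get_local_copy()
    if local_dir is None:
        # A None here usually points at missing storage credentials
        # rather than a missing artifact.
        raise RuntimeError("get_local_copy() returned None")
    return list_artifact_files(local_dir)
```

If get_local_copy() fails on nested content, checking that the storage credentials used at upload time are also available at download time is a common first step.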
0 Hi everyone, I have one quick question regarding artifact uploading. I see the output models from training are stored under the "Output Models" section in the Artifacts tab, and if I upload other artifacts using the .upload_artifact() API, they are stored

SuccessfulKoala55 I am trying to find a way to work around it for the time being. I have 2 requirements: 1) I want to log a custom metric that is computed only at the end of every epoch (unlike other tf metrics, which are updated per mini-batch). If I follow the tf doc here, will ClearML log it for me and show it on the "Scalars" tab? https://www.tensorflow.org/tensorboard/scalars_and_keras#logging_custom_scalars 2) It's the same as 1), but it's an image instead. https://www.tensorflow.org/te...

one year ago
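For the first requirement, one workaround (a sketch, not necessarily what ClearML's TensorBoard auto-logging does) is to compute the metric yourself at epoch end and report it explicitly through ClearML's Logger, which places it on the Scalars tab. The logger is injected here so the helper stays framework-agnostic; in a real run you would pass clearml.Logger.current_logger(). The metric name and series are made-up placeholders:

```python
def f1_from_counts(tp, fp, fn):
    """Plain F1 computed once per epoch, independent of any framework."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def report_epoch_metric(logger, epoch, tp, fp, fn):
    """Report an epoch-level scalar; `logger` is e.g. Logger.current_logger()."""
    value = f1_from_counts(tp, fp, fn)
    # report_scalar(title, series, value, iteration) is ClearML's explicit
    # reporting call; using the epoch as the iteration yields one point
    # per epoch on the Scalars tab instead of one per mini-batch.
    logger.report_scalar(title="custom/f1", series="validation",
                         value=value, iteration=epoch)
    return value
```

For the second requirement, ClearML's Logger also exposes report_image for explicit image reporting with a similar title/series/iteration shape, though the exact parameters are worth checking against the SDK version in use.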
0 Hi everyone, I have one quick question regarding artifact uploading. I see the output models from training are stored under the "Output Models" section in the Artifacts tab, and if I upload other artifacts using the .upload_artifact() API, they are stored

So, the 1st image (Jupyter) shows the code I used to create a task and upload the artifact. The task was created and I can access it normally in the web GUI (as in the 2nd image). But when I click the "Artifacts" tab, the 404 error page appears (3rd image).

one year ago
0 Hi everyone, I have one quick question regarding artifact uploading. I see the output models from training are stored under the "Output Models" section in the Artifacts tab, and if I upload other artifacts using the .upload_artifact() API, they are stored

SuccessfulKoala55 I see. I hope it will be added as a feature in a future version. For me, it's quite important for organization purposes, especially if the task outputs many artifacts.

one year ago
0 Hi, I have run a task and called .upload_artifact() to upload the pandas DataFrame to ClearML, but it seemed like even after .upload_artifact() had done its job, the task's terminal didn't terminate. The uploaded artifacts are shown on the GUI, but when I

SuccessfulKoala55 I don't think so, because the files are just small DataFrames. I tried saving those output files on my local machine, then created a new task and uploaded them with new code in an .ipynb, and it took less than a minute and everything worked fine. (The frozen script is a .py.)

one year ago
0 Hi, I have run a task and called .upload_artifact() to upload the pandas DataFrame to ClearML, but it seemed like even after .upload_artifact() had done its job, the task's terminal didn't terminate. The uploaded artifacts are shown on the GUI, but when I

It printed something like "ClearML couldn't detect iterations...", but the process never ends; it just freezes there. I can't run any further commands in this terminal because it's still running the task, short of pressing Ctrl+C.

one year ago
0 When I run multiple tasks simultaneously in the same projects locally, if one task finishes before the others, the other tasks are going to be automatically terminated. Is this the expected behavior? If not, what are the possible causes, and how do I fix them? Than

I have ClearML set up locally. The way to run the task is straightforward: I create the task with Task.init() at the very top of the file, do things (inference, save outputs, etc.), upload outputs with task.upload_artifact(), and then end the script with task.mark_completed().

8 months ago
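The workflow described above can be sketched as follows, assuming a configured ClearML setup; the project name, task name, and the inference stand-in are placeholders. In recent clearml versions, upload_artifact accepts wait_on_upload=True to block until the transfer finishes, and an explicit task.close() releases background threads so the interpreter can exit, which is the usual remedy when a script hangs after uploading:

```python
def build_outputs(predictions):
    """Collect inference results into a plain mapping, ready for upload."""
    return {"n_rows": len(predictions), "rows": list(predictions)}


def main():
    # clearml imported lazily; requires a configured server/credentials.
    from clearml import Task

    task = Task.init(project_name="demo", task_name="inference-run")
    outputs = build_outputs([{"id": 1, "label": "cake"}])  # stand-in for real inference
    # wait_on_upload=True blocks until the upload completes, so the script
    # does not reach the end with the transfer still in flight.
    task.upload_artifact("outputs", artifact_object=outputs, wait_on_upload=True)
    task.mark_completed()
    task.close()  # release background threads so the process can exit


if __name__ == "__main__":
    main()
```

On the multi-task question: Task.init() reuses or resets task state per process, so running several scripts this way from the same working directory is worth reviewing against the reuse-related Task.init parameters in the SDK documentation.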