AntsyElk37
Moderator
2 Questions, 38 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Reputation: 0
Badges: 1 (30 × Eureka!)
0 Votes · 6 Answers · 1K Views
Hello! I had trouble running clearml-agent on k8s. I fixed it by modifying the Helm chart to allow specifying runtimeClassName (which is needed when using nv...
6 months ago
0 Votes · 31 Answers · 127K Views
3 years ago
0 Hello! I'm running clearml-server on Kubernetes, and it seems my models are not really saved. I see that doing Task.init(output_uri=True) should send models to the fileserver. The models are visible in the UI but the download button is greyed out and when I d

Hello, I'm still not able to save ClearML models. They are generated and registered okay, but they are not on the fileserver. I now have Task.init(output_uri=True), and I also have --skip-task-init on the clearml command line so that it doesn't overwrite the Task.init call.
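For reference, a minimal sketch of the call being discussed, assuming a plain Python training script (the project and task names below are made up): output_uri=True asks ClearML to upload model artifacts to the default files server instead of only registering their local paths, which is what leaves the download button greyed out when no upload happens.

from clearml import Task

# Sketch only: project/task names are hypothetical.
# output_uri=True uploads model checkpoints to the default files server;
# an explicit destination such as "s3://bucket/path" also works.
task = Task.init(
    project_name="examples",
    task_name="fileserver-upload-check",
    output_uri=True,
)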

3 years ago
0 Hello! I'm running clearml-server on Kubernetes, and it seems my models are not really saved. I see that doing Task.init(output_uri=True) should send models to the fileserver. The models are visible in the UI but the download button is greyed out and when I d

This is the output of the training. It doesn't try to upload (note that this is my second try, so it already found a model with that name, but on my first try it didn't work either).

3 years ago
0 Hello! I had trouble running clearml-agent on k8s. I fixed it by modifying the Helm chart to allow specifying runtimeClassName (which is needed when using NVIDIA GPU Operator). I did this,

I'm still trying to understand why it was needed in our case. I have the NVIDIA GPU Operator installed with mostly the default values on our on-prem cluster. I found there is an option, CONTAINERD_SET_AS_DEFAULT, in the operator which, when enabled, makes that runtime the default for all pods. We didn't enable that option; maybe if we had enabled it, it would have worked.

6 months ago
0 Hi all, does anyone know why I can't see the worker plots nor the training plots in the ClearML UI (k8s deployment)? The error is:

I think I found it. We had to replace Elasticsearch after installing ClearML, and then I guess the ClearML migrations didn't rerun.

6 months ago
0 Hello! I had trouble running clearml-agent on k8s. I fixed it by modifying the Helm chart to allow specifying runtimeClassName (which is needed when using NVIDIA GPU Operator). I did this,

This seems to be confirmed by this documentation (None): "If you have not changed the default runtime on your GPU nodes, you must explicitly request the NVIDIA runtime by setting runtimeClassName: nvidia in the Pod spec:"
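To make the quoted requirement concrete, here is a rough sketch of where that field sits in a Pod spec, written with the kubernetes Python client purely for illustration (the actual fix above was a Helm chart change, and every name below is hypothetical):

from kubernetes import client

# runtimeClassName (runtime_class_name in the Python client) selects the
# "nvidia" RuntimeClass so the pod's containers run under the NVIDIA runtime.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-worker"),
    spec=client.V1PodSpec(
        runtime_class_name="nvidia",
        containers=[
            client.V1Container(
                name="agent",
                image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # example image
            )
        ],
    ),
)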

6 months ago
0 Hi all, does anyone know why I can't see the worker plots nor the training plots in the ClearML UI (k8s deployment)? The error is:

I did this as a workaround:

curl -XPUT "None" -H 'Content-Type: application/json' -d'
{
  "properties": {
    "metric":  { "type": "text", "fielddata": true },
    "variant": { "type": "text", "fielddata": true }
  }
}'

But this workaround should not be needed, right? Is this a compatibility issue, or was my Elasticsearch not properly initialized?
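One way to check is to inspect the current mapping before patching it. A rough sketch with the requests library; the host and index name are placeholders, since the real URL is not shown above:

import requests

# Placeholders: substitute the Elasticsearch host and the index that the
# PUT above patches.
ES_HOST = "http://localhost:9200"
INDEX = "events-plot-<company-id>"

# Fetch the current mapping and report whether fielddata is already enabled
# on the metric/variant fields, i.e. whether the workaround is still needed.
mapping = requests.get(f"{ES_HOST}/{INDEX}/_mapping").json()
props = mapping[INDEX]["mappings"]["properties"]
for field in ("metric", "variant"):
    enabled = props.get(field, {}).get("fielddata", False)
    print(f"{field}: fielddata={'enabled' if enabled else 'disabled'}")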

6 months ago