HelpfulDeer76
Moderator
2 Questions, 6 Answers
  Active since 10 January 2023
  Last activity one year ago

0 Votes · 3 Answers · 1K Views
Hi, is there a way to compare the scripts of different experiment runs? This would make it easy to track changes and do version control.
3 years ago
0 Votes · 6 Answers · 1K Views
3 years ago
Hi, is there a way to compare the scripts of different experiment runs? This will make it easy to track changes and version control

SuccessfulKoala55 Thanks for the clarification. What if I'm not using a git repo, and the script is only stored on a remote server? Is there a way to upload a "snapshot" of it?
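When a script runs under the ClearML SDK, Task.init records the executing script itself; for a file that only lives on a remote server, one illustrative workaround is to package it yourself and upload the archive as an artifact. A minimal stdlib sketch of building such a snapshot (the helper name `snapshot_script` is hypothetical, not part of the ClearML API):

```python
import io
import zipfile
from pathlib import Path

def snapshot_script(script_path: str) -> bytes:
    """Zip a single script into an in-memory archive.

    The resulting bytes can then be uploaded however you track
    experiments, e.g. as a task artifact; this sketch only covers
    the packaging step, not the upload.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store only the file name, not the server-specific path.
        zf.write(script_path, arcname=Path(script_path).name)
    return buf.getvalue()
```

Attaching the archive to a run keeps the exact code version alongside its metrics even without a git repo.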

3 years ago
Hi, can I do a quick check if all the documentation I find on Trains is still valid for ClearML? Specifically, I am looking at integration of ClearML and Kubernetes.

Hi guys,
Thanks for the previous discussion on ML-Ops with ClearML agent.
I'm still not sure how to monitor a training job on k8s (one that wasn't scheduled by ClearML). My ClearML server is deployed and functional for tracking non-k8s jobs, but for a k8s job I've had no success so far.
Here is what I tried so far:
- Adding my clearml.conf to the docker image
- Tried to run clearml-init --file ~/clearml.conf
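The first step above can be sketched as a Dockerfile fragment (the base image tag and paths are assumptions, not taken from the thread):

```dockerfile
# Hypothetical base image; the key step is baking a pre-filled
# clearml.conf into the home directory of the user running the job.
FROM python:3.9-slim
RUN pip install clearml
# clearml.conf must already contain the self-hosted server's
# api_server / web_server / files_server URLs and credentials.
COPY clearml.conf /root/clearml.conf
```

With the file already in place, running clearml-init inside the container should not be necessary; clearml-init is an interactive setup wizard, so invoking it during an image build is unlikely to work as intended.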

3 years ago
Hi guys, thanks for the previous discussion on ML-Ops with ClearML agent. I'm still not sure how to monitor a training job on k8s (that wasn't scheduled by ClearML). My ClearML server is deployed and functional for tracking non-k8s jobs. But for a k8s job

AgitatedDove14, by unsuccessful I mean that the task was being monitored on the demo ClearML server created by Allegro, rather than the one created by me and hosted on our servers. Which means (I think) that the config file is not taken into account by ClearML.
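One way to debug a silently ignored config is to reproduce the SDK's lookup yourself from inside the container. A stdlib-only sketch, assuming the documented behaviour that a CLEARML_CONFIG_FILE environment variable overrides the default ~/clearml.conf location (the helper name is hypothetical):

```python
import os
from pathlib import Path

def resolve_clearml_config(env=None):
    """Guess which config file the ClearML SDK will read.

    Assumption: CLEARML_CONFIG_FILE, when set, overrides the
    default ~/clearml.conf. If the resolved file does not exist,
    the SDK falls back to built-in defaults, which is one way a
    task can silently end up reported to the demo server.
    """
    env = os.environ if env is None else env
    override = env.get("CLEARML_CONFIG_FILE")
    path = Path(override) if override else Path.home() / "clearml.conf"
    return path, path.exists()
```

Printing the resolved path and its existence from inside the running pod quickly shows whether the baked-in file landed where the SDK actually looks.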

3 years ago

SubstantialElk6 I haven't deployed the clearml-agent yet. I'm trying to run a training script on a k8s pod and have it monitored on my self-hosted ClearML server. I added the clearml.conf via the Dockerfile of the pod image, but that didn't seem to work out.
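An alternative to baking the file into the image is mounting it into the pod and pointing the SDK at it explicitly. A hypothetical pod-spec fragment (the ConfigMap name, image, and mount path are assumptions; CLEARML_CONFIG_FILE is a documented SDK override, but verify it against the ClearML docs for your version):

```yaml
# Hypothetical snippet of a training pod spec.
containers:
  - name: trainer
    image: my-registry/trainer:latest   # assumed image name
    env:
      - name: CLEARML_CONFIG_FILE       # tell the SDK exactly which file to read
        value: /etc/clearml/clearml.conf
    volumeMounts:
      - name: clearml-conf
        mountPath: /etc/clearml
volumes:
  - name: clearml-conf
    configMap:
      name: clearml-conf                # assumed ConfigMap holding clearml.conf
```

Mounting from a ConfigMap (or Secret, for the credentials) also means the server address can be changed without rebuilding the image.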

3 years ago