GreasyPenguin66
Moderator
3 Questions, 17 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 1
17 × Eureka!
0 Votes, 8 Answers, 999 Views
3 years ago
0 Votes, 14 Answers, 808 Views
3 years ago
0 Votes, 15 Answers, 1K Views
3 years ago
Hi! I deployed ClearML Server along with JupyterHub on Azure K8s (AKS). The way it works is that every user is assigned a new pod that is spawned with a Docker image of choice (one of them with the ClearML SDK installed). I managed to configure most of the...

I'm sorry, I was wrong. Neither of the commands gives a positive response. I actually get a 404 page ... Sorry, I assumed that since I got a lot of data it meant it was OK. But now I've read into it.

3 years ago
Hi! I deployed ClearML Server along with JupyterHub on Azure K8s (AKS). The way it works is that every user is assigned a new pod that is spawned with a Docker image of choice (one of them with the ClearML SDK installed). I managed to configure most of the...

Hi AgitatedDove14. I'm just writing to explain what the problem was. Basically our setup - JupyterHub on k8s with KubeSpawner spawning a pod for each single-user notebook - uses Docker images based on jupyter/docker-stacks.

The problem was that the token for the JupyterHub API was not propagated to the spawned pod, so whenever ClearML tried to access the jupyter/user/api/sessions endpoint it would be redirected to the JupyterHub API for authorization and then fail due to the lack ...
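
A rough way to see this behaviour from inside a spawned pod is sketched below; the hub address, user name, and the JUPYTERHUB_API_TOKEN variable name are assumptions for illustration, not details taken from the thread:

```python
# Minimal sketch: query the single-user sessions API with and without credentials.
import os
import requests

HUB = "http://jupyterhub.example.svc:8000"   # hypothetical in-cluster hub address
USER = "some-user"                           # hypothetical single-user name
token = os.environ.get("JUPYTERHUB_API_TOKEN", "")

url = f"{HUB}/user/{USER}/api/sessions"

# Without credentials the request is redirected to the hub for authorization,
# which is what ClearML ran into when the token was missing in the pod.
r_anon = requests.get(url, allow_redirects=False)
print(r_anon.status_code, r_anon.headers.get("Location"))

# With the token available in the pod the sessions API answers directly.
r_auth = requests.get(url, headers={"Authorization": f"token {token}"})
print(r_auth.status_code, r_auth.text)
```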

3 years ago
Hey! I stumbled upon some errors with my workers monitoring. I checked logs in my k8s pods for apiserver and Elasticsearch and it seems the problem is there. These are the logs: apiserver logs [2021-04-23 06:19:50,209] [9] [ERROR] [trains.service_repo] Re...

Yes, they are. With Mongo I had a problem related to azurefiles: Mongo refused to mount azurefiles under /data/db because it could not initialize there. The solution was to mount the azurefiles under a different path and then pass Mongo a command pointing at that data path so that it could initialize properly. However, when I deleted the Kubernetes cluster, created a new one, and redeployed ClearML, the issues no longer came from Mongo but from the apiserver, which was failing with...

3 years ago
Hi! I deployed ClearML Server along with JupyterHub on Azure K8s (AKS). The way it works is that every user is assigned a new pod that is spawned with a Docker image of choice (one of them with the ClearML SDK installed). I managed to configure most of the...

It works as well. As for rebuilding the image, I was neither root nor a sudoer, so I had to either rebuild the Docker image and set it to root or install the package while rebuilding 😉

3 years ago
Hi! I have a question concerning dynamic environment variables. I managed to create some env variables from the apiserver.conf and now I would like to set some env variables for my main clearml.conf file. However I am not sure what is the proper way. I t...

Hi AgitatedDove14. I am using JupyterHub on k8s and I spawn a pod for every single user. I have a custom Dockerfile with ClearML installed; however, I don't want to copy the clearml.conf file into the Dockerfile and would instead prefer to pass the necessary configurations as ENV variables. Is that possible?
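
For reference, ClearML's SDK does recognize environment-variable overrides for the server connection settings (CLEARML_API_HOST, CLEARML_WEB_HOST, CLEARML_FILES_HOST, CLEARML_API_ACCESS_KEY, CLEARML_API_SECRET_KEY). A minimal sketch of injecting them through KubeSpawner could look like this; the addresses and keys are placeholders, not values from the thread:

```python
# Fragment of jupyterhub_config.py (sketch only - hosts and keys are placeholders).
# These CLEARML_* variables override the corresponding clearml.conf entries,
# so no clearml.conf has to be baked into the single-user image.
c.KubeSpawner.environment = {
    "CLEARML_API_HOST": "http://clearml-apiserver.clearml.svc.cluster.local:8008",
    "CLEARML_WEB_HOST": "http://clearml-webserver.clearml.svc.cluster.local:8080",
    "CLEARML_FILES_HOST": "http://clearml-fileserver.clearml.svc.cluster.local:8081",
    "CLEARML_API_ACCESS_KEY": "<access-key>",
    "CLEARML_API_SECRET_KEY": "<secret-key>",
}
```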

3 years ago
Hi! I deployed ClearML Server along with JupyterHub on Azure K8s (AKS). The way it works is that every user is assigned a new pod that is spawned with a Docker image of choice (one of them with the ClearML SDK installed). I managed to configure most of the...

Those were my thoughts too. But the jupyter/base-notebook image from docker-stacks, which they recommend using and from which my image inherits, did not include the token in the jupyter lab run command. I don't know whether it was a bug or an intentional choice; either way, I was either going to change the base image or add a token in a postStart hook. I decided to go with the second option 😉
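
For anyone hitting the same thing, a postStart hook on the spawned pod can be declared through KubeSpawner's lifecycle_hooks option; the command below is only a placeholder, since the thread does not say exactly what the hook ran:

```python
# Fragment of jupyterhub_config.py - the hook body is a placeholder, not the
# actual fix used in the thread.
c.KubeSpawner.lifecycle_hooks = {
    "postStart": {
        "exec": {
            "command": [
                "/bin/sh", "-c",
                "echo 'make the notebook token available to ClearML here'",
            ]
        }
    }
}
```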

3 years ago
Hey! I stumbled upon some errors with my workers monitoring. I checked logs in my k8s pods for apiserver and Elasticsearch and it seems the problem is there. These are the logs: apiserver logs [2021-04-23 06:19:50,209] [9] [ERROR] [trains.service_repo] Re...

Hey SuccessfulKoala55, thank you for your answers, I really appreciate it. As for Elasticsearch, it was indeed the index error that was created before. The reason is that I was trying to set up a backup for Elasticsearch and MongoDB using azurefiles. So the scenario is: I'm using persistent volumes on k8s that use Azure file shares as storage. Then I can rebuild my cluster and use the exact same storage, so the data is persistent and I can restore my application from the last ...

3 years ago
Hey! I stumbled upon some errors with my workers monitoring. I checked logs in my k8s pods for apiserver and Elasticsearch and it seems the problem is there. These are the logs: apiserver logs [2021-04-23 06:19:50,209] [9] [ERROR] [trains.service_repo] Re...

Hi SuccessfulKoala55, thanks for the response. For Elastic I am using the image docker.elastic.co/elasticsearch/elasticsearch:7.6.2, the one that is in the manifests in the clearml repo. As for the ClearML images, I am using the latest tags everywhere. Let me restore the VM settings for Elastic and I'll let you know ;)

3 years ago
Hey! I stumbled upon some errors with my workers monitoring. I checked logs in my k8s pods for apiserver and Elasticsearch and it seems the problem is there. These are the logs: apiserver logs [2021-04-23 06:19:50,209] [9] [ERROR] [trains.service_repo] Re...

Unfortunately the problem was not resolved, neither by changing the VM memory settings back to 2 GB nor by going back from azurefiles persistent volumes to hostPath. It seems odd, as I did not have any of these issues before. I thought it might come from the changes in the PV and Elasticsearch settings, but going back to the original settings did not resolve the issue. Shouldn't I be using the latest tag for ClearML?

3 years ago
Hey! I stumbled upon some errors with my workers monitoring. I checked logs in my k8s pods for apiserver and Elasticsearch and it seems the problem is there. These are the logs: apiserver logs [2021-04-23 06:19:50,209] [9] [ERROR] [trains.service_repo] Re...

Also, another question came to mind. When changing any ClearML deployment, like the apiserver, Mongo, or Elasticsearch, do I have to redeploy everything from scratch? I previously had problems where changing something in the apiserver forced me to redeploy everything for ClearML to work properly, and I am wondering whether you have any guidelines for that.

3 years ago