It happened to me when trying many installations; can you log in using the http://app.clearml.home.ai/login URL directly?
ok so you are on chart major version 4 while we are now on 6. Let me check, one minute pls
but you are starting from major version 4, which is really old and where naming was potentially inconsistent at times, so my suggestion is to back up EVERY PV before proceeding
btw as a general rule, it's always safer to take a data backup before a chart upgrade, regardless of version level
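A rough sketch of the pre-upgrade check; the `clearml` namespace and the use of Velero are assumptions, adjust to your install:

```shell
# List the PVCs used by the release and the PVs backing them,
# so you know exactly what to back up before the upgrade.
kubectl get pvc -n clearml
kubectl get pv

# One way to back up the whole namespace is Velero (if installed):
velero backup create clearml-pre-upgrade --include-namespaces clearml
```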
Ok, what I need is the exact version of the chart (not the app) for clearml and clearml-agent
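You can read the chart version straight from Helm; the CHART column shows chart name and version, while APP VERSION is the application version:

```shell
# All releases across namespaces:
helm list -A
# Or scoped to the namespace where clearml was installed (assumed "clearml"):
helm list -n clearml
```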
internal migrations are directly done by apiserver on startup
ok got it, are you able to access the system bypassing nginx with http://<Server Address>:8080 ?
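If port 8080 is not reachable from outside the cluster, a port-forward to the webserver service is a quick way to test; the service name `clearml-webserver` and its port are assumptions, check `kubectl get svc -n clearml` for the real values:

```shell
# Forward local port 8080 to the webserver service inside the cluster
kubectl port-forward -n clearml svc/clearml-webserver 8080:80
# then browse to http://localhost:8080
```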
Hi ApprehensiveSeahorse83, today we released a clearml-agent chart that just installs the glue agent. My suggestion is to disable the k8s glue and any other agent in the clearml chart and install more than one clearml-agent chart in different namespaces. This way you will be able to have a k8s glue for every queue (cpu and gpu).
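A sketch of the one-release-per-queue setup; the repo URL is the official clearml-helm-charts repo, but the value key `agentk8sglue.queue` is an assumption, so check the chart's values.yaml for the exact name:

```shell
helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
helm repo update

# One clearml-agent release per queue, each in its own namespace
helm install clearml-agent-cpu allegroai/clearml-agent \
  -n clearml-agent-cpu --create-namespace \
  --set agentk8sglue.queue=cpu

helm install clearml-agent-gpu allegroai/clearml-agent \
  -n clearml-agent-gpu --create-namespace \
  --set agentk8sglue.queue=gpu
```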
if they are in kubernetes you can simply use k8s glue
In this case I suggest giving a try to the k8s glue that is there by default in the latest chart version
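Something like the following should turn it on; the `agentk8sglue.enabled` key is a hypothetical name, so check `helm show values allegroai/clearml` for the exact flag in your chart version:

```shell
# Enable the built-in k8s glue on an existing release (key name assumed)
helm upgrade clearml allegroai/clearml -n clearml \
  --set agentk8sglue.enabled=true
```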
but I will try to find something good for you
Exactly, these are system accounts
regardless of this, I probably need to add some more detailed explanations on credentials configuration
I think yes, at least this is what I saw in the docs
SuccessfulKoala55 after looking at the issue I'm a bit confused; as far as I can see there is no way to pass any parameter to clearml-agent in daemon mode to push logs to stdout. Can you confirm it? (If yes, I need to find some workaround)
ty AgitatedDove14, your fixes work like a charm. As a reward I opened another one https://github.com/allegroai/clearml/issues/423 sorry for that
Will cook something asap
can you also show the output of kubectl get po for the namespace where you installed clearml?
this is basic k8s management that is not strictly related to this chart. My suggestion is to have a default StorageClass that will be able to provide the right PV/PVC for any deployment you are going to have on the cluster. I suggest starting from here: https://kubernetes.io/docs/concepts/storage/storage-classes/
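For reference, marking a class as the cluster default is just an annotation; the class name `standard` below is only an example, use one that exists in your cluster:

```shell
# List StorageClasses; the default one is shown with "(default)"
kubectl get storageclass

# Mark an existing class as the cluster default
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```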