you can work around the issue by mounting the kubeconfig, but I guess the issue still needs to be investigated somehow
because kubectl inside the pod uses the in-pod (in-cluster config) method
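If you want to test the kubeconfig workaround, here is a minimal sketch of a pod that mounts a kubeconfig from a Secret instead of relying on the in-pod config; the Secret name admin-kubeconfig, its config key and the kubectl image are illustrative assumptions:

```
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-debug
spec:
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest   # illustrative image, any image shipping kubectl works
      command: ["sleep", "infinity"]
      env:
        - name: KUBECONFIG            # point kubectl at the mounted file instead of the in-pod config
          value: /root/.kube/config
      volumeMounts:
        - name: kubeconfig
          mountPath: /root/.kube
          readOnly: true
  volumes:
    - name: kubeconfig
      secret:
        secretName: admin-kubeconfig  # assumed Secret holding your kubeconfig under the "config" key
        items:
          - key: config
            path: config
```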
about minor releases: they are not breaking, so the upgrade path should be linear
especially if it's evicted, it should be due to increasing resource usage
This is K8s infra management specific; usually I use Velero for backups
Just a quick suggestion since I have some more insight on the situation. Maybe you can look at Velero; it should be able to migrate the data. If not, you can simply create a fresh install, scale everything to zero, then create a debug pod mounting the old and new PVCs and copy the data between the two (see the sketch below). It's more complex to say than to do.
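Here is a minimal sketch of that debug pod, assuming the old and new claims are named clearml-data-old and clearml-data-new (adjust to your real PVC names) and both can be bound from the same namespace:

```
apiVersion: v1
kind: Pod
metadata:
  name: pvc-copy-debug
spec:
  restartPolicy: Never
  containers:
    - name: copier
      image: busybox:1.36
      # copy everything from the old volume to the new one, preserving attributes
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old-data
          mountPath: /old
        - name: new-data
          mountPath: /new
  volumes:
    - name: old-data
      persistentVolumeClaim:
        claimName: clearml-data-old   # assumed name of the old claim
    - name: new-data
      persistentVolumeClaim:
        claimName: clearml-data-new   # assumed name of the new claim
```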
but you are starting from a major version that is really old, and where naming was potentially inconsistent at some point, so my suggestion is to back up EVERY PV before proceeding
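For the backup itself, a rough Velero sketch could look like this, assuming Velero is installed in the velero namespace with a working backup/snapshot location and that everything lives in a clearml namespace (names are illustrative):

```
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: clearml-pre-upgrade
  namespace: velero
spec:
  includedNamespaces:
    - clearml            # assumed namespace holding the PVs to protect
  snapshotVolumes: true  # also back up the persistent volumes, not just the objects
```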
if you already have data over there you may import it
our data engineers write code directly in PyCharm and test it on the fly with breakpoints. When it's good, we simply commit it in git and set a "prod ready" tag
ok, so it's time to create a ConfigMap with the entire file
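Something along these lines, just as a sketch; the ConfigMap name, namespace and the services.conf content are placeholders for your actual file:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: clearml-extra-config   # illustrative name
  namespace: clearml           # illustrative namespace
data:
  services.conf: |
    auth {
      fixed_users {
        enabled: true
      }
    }
```

Equivalently, you can generate it from the file on disk with kubectl create configmap clearml-extra-config --from-file=services.conf -n clearml.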
if the mounts are already there on every node, you can also mount them directly on the nodes in a specific folder, then use the Rancher local-path provisioner
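Once the provisioner is installed, consuming it is just a matter of pointing a PVC at its StorageClass; a minimal sketch, assuming the default local-path StorageClass name and an illustrative claim name and size:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-local-data      # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path  # default class created by the Rancher local-path-provisioner
  resources:
    requests:
      storage: 50Gi             # illustrative size
```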
Would an implementation of this kind be interesting for you, or do you suggest forking? I mean, I don't want to impact your review time
in Enterprise we support multi-queueing, but it's a different story
In fact, it's the same approach we are applying to Helm charts for K8s
Or do you want to dynamically mount an NFS endpoint directly? (I understood you need this one)
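If it's the direct NFS mount you are after, a pod can reference the export straight away; a minimal sketch where the server address, export path and image are placeholders:

```
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-mount
spec:
  containers:
    - name: app
      image: busybox:1.36        # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: nfs-data
          mountPath: /data
  volumes:
    - name: nfs-data
      nfs:
        server: 10.0.0.10        # placeholder for your NFS server
        path: /exports/clearml   # placeholder for the exported path
```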
yes, it should be; I will test this specific behaviour to be sure
moreover, the URL exposed by nginx should be under HTTPS
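For example, terminating TLS at the nginx ingress would look roughly like this; the TLS Secret clearml-tls, the backend service name/port and the ingress class are assumptions to adapt to your setup:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clearml-webserver
spec:
  ingressClassName: nginx        # assumes an ingress-nginx controller
  tls:
    - hosts:
        - app.clearml.home.ai
      secretName: clearml-tls    # assumed Secret holding the certificate and key
  rules:
    - host: app.clearml.home.ai
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: clearml-webserver   # assumed backend service
                port:
                  number: 80
```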
It happened to me when trying many installations; can you log in using the http://app.clearml.home.ai/login URL directly?
with that said, the problem here is the ambassador svc, I think; still trying some tricks
can you post the output of `kubectl get po -A -n clearml` pls?
trying a new one from scratch
BoredBluewhale23 I can reproduce the issue, working on it
first, I noticed a mistake I made when suggesting the config, this:
these are the steps for a major upgrade to the latest chart version
what kind of storageclass are you using on this one?
```
apiserver:
  additionalConfigs:
    services.conf: |
      auth {
        # Fixed users login credentials
        # No other user will be able to login
        fixed_users {
          enabled: true
          pass_hashed: false
          users: [
            {
              username: "jane"
              password: "12345678"
              name: "Jane Doe"
            },
            {
              username: "john"
              password: "12345678"...
```
Hi,
that's usually related to the IPv6/IPv4 stack configuration in your K8s cluster. Are you using just one specific stack?
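If the cluster is dual-stack, one thing worth trying is pinning the affected Service to a single family; a minimal sketch (service name, selector and ports are illustrative) which requires Kubernetes 1.20+ dual-stack support:

```
apiVersion: v1
kind: Service
metadata:
  name: clearml-webserver        # illustrative name
spec:
  ipFamilyPolicy: SingleStack    # force a single IP family
  ipFamilies:
    - IPv4
  selector:
    app: clearml-webserver       # illustrative selector
  ports:
    - port: 80
      targetPort: 8080           # illustrative ports
```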