What Chart version are you trying to upgrade from?
if you instruct the apiserver to use S3, the fileserver will basically not be used anymore (I need SuccessfulKoala55 to confirm to be 100% sure, I'm more of an infra guy :D )
Of course. We'd like to use S3 backends anyway; I couldn't spot exactly where to configure this in the chart (so it's defined in the individual agent's configuration?)
there are workarounds tbh, but they are tricks that require a lot of k8s expertise and they are risky
Okay, I'll test it out by trying to downgrade to 4.0.0 and then upgrade to 4.1.2
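(For reference, that test with plain Helm would look roughly like this, assuming the allegroai repo alias discussed below; the Ansible module in the log below wraps the same command:

    helm upgrade -i clearml allegroai/clearml --version 4.0.0
    helm upgrade clearml allegroai/clearml --version 4.1.2
)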
Just to make sure, the chart_ref is allegroai/clearml, right? (For some reason we had clearml/clearml and it seems like it previously worked?)
about minor releases: they are not breaking, so the upgrade should be linear
Could you provide a more complete set of instructions, for the less inclined?
How would I back up the data in the future, etc.?
Full log:

command: /usr/sbin/helm --version=4.1.2 upgrade -i --reset-values --wait -f=/tmp/tmp77d9ecye.yml clearml clearml/clearml
msg: |-
  Failure when executing Helm command. Exited 1.
  stdout:
  stderr: W0728 09:23:47.076465 2345 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
  W0728 09:23:47.126364 2345 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
  W0728 09:23:47.188124 2345 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
  Error: UPGRADE FAILED: cannot patch "clearml-fileserver-data" with kind PersistentVolumeClaim: PersistentVolumeClaim "clearml-fileserver-data" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
    core.PersistentVolumeClaimSpec{
        ... // 2 identical fields
        Resources:        {Requests: {s"storage": {i: {...}, s: "50Gi", Format: "BinarySI"}}},
        VolumeName:       "",
      - StorageClassName: nil,
      + StorageClassName: &"standard",
        VolumeMode:       &"Filesystem",
        DataSource:       nil,
        DataSourceRef:    nil,
    }
stderr: |-
  (same three PodDisruptionBudget deprecation warnings and the same UPGRADE FAILED error as in msg above)
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
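(The error itself says the chart now wants StorageClassName "standard" on a claim that was created with no storage class, and a bound PVC's spec is immutable. One way to check what the existing claim has, assuming kubectl access to the cluster:

    kubectl get pvc clearml-fileserver-data -o jsonpath='{.spec.storageClassName}'
)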
Removing the PVC is just setting the state to absent AFAIK
you can create a specific config like the one in https://clear.ml/docs/latest/docs/integrations/storage/
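a minimal sketch of what that config could look like, assuming a hypothetical bucket name and that your agents read ~/clearml.conf (see the linked docs for the full schema):

    cat >> ~/clearml.conf <<'EOF'
    sdk {
        development {
            # send artifacts and models to S3 instead of the fileserver
            default_output_uri: "s3://my-clearml-bucket/projects"
        }
        aws {
            s3 {
                key: "MY_ACCESS_KEY"
                secret: "MY_SECRET_KEY"
                region: "us-east-1"
            }
        }
    }
    EOF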
In any case, if we were upgrading from e.g. 4.0.0 to 4.1.2, this shouldn't have happened?
Hm, I'm not sure I follow 🤔 How does the API server config relate to the file server?
Hi UnevenDolphin73 , maybe JuicyFox94 or SuccessfulKoala55 can assist
ok, for a major version upgrade my suggestion is to back up the data somewhere and do a clean install after removing the PVC/PV
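a rough sketch of that clean-reinstall path, using the release and PVC names from the log above (destructive, so take the backup first; your release name and namespace may differ):

    helm uninstall clearml
    # the PV is only removed automatically if its reclaim policy is Delete
    kubectl delete pvc clearml-fileserver-data
    helm install clearml allegroai/clearml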
go to https://artifacthub.io/packages/helm/allegroai/clearml and click on the install button so you can see details about the repo
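(If the install details there match, adding the repo looks roughly like this; the repo URL is an assumption, so verify it against the install button:

    helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
    helm repo update
    helm search repo allegroai/clearml
)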
moreover I usually prefer to have S3 backends if on AWS, or MinIO otherwise
For now this is okay - no data lost, really - but I'd like to make sure we're not missing any steps in the next upgrade
but again, these are infrastructural decisions that need to be made before talking software
This is specific K8s infra management; I usually use Velero for backups
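for example, a one-off backup of the whole namespace with the Velero CLI (assuming the release lives in a namespace called clearml, and that Velero is already installed in the cluster):

    velero backup create clearml-pre-upgrade --include-namespaces clearml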