I set it as default; the results are the same.
So, for the missing volume, do you have a recommendation on how to solve it?
Currently it's kubernetes.io/no-provisioner
I want the storage to be on NFS eventually; the NFS share is mounted to a local path on all the nodes (/data/nfs-ssd).
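Note that kubernetes.io/no-provisioner does static provisioning only, so marking that class as default won't create volumes by itself; matching PVs have to exist already (kubectl get storageclass and kubectl get pv show what the cluster currently has). Since the share is already mounted on every node, one option is to create the PVs statically. A minimal sketch, where the PV name, size, server address, and export path are all assumptions, and storageClassName must match what the chart's PVCs request:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-elastic-pv          # hypothetical name
spec:
  capacity:
    storage: 50Gi                   # assumption: must cover the PVC's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard        # must match the PVC's storageClassName
  nfs:
    server: 10.0.0.10               # hypothetical NFS server address
    path: /srv/nfs-ssd              # assumption: the export behind /data/nfs-ssd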
Hi, I tried a new installation using the downloaded files, just running: helm install clearml . -n clearml
without changing anything.
The problem persists.
I'm not sure why, in your case, the liveness probe is trying to access a non-localhost IP. What is the version of the chart you are trying to install? helm list -A
I know what a StorageClass is, but I don't think that's the problem. I do have one ("standard"); it seems the PV claim is not picking it up.
NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
chart-1669648334  clearml    1         2022-11-28 17:12:14.781754603 +0200 IST  deployed  clearml-4.3.0  1.7.0
Also, the k8s distribution and version you are using would be useful.
This is the problem; the elastic pod shows:
Events:
  Type     Reason            Age                From               Message
  Warning  FailedScheduling  65s (x2 over 67s)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
With that in place, k8s should be able to provision the PVC.
You need to investigate why it's still in a Pending state.
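For a Pending PVC, the describe output usually names the cause at the bottom; roughly:

kubectl get pvc -n clearml                    # which claims are Pending
kubectl describe pvc <pvc-name> -n clearml    # the Events section explains why it won't bind
kubectl get pv                                # are there any PVs for it to bind to?
kubectl get storageclass                      # does the class the PVC requests actually exist?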
I'm running on a bare-metal cluster, if that matters.
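On bare metal there is no dynamic provisioner out of the box, which would explain the unbound claims. Since you want NFS anyway, one common option is the nfs-subdir-external-provisioner chart; a sketch, assuming your own NFS server address and export path in place of the placeholders below:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=10.0.0.10 \
  --set nfs.path=/srv/nfs-ssd

That installs a provisioner plus a StorageClass (named nfs-client by default) that PVCs can bind to dynamically.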
Can you please paste the entire helm list -A output?
Seems like I didn't define a persistent volume.
Same when installing directly: sudo helm install clearmlai-1 allegroai-clearml/clearml -n clearml