you need to investigate why it’s still in Pending state
currently kubernetes.io/no-provisioner
seems like i didn't define a persistent volume
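for reference, a StorageClass with that provisioner looks roughly like the sketch below (the class name standard is an assumption); kubernetes.io/no-provisioner does no dynamic provisioning, so every PVC needs a manually created PV it can bind to:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                           # assumed name; use whatever your cluster has
provisioner: kubernetes.io/no-provisioner  # static provisioning only: no PVs are created for you
volumeBindingMode: WaitForFirstConsumer    # binding is delayed until a pod uses the claim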
Client Version: v1.25.3
Kustomize Version: v4.5.7
Server Version: v1.25.3
some suggestions:
- start working just with clearml (no agent or serving; those go in once clearml itself works)
- try a first deploy without any override
- if it works, start adding values to the override file (don't copy in everything, or it will be very difficult to debug; the override file should only contain what you actually override)
- do helm upgrade (a sketch follows this list)
- check problems one by one
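as a sketch of the upgrade step (release name clearml and file name override.yaml are assumptions, adjust to your setup):

# apply only the values you actually changed
helm upgrade clearml allegroai-clearml/clearml -n clearml -f override.yaml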
NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
chart-1669648334  clearml    1         2022-11-28 17:12:14.781754603 +0200 IST  deployed  clearml-4.3.0  1.7.0
can you pls paste the entire helm list -A output?
this is the problem; the elastic pod shows:
Events:
Type     Reason            Age                From               Message
Warning  FailedScheduling  65s (x2 over 67s)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
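to see which claims are unbound, something like this should help (namespace clearml assumed):

kubectl get pvc -n clearml      # claims stuck in Pending are the ones blocking the pod
kubectl get pv                  # are there any PVs at all for them to bind to?
kubectl get storageclass        # is any class marked (default)?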
same thing when i install directly: sudo helm install clearmlai-1 allegroai-clearml/clearml -n clearml
hi, i tried a new installation using the downloaded files, just running: helm install clearml . -n clearml
without changing anything.
the problem persists.
i know what a StorageClass is.. but i don't think that's the problem. i do have a standard one; it seems the PV claims are not picking it up
so, about the missing volume: do you have a recommendation on how to solve this?
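one thing worth checking: if the chart's PVCs don't name a StorageClass explicitly, they only bind through the default class. marking a class as default is a one-liner (class name standard is an assumption):

kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'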
Also, the k8s distribution and version you are using would be useful
can you also show the output of kubectl get po for the namespace where you installed clearml?
i want the storage to be on NFS eventually; the NFS share is mounted to a local path on all the nodes (/data/nfs-ssd)
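since that path is already mounted on every node, one way to satisfy the claims statically is a hostPath PV per claim; a minimal sketch (name, size, class, and subdirectory are all assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-elastic-pv       # hypothetical name, one PV per pending claim
spec:
  capacity:
    storage: 50Gi                # must cover what the PVC requests
  accessModes:
    - ReadWriteOnce
  storageClassName: standard     # must match the class the PVC resolves to
  hostPath:
    path: /data/nfs-ssd/elastic  # subdirectory on the node-local NFS mount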
i set it as default; the results are the same
with that in place, k8s should be able to provision PVCs
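if you'd rather have dynamic provisioning than hand-made PVs, the NFS subdir external provisioner is one common option; an install sketch (server address and export path are placeholders for your NFS setup):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=<nfs-server-ip> \
  --set nfs.path=<exported-path>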
i'm running on a bare-metal cluster, if that matters