Client Version: v1.25.3
Kustomize Version: v4.5.7
Server Version: v1.25.3
Or maybe should I create my own PV?
This is the problem the Elasticsearch pod shows:
Events:
  Type     Reason            Age                From               Message
  ----     ------            ---                ----               -------
  Warning  FailedScheduling  65s (x2 over 67s)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
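A quick way to see why the claim is unbound (the PVC name below is a placeholder; the first command lists the real ones):

kubectl get pvc -n clearml                  # look for claims stuck in Pending
kubectl describe pvc <pvc-name> -n clearml  # the Events section explains why binding fails
kubectl get storageclass                    # check whether any class is marked (default)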
I set it as the default; the results are the same.
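For reference, marking a class as the cluster default is done with the standard annotation (a sketch, assuming the class is literally named "standard"):

kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'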
I want the storage to be on NFS eventually; the NFS share is mounted to a local path on all the nodes (/data/nfs-ssd).
So, about the missing volume: do you have a recommendation for how I can solve this?
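One option is a static NFS-backed PersistentVolume. A minimal sketch, where the PV name, size, server address, and subdirectory are placeholders, and storageClassName has to match whatever the chart's PVC requests:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-elastic-pv          # hypothetical name
spec:
  capacity:
    storage: 50Gi                   # match or exceed what the chart's PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard        # must match the PVC's storageClassName to bind statically
  nfs:
    server: NFS_SERVER_IP           # placeholder: the NFS server's address
    path: /data/nfs-ssd/elastic     # hypothetical subdirectory on the existing share
EOF

Since the chart creates several PVCs, it's usually less work to run a dynamic provisioner such as nfs-subdir-external-provisioner instead of hand-writing a static PV per claim.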
Same result when I install directly: sudo helm install clearmlai-1 allegroai-clearml/clearml -n clearml
NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
chart-1669648334  clearml    1         2022-11-28 17:12:14.781754603 +0200 IST  deployed  clearml-4.3.0  1.7.0
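To see which storage-related values the chart actually exposes (the grep pattern is only a guess at the key names; whatever helm prints is authoritative):

helm show values allegroai-clearml/clearml | grep -i -B1 -A3 storage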
I'm running on a bare-metal cluster, if that matters.
Hey, I'd appreciate help here. I'm stuck.
Is there a mount I need to add?
Hi, I tried a fresh installation using the downloaded chart files, just running: helm install clearml . -n clearml
without changing anything.
The problem persists.
Seems like I didn't define a persistent volume.
I know what a StorageClass is, but I don't think that's the problem. I do have a "standard" one; it seems the PV claims are not picking it up.
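A default class alone isn't enough: the class needs a working provisioner behind it, which bare-metal clusters often lack out of the box. A quick check (sketch):

kubectl get storageclass -o wide           # the PROVISIONER column must name something actually running
kubectl get pods -A | grep -i provision    # e.g. a local-path or NFS provisioner pod, if any

If "standard" has no running provisioner, the PVCs stay Pending forever, which matches the FailedScheduling event above.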