otherwise yes; if this is not an option, you can also mount what already exists, so pls open an issue in the new helm chart repo and we can find a solution
if you already have data over there you may import it
if the mounts are already there on every node, you can also mount them directly on the nodes in a specific folder and then use the Rancher local-path provisioner
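(a rough sketch of that wiring, assuming the share is mounted at the same folder on every node — the path is a placeholder, while the ConfigMap name and namespace follow the provisioner's defaults:)

```yaml
# ConfigMap consumed by the Rancher local-path provisioner; nodePathMap
# points it at the folder on each node where the NFS share is mounted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/mnt/nfs/k8s-volumes"]
        }
      ]
    }
```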
yeah, all the hosts have the same nfs mounts. it’s what we use to store any kind of state we need for apps/services, so they can run on any host without having to duplicate data.
storage classes and provisioners don’t really work because we aren’t trying to create anything new, which is why we use persistent volumes vs storage classes.
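(for anyone following along, a minimal sketch of that pattern, assuming an existing NFS export — server address and paths are placeholders: a statically defined PV pointing at the existing mount, plus a PVC with an empty storageClassName so nothing new gets provisioned:)

```yaml
# Static PV pointing at an already-existing NFS export; nothing is created.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-nfs-pv                   # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # don't delete the data on release
  nfs:
    server: nfs.example.internal          # placeholder NFS/EFS endpoint
    path: /exports/clearml                # placeholder export path
---
# PVC that binds to the PV above; storageClassName: "" disables dynamic
# provisioning, so no storage class is involved.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: existing-nfs-pv
  resources:
    requests:
      storage: 100Gi
```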
btw a good practice is to keep infrastructural stuff decoupled from applications. What about using https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner ? After applying that chart you can simply use the generated storage class; wdyt?
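(once that chart is installed and pointed at the NFS server/export, claims use the class it generates — `nfs-client` by default, per the repo README; a sketch with an illustrative claim name:)

```yaml
# PVC using the storage class created by nfs-subdir-external-provisioner
# (default name: nfs-client); each claim gets its own subdirectory
# under the configured NFS export.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                 # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 20Gi
```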
I think we can find a solution pretty quickly after some checks. Can you pls open an issue on the new helm chart repo so I can take care of it in the coming days?
Or do you want to dynamically mount an nfs endpoint directly? (I understood you need this one)
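(for the record, the "direct" option would look roughly like this inline volume in a pod spec — server and path are placeholders:)

```yaml
# Inline NFS volume: the kubelet mounts the export directly into the pod,
# with no PV/PVC or storage class involved.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-example             # illustrative
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      nfs:
        server: nfs.example.internal   # placeholder endpoint
        path: /exports/clearml         # placeholder export path
```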
Hi BurlySeagull48, I’m interested in your use case and I think we can find a solution. Do the NFS mounts have the same path on every node?
neat! please keep us updated on your progress, maybe we should add an upgrade section once you have the details worked out
we have a direct connect to aws from our data center, so our vpcs are treated the same as a local network resource.
as far as the kube node is concerned, it is an nfs mount
no… they function as nfs mounts and are configured as such in the current deployment.
they are efs mounts that already exist
Hmm, that might be more complicated to restore, right?
one last note on this. for my use case, the persistent volume route makes more sense, because we don’t need to dynamically create the storage. they are efs mounts that already exist, so the use of a storage class wouldn’t be helpful.
yeah, looking into it more, it’s going to require a bit of extra setup whether i use the old storage templates or set it up with an external storage class and all that.
because you would need to add the storage class manifest
i think this still requires some additional modification to the templates to make it work though
i was just looking at something like that right before you responded, so i was going down the right path.
so we essentially have to configure our own storage class in the persistence section for each dependency.
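(for reference, that would look something like this in the chart values — the key names here are assumptions, since each subchart exposes its own persistence settings; check the dependency’s values.yaml for the exact ones:)

```yaml
# Hypothetical values override -- the exact keys depend on each subchart's
# values.yaml, so treat these as illustrative, not authoritative.
elasticsearch:
  volumeClaimTemplate:
    storageClassName: my-nfs-class   # placeholder storage class
mongodb:
  persistence:
    storageClass: my-nfs-class
redis:
  master:
    persistence:
      storageClass: my-nfs-class
```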
he goes more into the why of it below that message.
AgitatedDove14 I think you're right, but I would also see what JuicyFox94 has to say about it 🙂
I think this is the discussion you are after:
https://clearml.slack.com/archives/C01H5VAUZ8R/p1612452197004900?thread_ts=1612273112.002400&cid=C01H5VAUZ8R
if we host our persistent data/volumes on efs then there is no mechanism to accommodate that in the 2.0 helm charts. i would essentially have to pull the templates/values from the previous version and fit them into the new version.
I think this is the only mount you need:
Data persisted in every Kubernetes volume by ClearML will be accessible in /tmp/clearml-kind folder on the host.
SuccessfulKoala55 is this correct ?
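(if that quote is about the kind-based quickstart, the host mapping usually comes from the kind cluster config — a sketch, assuming kind’s bundled local-path provisioner, which stores volume data under /var/local-path-provisioner on the node:)

```yaml
# kind cluster config mapping the node's local-path-provisioner data dir
# to a folder on the host, so volume data survives outside the kind node.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /tmp/clearml-kind             # host folder from the docs quote
        containerPath: /var/local-path-provisioner
```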
looks like the same info that’s in https://github.com/allegroai/clearml-helm-charts
which is what i’ve been working off of. persistent volumes are completely gone.
the storage configuration appears to have changed quite a bit.
Yes, I think this is part of the cloud-ready effort.
I think you can find the definitions here:
https://artifacthub.io/packages/helm/allegroai/clearml