
AgitatedDove14 were you able to verify there was a fix released for the http 1.1 issue?
yeah we are planning on using helm. i just didn’t know if anyone had created charts for clearml with istio built into it. i’m working on creating a custom config with istio
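something like this is the shape i'm going for (the hostnames and the webserver service name are placeholders, not anything from the official chart):

```yaml
# rough istio ingress sketch for the clearml webserver (names/hosts are placeholders)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: clearml-gateway
  namespace: clearml
spec:
  selector:
    istio: ingressgateway        # route through istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "app.clearml.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: clearml-webserver
  namespace: clearml
spec:
  hosts:
    - "app.clearml.example.com"
  gateways:
    - clearml-gateway
  http:
    - route:
        - destination:
            host: clearml-webserver   # assumed service name, check the chart
            port:
              number: 80
```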
right… it’s the nginx config that needs the setting. glad to hear it’s going to be updated though.
i guess i mean for anything using nodeport
it was still an issue with what i deployed. 1.0.2 i think is the version
was it ever discovered if this was pushed to the latest cloud helm chart?
we got it running. waiting for my coworker to give me the summary of what he did... he said it was something in the nginx config though
so yeah… for clearml-server
no… they function as nfs mounts and are configured as such in the current deployment.
looks like the same info that’s in https://github.com/allegroai/clearml-helm-charts
which is what i’ve been working off of. persistent volumes are completely gone.
if we host our persistent data/volumes on efs, then there is no mechanism to accommodate that in the 2.0 helm charts. i would essentially have to pull the templates/values from the previous version and fit them into the new version.
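i.e. grab the pv template out of the old chart and carry it over, roughly this shape (the .Values paths here are made up for illustration, the old chart's will differ):

```yaml
# templates/pv.yaml carried over from the pre-2.0 chart (illustrative shape only)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Release.Name }}-mongodb-pv
spec:
  capacity:
    storage: {{ .Values.persistence.size }}
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""                            # static pv, no provisioner
  nfs:
    server: {{ .Values.persistence.efsServer }}   # existing efs mount target
    path: {{ .Values.persistence.path }}
```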
i was actually just looking at something like that before you responded, so i was going down the right path.
so we essentially have to configure our own storage class in the persistence section for each dependency.
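i.e. something like this in the values override. the exact keys depend on each subchart, so these are illustrative, not copied from the chart:

```yaml
# values override sketch, key names vary per subchart so check each values.yaml
mongodb:
  persistence:
    enabled: true
    storageClass: "efs-sc"       # hypothetical storage class name
redis:
  master:
    persistence:
      enabled: true
      storageClass: "efs-sc"
elasticsearch:
  volumeClaimTemplate:
    storageClassName: "efs-sc"
```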
he goes more into the why of it below that message.
one last note on this. for my use case, the persistent volume route makes more sense, because we don’t need to dynamically create the storage. they are efs mounts that already exist, so the use of a storage class wouldn’t be helpful.
yeah all the hosts have the same nfs mounts. it’s what we use to store any kind of state that we need to for apps/services to allow it to run on any host without having to duplicate data.
storage classes and provisioners don’t really work because we aren’t trying to create anything new, which is why we use persistent volumes instead of storage classes.
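concretely, the claim just gets pinned to the pre-existing static pv with volumeName and an empty storageClassName, so no provisioner ever gets involved (names/sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-mongodb-pvc        # placeholder name
  namespace: clearml
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""             # empty string opts out of dynamic provisioning
  volumeName: clearml-mongodb-pv   # bind directly to the existing static pv
  resources:
    requests:
      storage: 50Gi
```

then the dependency gets pointed at it through something like an existingClaim value, if the subchart exposes one.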
we have a direct connect to aws from our data center, so our vpcs are treated the same as a local network resource.
the storage configuration appears to have changed quite a bit.
because you would need to add the storage class manifest
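for reference, that manifest would look roughly like this if the efs csi driver were in play (the name, ids, and params are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc                   # matches the name used in the values override
provisioner: efs.csi.aws.com     # assumes the aws efs csi driver is installed
parameters:
  provisioningMode: efs-ap       # dynamic provisioning via efs access points
  fileSystemId: fs-12345678      # placeholder filesystem id
  directoryPerms: "700"
```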
the issue moving forward is that if we restart the pod, we’ll have to manually update it again, similar to what we were doing before the http 1.1 setting was put in place.
i think this still requires some additional modification to the templates to make it work though
nginx.conf appears to be a copy of clearml.conf.template, and i’m trying to figure out what we can do to modify that prior to deployment.
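one way to make that stick across restarts (instead of hand-editing inside the pod like we did for the http 1.1 setting) might be to ship the edited template in a configmap and mount it over the original. everything here is a sketch, the template path and upstream name need to be verified against the image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: clearml-nginx-template
  namespace: clearml
data:
  clearml.conf.template: |
    # edited copy of the image's template, only the relevant bit shown;
    # proxy_http_version is the "http 1.1 setting" we kept patching by hand
    location /api/ {
      proxy_http_version 1.1;
      proxy_set_header Connection "";
      proxy_pass http://clearml-apiserver:8008;   # assumed upstream name/port
    }
```

and then mount it over the template in the webserver deployment (fragment, not a full manifest):

```yaml
      containers:
        - name: clearml-webserver
          volumeMounts:
            - name: nginx-template
              mountPath: /etc/nginx/clearml.conf.template   # assumed path, verify in the image
              subPath: clearml.conf.template
      volumes:
        - name: nginx-template
          configMap:
            name: clearml-nginx-template
```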