
istio expects a certain host and whatever is getting passed by the pod isn't right.
he goes more into the why of it a bit below that message.
i think this still requires some additional modification to the templates to make it work though
because you would need to add the storage class manifest
yeah all the hosts have the same nfs mounts. it’s what we use to store any state that apps/services need, so they can run on any host without having to duplicate data.
storage classes and provisioners don’t really work because we aren’t trying to create anything new, which is why we use persistent volumes vs storage classes.
so we essentially have to configure our own storage class in the persistence section for each dependency.
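to make that concrete, the values overrides end up looking something like this — the exact keys depend on the chart version and which subcharts it pulls in, so treat every name below as illustrative:
```
# values-override.yaml (illustrative; actual keys depend on the chart/subchart versions)
elasticsearch:
  volumeClaimTemplate:
    storageClassName: "clearml-elastic"
    resources:
      requests:
        storage: 50Gi

mongodb:
  persistence:
    storageClass: "clearml-mongo"
    size: 20Gi

redis:
  master:
    persistence:
      storageClass: "clearml-redis"
      size: 5Gi
```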
so it seems to be how istio is handling the routing. if we use the internal service address in the nginx configs it seems to go ok.
one last note on this. for my use case, the persistent volume route makes more sense, because we don’t need to dynamically create the storage. they are efs mounts that already exist, so the use of a storage class wouldn’t be helpful.
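to spell that out, the static setup is basically a persistent volume pointing at the existing mount plus a claim that binds to it by name — a minimal sketch, with made-up names/paths and the efs filesystem exposed over nfs:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-data-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""            # empty so no provisioner is involved
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # existing efs mount target (placeholder)
    path: /clearml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-data-pvc
  namespace: clearml
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # match the pv, skip dynamic provisioning
  volumeName: clearml-data-pv     # bind directly to the pre-existing pv
  resources:
    requests:
      storage: 100Gi
```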
nginx.conf appears to be a copy of clearml.conf.template and i’m trying to figure out what we can do to modify that prior to deployment.
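one option i’m considering is baking a modified copy of the template into a configmap and mounting it over the file in the webserver pod — just a sketch, the container name, file name and mount path are guesses until i check the image:
```
# create the configmap from the modified copy of the template:
#   kubectl -n clearml create configmap clearml-webserver-nginx \
#     --from-file=clearml.conf.template
#
# then overlay it into the webserver deployment (container name and
# mount path are guesses -- check the actual image):
spec:
  template:
    spec:
      volumes:
        - name: nginx-override
          configMap:
            name: clearml-webserver-nginx
      containers:
        - name: clearml-webserver
          volumeMounts:
            - name: nginx-override
              mountPath: /etc/nginx/clearml.conf.template
              subPath: clearml.conf.template
```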
i guess i mean for anything using nodeport
yeah we are planning on using helm. i just didn’t know if anyone had created charts for clearml with istio built into it. i’m working on creating a custom config with istio
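roughly what i’m layering on top of the chart right now — hostnames and service names below are placeholders:
```
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: clearml-gateway
  namespace: clearml
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "clearml.example.com"    # placeholder hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: clearml
  namespace: clearml
spec:
  hosts:
    - "clearml.example.com"        # placeholder hostname
  gateways:
    - clearml-gateway
  http:
    - route:
        - destination:
            host: clearml-webserver.clearml.svc.cluster.local   # internal service fqdn
            port:
              number: 80
```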
it was still an issue with what i deployed. 1.0.2 i think is the version
AgitatedDove14 were you able to verify there was a fix released for the http 1.1 issue?
was it ever discovered if this was pushed to the latest cloud helm chart?
right… it’s nginx that needs to be set. glad to hear it’s going to be updated though.
i have it deployed successfully with istio. the only thing we had to do to get it to work was to modify the nginx.conf in the webserver pod to allow http 1.1
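for anyone hitting the same thing, the change boils down to adding this inside the proxied location blocks in nginx.conf (exact layout varies by template version):
```
# inside each proxied location block in the webserver's nginx.conf
proxy_http_version 1.1;          # nginx defaults to http/1.0 for proxy_pass,
                                 # which the istio/envoy sidecar rejects
proxy_set_header Connection "";  # optional: lets upstream connections stay keep-alive
```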
i was actually just looking at something like that before you responded, so i was going down the right path.
we have a direct connect to aws from our data center, so our vpcs are treated the same as a local network resource.
no… they function as nfs mounts and are configured as such in the current deployment.
the storage configuration appears to have changed quite a bit.
as far as the kube node is concerned it is an nfs mount
if we host our persistent data/volumes on efs then there is not a mechanism to accommodate that in the 2.0 helm charts. i would essentially have to pull the template/values from the previous version, and fit it into the new version.
looks like the same info that’s in https://github.com/allegroai/clearml-helm-charts
which is what i’ve been working off of. persistent volumes are completely gone.
so yeah… for clearml-server
looks like at the end of the day we removed proxy_set_header Host $host;
and use the fqdn for the proxy_pass line
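so a working location block ends up looking roughly like this — the service name and port are placeholders for whatever the internal service fqdn actually is:
```
location /api/ {
    proxy_http_version 1.1;
    # no "proxy_set_header Host $host;" here -- without it nginx sends the
    # upstream's own host ($proxy_host), which is what istio routes on
    proxy_pass http://clearml-apiserver.clearml.svc.cluster.local:8008;
}
```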
yeah looking into it more it’s going to require a bit of extra setup whether i use the old storage templates or set it up with an external storage class and all that.