i guess i mean for anything using nodeport
it was still an issue with what i deployed. 1.0.2 i think is the version
nginx.conf appears to be a copy of clearml.conf.template and i’m trying to figure out what we can do to modify that prior to deployment.
looks like at the end of the day we removed proxy_set_header Host $host;
and use the fqdn for the proxy_pass line
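A minimal sketch of the change described above, assuming a standard nginx `location` block; the upstream FQDN and port here are placeholders, not the actual values from the chart:

```nginx
location / {
    # removed: proxy_set_header Host $host;
    # istio routes on the Host header, so forwarding the app hostname
    # back through the mesh created a routing loop
    proxy_pass http://clearml-apiserver.clearml.svc.cluster.local:8008;
}
```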
he goes more into the why of it a bit below that message.
yeah looking into it more, it’s going to require a bit of extra setup whether i use the old storage templates or set it up with an external storage class and all that.
i was just looking at something like that before you responded, so i was going down the right path.
was it ever discovered if this was pushed to the latest cloud helm chart?
istio expects a certain host and whatever is getting passed by the pod isn't right.
i have it deployed successfully with istio. the only thing we had to do to get it to work was to modify the nginx.conf in the webserver pod to allow http 1.1
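The HTTP/1.1 modification mentioned above might look like this in the webserver's nginx.conf (a sketch; the upstream address is a placeholder):

```nginx
location / {
    proxy_http_version 1.1;          # nginx proxies HTTP/1.0 by default, which Envoy/istio rejected here
    proxy_set_header Connection "";  # clear Connection so keep-alive works over 1.1
    proxy_pass http://clearml-apiserver.clearml.svc.cluster.local:8008;
}
```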
because you would need to add the storage class manifest
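A sketch of such a storage class manifest, assuming the AWS EFS CSI driver is installed; the filesystem ID is a placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap         # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0  # placeholder, replace with the real filesystem ID
  directoryPerms: "700"
```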
looks like the same info that’s in https://github.com/allegroai/clearml-helm-charts
which is what i’ve been working off of. persistent volumes are completely gone.
so we essentially have to configure our own storage class in the persistence section for each dependency.
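An illustrative values override along those lines; the exact keys differ between the bundled subcharts and chart versions, so check the chart's own values.yaml for the real structure:

```yaml
# illustrative only: pointing each dependency's persistence at one storage class
mongodb:
  persistence:
    storageClass: efs-sc
redis:
  master:
    persistence:
      storageClass: efs-sc
elasticsearch:
  volumeClaimTemplate:
    storageClassName: efs-sc
```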
AgitatedDove14 were you able to verify there was a fix released for the http 1.1 issue?
we have a direct connect to aws from our data center, so our vpcs are treated the same as a local network resource.
i think this is still requiring some additional modification to the templates to make it work though
istio routes based off of hostname so the app hostname was being passed causing the loop
the issue moving forward is if we restart the pod we will have to manually update that again. similar to what we were doing prior to the http 1.1 setting being put in place.
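One generic Kubernetes way to keep an edited nginx.conf across pod restarts is mounting it from a ConfigMap; whether the chart exposes a hook for this depends on the chart version, so the names below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: clearml-webserver-nginx
data:
  nginx.conf: |
    # ... full edited nginx.conf contents ...
---
# then, in the webserver Deployment's pod spec:
# volumes:
#   - name: nginx-conf
#     configMap:
#       name: clearml-webserver-nginx
# and in the container's volumeMounts:
#   - name: nginx-conf
#     mountPath: /etc/nginx/nginx.conf
#     subPath: nginx.conf
```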
yeah we are planning on using helm. i just didn’t know if anyone had created charts for clearml with istio built into it. i’m working on creating a custom config with istio
right… it’s nginx that needs to be set. glad to hear it’s going to be updated though.
one last note on this. for my use case, the persistent volume route makes more sense, because we don’t need to dynamically create the storage. they are efs mounts that already exist, so the use of a storage class wouldn’t be helpful.
no… they function as nfs mounts and are configured as such in the current deployment.
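For that setup, a pre-provisioned PersistentVolume pointing at an existing EFS mount over NFS could look like this (a sketch; the server address, path, and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-data
spec:
  capacity:
    storage: 100Gi                  # placeholder size
  accessModes:
    - ReadWriteMany                 # EFS/NFS supports many-node mounts
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-0123456789abcdef0.efs.us-east-1.amazonaws.com  # placeholder
    path: /
```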
seems to only happen when going through the app