the issue moving forward is that if we restart the pod, we will have to manually apply that change again, similar to what we were doing before the http 1.1 setting was put in place.
yeah, all the hosts have the same nfs mounts. that's what we use to store any state the apps/services need, so they can run on any host without us having to duplicate data.
storage classes and provisioners don't really fit because we aren't provisioning anything new, which is why we use statically defined persistent volumes instead of storage classes.
so we essentially have to configure our own storage class in the persistence section for each dependency.
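for context, a rough sketch of what one of those static volumes looks like, assuming a hypothetical nfs server and export path (the names, size, and paths here are made up, not our real values):

```yaml
# hypothetical statically provisioned PV backed by the shared NFS mount;
# server, path, and size are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-mongodb-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""   # empty class so no provisioner gets involved
  nfs:
    server: nfs.example.internal
    path: /exports/clearml/mongodb
---
# matching claim; the empty storageClassName forces static binding to the PV above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
```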
i have it deployed successfully with istio. the only thing we had to do to get it to work was to modify the nginx.conf in the webserver pod to allow http 1.1
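for anyone curious, the actual change is small. a minimal sketch of that edit, assuming the relevant directive is nginx's proxy_http_version (the istio/envoy sidecar won't accept the HTTP/1.0 that nginx proxies with by default); the location block, upstream name, and port are placeholders rather than the exact ClearML config:

```yaml
# hypothetical ConfigMap carrying the patched nginx fragment; the real file
# layout inside the clearml webserver image may differ
apiVersion: v1
kind: ConfigMap
metadata:
  name: clearml-webserver-nginx
data:
  clearml.conf: |
    location /api/ {
      proxy_http_version 1.1;   # envoy rejects the HTTP/1.0 nginx defaults to
      proxy_pass http://clearml-apiserver:8008/;
    }
```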
AgitatedDove14 were you able to verify there was a fix released for the http 1.1 issue?
it was still an issue with what i deployed. 1.0.2 i think is the version
yeah we are planning on using helm. i just didn’t know if anyone had created charts for clearml with istio built into it. i’m working on creating a custom config with istio
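in case it helps anyone, the shape i'm going for is roughly this: skip the chart's own ingress and put an istio VirtualService in front of the services the chart installs. a sketch only, with the gateway, host, and service names/ports all made up:

```yaml
# hypothetical VirtualService in front of the chart's services; gateway, host,
# and destination names/ports are placeholders
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: clearml
spec:
  hosts:
    - clearml.example.internal
  gateways:
    - istio-system/main-gateway
  http:
    - match:
        - uri:
            prefix: /api
      route:
        - destination:
            host: clearml-apiserver
            port:
              number: 8008
    - route:   # everything else goes to the web ui
        - destination:
            host: clearml-webserver
            port:
              number: 80
```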
nginx.conf appears to be a copy of clearml.conf.template and i’m trying to figure out what we can do to modify that prior to deployment.
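one option that would also fix the restart problem from earlier: mount a ConfigMap (like the one sketched above) over the generated file so the patch survives pod restarts. a hypothetical strategic-merge patch, where the in-container path is a guess at where the template lands:

```yaml
# nginx-mount.yaml: hypothetical patch mounting the ConfigMap over the
# generated config so the edit persists across restarts
spec:
  template:
    spec:
      containers:
        - name: clearml-webserver
          volumeMounts:
            - name: nginx-override
              mountPath: /etc/nginx/conf.d/clearml.conf
              subPath: clearml.conf
      volumes:
        - name: nginx-override
          configMap:
            name: clearml-webserver-nginx
```

applied with something like `kubectl patch deployment clearml-webserver --patch-file nginx-mount.yaml`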
was it ever confirmed whether this was pushed to the latest cloud helm chart?
right… it’s the nginx config that needs to be changed. glad to hear it’s going to be updated though.
i guess i mean for anything using nodeport
so yeah… for clearml-server
so it seems to come down to how istio is handling the routing. if we use the internal service address in the nginx configs, it works fine.
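concretely, that just means pointing the proxy at the in-cluster service DNS name. a variant of the earlier sketch, with the namespace and port still placeholders:

```yaml
# hypothetical variant using the fully qualified in-cluster address so the
# sidecar routes it as normal service traffic; namespace/port are placeholders
apiVersion: v1
kind: ConfigMap
metadata:
  name: clearml-webserver-nginx
data:
  clearml.conf: |
    location /api/ {
      proxy_http_version 1.1;
      proxy_pass http://clearml-apiserver.clearml.svc.cluster.local:8008/;
    }
```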