looks like at the end of the day we removed proxy_set_header Host $host;
and used the fqdn for the proxy_pass line
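a minimal sketch of what that change looks like in an nginx config (the upstream name and fqdn here are placeholders, not the actual hostnames from this setup):

```nginx
# before: forwarding the app's Host header made istio route the request
# back to the app, producing the loop
#
# location /api/ {
#     proxy_set_header Host $host;
#     proxy_pass http://apiserver;
# }

# after: drop the Host override and proxy to the api fqdn directly;
# by default nginx then sends the proxy_pass host as the Host header,
# which is the host istio expects
location /api/ {
    proxy_pass http://api.example.internal:8008;
}
```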
we got it running. waiting for my coworker to give me the summary of what he did... he said it was something in the nginx config though
i was just looking at something like that right before you responded, so i was going down the right path.
as far as kube node is concerned it is an nfs mount
we have a direct connect to aws from our data center, so our vpcs are treated the same as a local network resource.
looks like the same info that’s in https://github.com/allegroai/clearml-helm-charts
which is what i’ve been working off of. persistent volumes are completely gone.
```
{
    "meta": {
        "id": "aac901e3e58c4381852b0fe1d227c732",
        "trx": "aac901e3e58c4381852b0fe1d227c732",
        "endpoint": {
            "name": "login.supported_modes",
            "requested_version": "2.13",
            "actual_version": "1.0"
        },
        "result_code": 200,
        "result_subcode": 0,
        "result_msg": "OK",
        "error_stack": "",
        "error_data": {}
    },
    "data": {
        "authenticated": false,
        "basic": {
            "enabled": false,
            "guest": {
                "enabled": false
...
```
the storage configuration appears to have changed quite a bit.
because you would need to add the storage class manifest
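a storage class manifest for efs would look roughly like this, assuming the AWS EFS CSI driver is installed (the fileSystemId is a placeholder, not a real value):

```yaml
# hypothetical StorageClass for dynamic provisioning via the
# AWS EFS CSI driver (efs.csi.aws.com)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap        # provision an EFS access point per volume
  fileSystemId: fs-0123456789abcdef0   # placeholder filesystem id
  directoryPerms: "700"
```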
so it seems to be how istio is handling the routing. if we use the internal service address in the nginx configs it seems to go ok.
but there are a bunch of 405 errors in the log
if we host our persistent data/volumes on efs then there is not a mechanism to accommodate that in the 2.0 helm charts. i would essentially have to pull the template/values from the previous version, and fit it into the new version.
yeah, looking into it more, it's going to require a bit of extra setup whether i use the old storage templates or set it up with an external storage class and all that.
istio routes based off of hostname, so the app hostname was being passed, causing the loop
istio expects a certain host and whatever is getting passed by the pod isn't right.
i think this is still requiring some additional modification to the templates to make it work though
he goes more into the why of it a bit below that message.
looks like he hardcoded the api fqdn in proxy_set_header Host
instead of using $host
seems to only happen when going through the app
one last note on this. for my use case, the persistent volume route makes more sense, because we don’t need to dynamically create the storage. they are efs mounts that already exist, so the use of a storage class wouldn’t be helpful.
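since the kube nodes just see the efs mounts as nfs, a statically provisioned PV/PVC pair is enough; a sketch, with placeholder server, path, and sizes:

```yaml
# statically provisioned PV pointing at an existing EFS mount over NFS;
# server/path/capacity are placeholders, not values from this setup
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""          # empty string opts out of dynamic provisioning
  nfs:
    server: fs-0123456789abcdef0.efs.us-east-1.amazonaws.com
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # bind only against pre-created PVs
  resources:
    requests:
      storage: 100Gi
```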