so yeah… for clearml-server
i have it deployed successfully with istio. the only thing we had to do to get it to work was to modify the nginx.conf in the webserver pod to allow http 1.1
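for anyone hitting the same thing, the change is roughly just adding proxy_http_version to the proxied location in nginx.conf — a minimal sketch, where the location and upstream name/port are placeholders rather than the exact block from the webserver image:

```
# sketch of the http 1.1 tweak; location and upstream are placeholders
location /api {
    # nginx proxies upstream with HTTP/1.0 by default, which the istio
    # sidecar didn't accept, so force HTTP/1.1 on the upstream connection
    proxy_http_version 1.1;
    proxy_pass http://apiserver:8008;
}
```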
looks like he hardcoded the api fqdn in proxy_set_header Host instead of using $host
istio expects a certain host and whatever is getting passed by the pod isn't right.
i guess i mean for anything using nodeport
looks like at the end of the day we removed proxy_set_header Host $host; and used the fqdn in the proxy_pass line
istio routes based off of hostname so the app hostname was being passed causing the loop
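so roughly, the before/after looked something like this — the api fqdn and upstream here are placeholders, not the literal lines from the pod:

```
# before (sketch): forwarding the app hostname made istio route the request
# right back to the webserver, hence the loop
#   proxy_set_header Host $host;
#   proxy_pass http://apiserver:8008;

# after (sketch): no Host override, proxy straight to the api fqdn so istio
# matches the api route instead of the app one
location /api {
    proxy_http_version 1.1;
    proxy_pass http://api.clearml.example.com;
}
```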
one last note on this. for my use case, the persistent volume route makes more sense, because we don’t need to dynamically create the storage. they are efs mounts that already exist, so the use of a storage class wouldn’t be helpful.
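for anyone curious what the persistent volume route looks like in practice, it's basically a statically provisioned pv pointing at the existing efs mount plus a pvc bound to it — a rough sketch, with the filesystem dns name, resource names and sizes all placeholders:

```
# sketch only: statically provisioned PV backed by an existing EFS mount,
# exposed to the cluster as plain NFS (names, ids and sizes are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clearml-data-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-0123456789abcdef0.efs.us-east-1.amazonaws.com
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # empty so it binds to the pre-created PV, not a provisioner
  volumeName: clearml-data-pv
  resources:
    requests:
      storage: 100Gi
```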
if we host our persistent data/volumes on efs then there is not a mechanism to accommodate that in the 2.0 helm charts. i would essentially have to pull the template/values from the previous version, and fit it into the new version.
so we essentially have to configure our own storage class in the persistence section for each dependency.
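the exact keys in the 2.0 values.yaml are almost certainly a bit different per dependency, but the idea is roughly this:

```
# sketch of the idea, not the actual keys from the clearml chart --
# each dependency's persistence block points at our own storage class
elasticsearch:
  persistence:
    enabled: true
    storageClass: efs-sc        # placeholder storage class name
mongodb:
  persistence:
    enabled: true
    storageClass: efs-sc
redis:
  master:
    persistence:
      enabled: true
      storageClass: efs-sc
```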
as far as the kube node is concerned it is an nfs mount
he goes more into the why of it a bit below that message.
yeah looking into it more it’s going to require a bit of extra setup whether i use the old storage templates or set it up with an external storage class and all that.
the issue moving forward is that if we restart the pod we will have to manually update that again, similar to what we were doing prior to the http 1.1 setting being put in place.
```
{
  "meta": {
    "id": "aac901e3e58c4381852b0fe1d227c732",
    "trx": "aac901e3e58c4381852b0fe1d227c732",
    "endpoint": {
      "name": "login.supported_modes",
      "requested_version": "2.13",
      "actual_version": "1.0"
    },
    "result_code": 200,
    "result_subcode": 0,
    "result_msg": "OK",
    "error_stack": "",
    "error_data": {}
  },
  "data": {
    "authenticated": false,
    "basic": {
      "enabled": false,
      "guest": {
        "enabled": false
        ...
```
the storage configuration appears to have changed quite a bit.
AgitatedDove14 were you able to verify there was a fix released for the http 1.1 issue?
i was actually just looking at something like that before you responded, so i was going down the right path.
we have a direct connect to aws from our data center, so our vpcs are treated the same as a local network resource.
was it ever discovered if this was pushed to the latest cloud helm chart?
yeah we are planning on using helm. i just didn’t know if anyone had created charts for clearml with istio built into it. i’m working on creating a custom config with istio
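the shape of what i’m putting together is just a gateway plus a virtualservice per clearml service — a sketch, with the hostnames, namespace and service names/ports as placeholders:

```
# sketch of an istio config in front of clearml; hostnames and service
# names/ports are placeholders, not taken from the official chart
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: clearml-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - app.clearml.example.com
        - api.clearml.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: clearml-api
spec:
  hosts:
    - api.clearml.example.com
  gateways:
    - clearml-gateway
  http:
    - route:
        - destination:
            host: clearml-apiserver.clearml.svc.cluster.local
            port:
              number: 8008
```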
right… it’s nginx that needs to be set. glad to hear it’s going to be updated though.
looks like the same info that’s in https://github.com/allegroai/clearml-helm-charts
which is what i’ve been working off of. persistent volumes are completely gone.
no… they function as nfs mounts and are configured as such in the current deployment.
because you would need to add the storage class manifest
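i.e. something like this would have to be created first — a sketch assuming the aws efs csi driver is installed, with the filesystem id as a placeholder:

```
# sketch of a StorageClass for dynamic provisioning on EFS; assumes the
# aws efs csi driver is installed, filesystem id is a placeholder
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # provision via EFS access points
  fileSystemId: fs-0123456789abcdef0   # placeholder filesystem id
  directoryPerms: "700"
```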