adding @<1523701087100473344:profile|SuccessfulKoala55> to the conversation because I'm not totally sure the problem lies in the ingress; it looks like a bad token, but it shouldn't be one, since init was good
btw a good practice is to keep infrastructural stuff decoupled from applications. What about using https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner ? After applying that chart you can simply use the generated storage class; wdyt?
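For example (a minimal sketch: `clearml-data` is just a placeholder claim name, and `nfs-client` is that chart's default generated storage class, adjust if you overrode it):

```yaml
# PVC bound to the storage class created by nfs-subdir-external-provisioner.
# "nfs-client" is the chart's default storage class name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-data        # hypothetical claim name
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany         # NFS allows shared read-write across pods
  resources:
    requests:
      storage: 50Gi
```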
this should be the form that works on Helm
```yaml
apiserver:
  additionalConfigs:
    services.conf: |
      auth {
        # Fixed users login credentials
        # No other user will be able to login
        fixed_users {
          enabled: true
          pass_hashed: false
          users: [
            {
              username: "jane"
              password: "12345678"
              name: "Jane Doe"
            },
            {
              username: "john"
              password: "12345678"...
```
it will be easier for me to reproduce
```yaml
apiserver:
  additionalConfigs:
    services.conf: |
```
should be
```yaml
apiserver:
  additionalConfigs:
    apiserver.conf: |
```
This way the pod will mount a file called apiserver.conf instead of services.conf, which is not the right filename for auth.
this sounds weird to me
otherwise yes, if this is not an option, you can also mount what already exists; pls open an issue in the new helm chart repo and we can find a solution
if the mounts are already there on every node, you can also mount directly on the nodes in a specific folder and then use the Rancher local-path provisioner, e.g. like the sketch below
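(a minimal sketch, assuming the provisioner is installed with its default `local-path` storage class; the claim name is a placeholder)

```yaml
# PVC served by Rancher's local-path provisioner from a folder on the node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-local       # hypothetical claim name
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce         # local-path volumes are bound to a single node
  resources:
    requests:
      storage: 50Gi
```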
Or do you want to dynamically mount an NFS endpoint directly? (I understood you need this one)
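if it's the direct NFS case, the plain-k8s way is a static PV/PVC pair, something like this sketch (server address and export path are placeholders):

```yaml
# Static PV pointing straight at the NFS export, plus a PVC that binds to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-direct
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10       # hypothetical NFS server address
    path: /exports/clearml  # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-direct
spec:
  storageClassName: ""      # empty class so it binds to the static PV above
  volumeName: nfs-direct
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```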
on OSS it's usually the only way: you run as many agent deployments as the queues you define
just a couple of points:
There's an incomplete PR for this.
it's usually needed for the autoscaler to decide when and how to scale up and down
this will make the autoscaler's life easier, knowing exactly what resources you need
(and any queue has its own basePodTemplate)
about autoscaling: it's a complex topic regarding platform management in this case. The ClearML glue simply spawns pods with the resources defined in the template; see the sketch below.
how your cluster reacts is about scaling the infra as much as needed (Karpenter or any other cloud autoscaler should work)
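a hedged sketch of what I mean in the agent chart values (key names like `agentk8sglue.queue` and `agentk8sglue.basePodTemplate` are assumptions based on the clearml-agent chart layout, pls double-check against the chart's values.yaml):

```yaml
# One agent deployment per queue; the pod template declares the resources
# every task pod from this queue will request, so the cluster autoscaler
# knows exactly what to provision.
agentk8sglue:
  queue: gpu-queue              # hypothetical queue name
  basePodTemplate:
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
        nvidia.com/gpu: 1
      limits:
        nvidia.com/gpu: 1       # GPU requests and limits must match
```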
Hi,
that's usually related to the IPv6/IPv4 stack configuration in your k8s cluster. Are you using just one specific stack?
hi, pls check the first message in this PR conversation; there's a checklist to complete, otherwise CI will not pass
Hi,
how did you specify it in the Helm override file?
thanks for letting us know, I took a note to add more tests on liveness, ty again!
as usual it starts small and after 5 mins the discussion gets challenging 😄 I love this stuff... let me think a bit about it and I'll get back to you asap on this.
about the helm chart, yes, I mean adding the capability of managing a ConfigMap with the config file. If it's interesting I can raise a PR, otherwise I need to fork 😄