
that storage class (sc) is not set as default
In this case I suggest giving k8s-glue a try; it's there by default in the latest chart version
Hey ApprehensiveSeahorse83, I didn't forget about you, it's just a busy time for me; I will answer during the day after a couple more tests on my testing env.
I’m going to investigate this specific use case and will get back to you
but I will try to find something good for you
Ok, I'd like to test it more with you. The credentials exposed in the chart values are system ones and it's better not to change them; let's forget about them for now. If you create a new accesskey/secretkey pair in the UI, you should use those in your agents, and they should not get overwritten in any way. Can you confirm it works without touching the credentials section?
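To make that concrete, this is roughly what the agent-side `clearml.conf` credentials section looks like (a minimal sketch; the key/value placeholders are illustrative, paste the pair generated in the UI):

```
# clearml.conf on the agent machine
api {
    # credentials generated from the ClearML web UI (Settings -> Workspace -> Create new credentials)
    credentials {
        "access_key" = "<ACCESS_KEY_FROM_UI>"
        "secret_key" = "<SECRET_KEY_FROM_UI>"
    }
}
```

The point of the test above is that a UI-generated pair placed here should survive chart upgrades, unlike the system credentials from the chart values.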
Hmm, it should not be related to the chart as far as I know. I'm going to ping SuccessfulKoala55; maybe he can chime in, because I'm not sure why it's happening.
I'm checking whether I can set it through the Helm chart by default; I will investigate by the end of the week. Ty ScrawnyLion96 for pointing me to this interesting behavior!
Thanks for letting us know, I took a note to run more tests on liveness, ty again!
additionalConfigs is a section under apiserver
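For clarity, this is a minimal sketch of where that section sits in a values override file (the file name and contents are placeholders; each key under `additionalConfigs` becomes a config file for the apiserver):

```
apiserver:
  additionalConfigs:
    # key = config file name, value = its contents
    auth.conf: |
      auth {
        # ... your overrides here
      }
```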
on OSS it's usually the only way to run as many agent deployments as the queues you define
(and any queue has its own basePodTemplate)
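A sketch of what that per-queue setup could look like in the agent chart values; the exact key names (`agentk8sglue.queues`, `templateOverrides`) are from memory and may differ by chart version, so check the chart's own values.yaml before using this:

```
agentk8sglue:
  createQueues: true
  queues:
    gpu-queue:
      templateOverrides:
        resources:
          limits:
            nvidia.com/gpu: 1
    cpu-queue:
      templateOverrides:
        resources:
          limits:
            cpu: "4"
```

Declaring resources per queue like this is also what lets the autoscaler know exactly how much capacity each workload needs.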
just a couple of notes
first, I noticed a mistake I made when suggesting the config, this:
Interesting use case, maybe we can create multiple k8s agents for different queues
Hi DeliciousBluewhale87, I'm already using an on-premise config (with a GitOps paradigm) using a custom helm chart. Maybe this is interesting for you.
this is the state of the cluster https://github.com/valeriano-manassero/mlops-k8s-infra
In this case, my apologies for the confusion. If you are going for the AWS autoscaler, it's better to follow the official route; the solution I proposed is for an on-premise cluster containing every component, without an autoscaler. Sorry for that.
Moreover, if you are using minikube you can try the official helm chart https://github.com/allegroai/clearml-server-helm
so you should be able to pass additional stuff in this field directly during Helm apply
```
additionalConfigs:
  auth.conf: |
    auth {
      # Fixed users login credentials
      # No other user will be able to login
      fixed_users {
        enabled: true
        pass_hashed: false
        users: [
          {
            username: "jane"
            password: "12345678"
            name: "Jane Doe"
          },
          {
            username: "john"
            password: "12345678"
            name: "John Doe"
          },
        ]
      }
    }
```
this will make the autoscaler's life easier, knowing exactly how much resources you need
I absolutely need to improve the persistence part of this chart 😄
for the fileserver, the persistent volume needs to be provisioned by a storage class. I usually set it to standard
because it's commonly used in public cloud providers
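As a sketch, a values override along these lines would pin the fileserver volume to a named storage class; the exact key path depends on the chart version, so treat these keys as illustrative and check the chart's values.yaml:

```
fileserver:
  persistence:
    enabled: true
    # name of a StorageClass available in your cluster
    storageClassName: standard
```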
I will release a new chart version with no need to set a default storage class like I asked you to do today
so we also improved the chart, superhappy 😄
regardless of this I probably need to add some more detailed explanations on credentials configs
it’s a queue used by the agent just for internal scheduling purposes