and in the future I do want to have an Agent on the k8s cluster, but then this should not be a problem I guess as the user is set during Task.init, right?
and those env variables are credentials for ClearML. Since they are taken from k8s secrets, they are the same for every user.
Oh ...
I can create secrets for every new user and set env variables accordingly, but perhaps you see a better way out?
So the thing is, if a user spins up the k8s job, the user needs to pass their credentials (so the system knows who it is)... You could just pass the user's key/secret (not nice, but probably not a big issue, as everyone is an Admin anyhow, and this user just launched a job on the k8s cluster, so they are not just a random guest)
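For example, roughly something like this, just a sketch using the kubernetes Python client (the helper name is made up; CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY are the standard ClearML credential env vars):

```python
# Hypothetical sketch: build the env section of the training container so the
# job runs with a specific user's ClearML credentials instead of a shared secret.
from kubernetes import client

def clearml_env_for_user(access_key: str, secret_key: str) -> list:
    # Standard ClearML credential environment variables picked up by the SDK
    return [
        client.V1EnvVar(name="CLEARML_API_ACCESS_KEY", value=access_key),
        client.V1EnvVar(name="CLEARML_API_SECRET_KEY", value=secret_key),
    ]
```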
Actually I would recommend using the k8s Glue, which takes a ClearML job and prepares a k8s job (this way the users do not actually need access to the k8s cluster, and their identity is stored on ClearML)
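To give a feel for what the glue does (this is NOT the actual clearml-agent glue code, just a rough sketch of the idea; the image, namespace and job naming are assumptions): a service that does have cluster access pulls an enqueued ClearML task and turns it into a k8s Job that runs `clearml-agent execute`, so users only ever talk to ClearML:

```python
# Rough sketch of the glue idea: turn a queued ClearML task into a k8s Job.
from kubernetes import client, config

def launch_task_as_k8s_job(task_id: str, namespace: str = "clearml"):
    config.load_kube_config()  # or load_incluster_config() when running in-cluster
    container = client.V1Container(
        name="clearml-task",
        image="python:3.9",  # assumption: whatever image your trainings use
        command=["clearml-agent", "execute", "--id", task_id],
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=f"clearml-task-{task_id[:8]}"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
            backoff_limit=0,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)
```

If I remember correctly, the clearml-agent repository ships a ready-made glue example, so you would normally run that rather than rolling your own.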
and in the future I do want to have an Agent on the k8s cluster, but then this should not be a problem I guess as the user is set during Task.init, right?
You can override the user secret/key with OS env inside the code, but that would mean you commit it 😞
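Something along these lines (project/task names are placeholders), with the obvious drawback that the key/secret end up committed with the code:

```python
import os

# Setting the standard ClearML credential env vars before Task.init makes the
# task register under this user's identity. Hard-coding them like this means
# committing secrets to the repo, which is exactly the drawback mentioned above.
os.environ["CLEARML_API_ACCESS_KEY"] = "<user-access-key>"
os.environ["CLEARML_API_SECRET_KEY"] = "<user-secret-key>"

from clearml import Task

task = Task.init(project_name="my_project", task_name="my_experiment")
```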
Can the user be overridden during task configuration (I don't see such an option in the documentation)?
Hm, not really 😞 this is tied to the security features on top.
That said,
stored them as a k8s secret and they are reused whenever anyone from our ML team starts a new ML model training
Does that mean you are running an Agent on the k8s cluster? What exactly is the flow that causes your k8s credentials to be used?
We have a training template that is a k8s job definition (yaml); it sets env variables inside the docker image that is used for training, and those env variables are credentials for ClearML. Since they are taken from k8s secrets, they are the same for every user.
I can create secrets for every new user and set env variables accordingly, but perhaps you see a better way out?
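For reference, creating those per-user secrets could at least be scripted. A minimal sketch with the kubernetes Python client (the naming scheme and namespace are placeholders); the job template would then point its env variables at the right user's secret via secretKeyRef:

```python
# Hypothetical helper: one k8s Secret per team member holding their personal
# ClearML credentials; the training job template references it via secretKeyRef.
from kubernetes import client, config

def create_user_clearml_secret(username: str, access_key: str, secret_key: str,
                               namespace: str = "ml-training"):
    config.load_kube_config()
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name=f"clearml-credentials-{username}"),
        string_data={
            "CLEARML_API_ACCESS_KEY": access_key,
            "CLEARML_API_SECRET_KEY": secret_key,
        },
    )
    client.CoreV1Api().create_namespaced_secret(namespace=namespace, body=secret)
```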