it would be the same with a docker container and -v
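something like this, roughly (image name and paths are just placeholders, not from this thread):

    docker run -v $HOME/.ssh/known_hosts:/root/.ssh/known_hosts:ro my-image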
then 🙂 np
so if you are using k8s, you can generate a configMap
with the relevant info and mount it
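for example, something along these lines (the configMap name here is made up for illustration):

    kubectl create configmap ssh-known-hosts --from-file=known_hosts=$HOME/.ssh/known_hosts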
That's a cool idea. Then you pass the tolerations definition through a different pod template?
check if you have any more of those recovery reports in the mongo log, it should report progress
I think I have sent you all the existing logs
I will investigate a bit more and then check if I can recover
thank you for your time and support, I appreciate it!
Could it be that it was never allocated to begin with?
what do you mean?
Now I suspect what happened is that the data stayed on another node, and your k8s never took care of that
that's an interesting theory
Should I make a new issue or just reply on the one I mentioned above?
well, it's only when adding a - name
to the template
You can either use StrictHostKeyChecking=no
or generate a known_hosts file. I don't know about other options
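if you go the known_hosts route, you can pre-generate it with ssh-keyscan, e.g. (github.com is just an example host):

    ssh-keyscan github.com > known_hosts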
you can actually inject a known_hosts
file to your docker container/k8s pod through a volume
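a minimal sketch of the pod-spec side, assuming the ssh-known-hosts configMap from the kubectl example above (container name is illustrative):

    volumes:
      - name: ssh-known-hosts
        configMap:
          name: ssh-known-hosts
    containers:
      - name: worker
        volumeMounts:
          - name: ssh-known-hosts
            mountPath: /root/.ssh/known_hosts
            subPath: known_hosts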
oh yea, that is true regarding MITM attacks
that would be a great solution
so if the node went down and then some other node came up, the data is lost
AgitatedDove14 actually I had to set something like:
- env:
    - name: PIP_INDEX_URL
      valueFrom:
        secretKeyRef:
          name: pip-index-url-secret
          key: pip-index-url-key
    - name: GIT_SSH_COMMAND
      value: "ssh -i /root/.ssh/id_rsa -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
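(the referenced secret would be created beforehand, e.g. like this; the index URL itself is a placeholder:

    kubectl create secret generic pip-index-url-secret --from-literal=pip-index-url-key=https://my.private.index/simple
)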
I tried to add it to post_packages
and run it locally; for some reason it didn't install google-cloud-storage. However, it is possible that I have an old clearml pip package installed
Yea definitely
oh cool, didn't know about this one
in the end it's just another env var
seems like I misconfigured something, of course. /secrets
is mounted to the agent but not to the pod template 🙂
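a guess at what the pod template was missing, assuming /secrets comes from a hostPath on the node (all names here are illustrative):

    volumes:
      - name: secrets
        hostPath:
          path: /secrets
    containers:
      - name: worker
        volumeMounts:
          - name: secrets
            mountPath: /secrets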
I'll do that