No worries, and I hope you manage to get that backup.
(I mean new logs; while we're here, did it report any progress?)
Could it be that it was never allocated to begin with?
but the PV seems to be just a path to the labeled node
Check if you have any more of those recovery reports in the Mongo log; it should report progress.
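If it helps, here's a minimal sketch of how you could filter the mongod log for recovery/preallocation messages. The sample lines below are illustrative, not taken from your actual logs, and the real log would come from the pod (e.g. via `kubectl logs` on the mongo pod):

```shell
# Illustrative mongod log lines (placeholders, not your real log):
sample_log=$(cat <<'EOF'
2021-01-01T10:00:00 [initandlisten] recover : no journal files present, no recovery needed
2021-01-01T10:00:01 [FileAllocator] allocating new datafile /data/db/local.0
2021-01-01T10:00:02 [initandlisten] waiting for connections on port 27017
EOF
)

# Keep only the lines that mention recovery or file (pre)allocation:
printf '%s\n' "$sample_log" | grep -iE 'recover|allocat'
```

If all you see are FileAllocator lines and no actual recovery progress, that would point at preallocation rather than data being restored.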
I think I have sent you all the existing logs
And if this is the case, that would explain the empty elastic as well
Now I suspect what happened is that it stayed on another node, and your k8s never took care of that
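A quick way to check that theory on your cluster (pod, PV, and namespace names below are placeholders, adjust to yours):

```shell
# Which node is each pod running on right now?
kubectl get pods -o wide -n trains

# Is the PV actually a hostPath/local volume? If so, the data lives on one node only.
kubectl get pv
kubectl describe pv <your-pv-name>   # look for a HostPath / Local source

# Did the original node go away or get replaced?
kubectl get nodes
```

If the PV is hostPath and the pod now runs on a different node than before, the old data directory would still be sitting on the previous node (if that node still exists).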
That somehow the PV never worked and it was all local inside the pod
Meaning the node restarted (or actually moved)
In that case, I think it is stuck on a previous node; I can't think of any other reason.
Do you have something else on the same PV that was lost? Like the API server configuration?
So if the node went down and some other node came up, the data is lost
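For context, a hostPath PV just points at a directory on whichever node the pod happens to run on, so the data never leaves that node's disk. A sketch of what such a PV typically looks like (names, size, and path here are assumptions, not your actual config):

```yaml
# Illustrative hostPath PV -- names and paths are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: trains-data
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  # hostPath binds the volume to a directory on the local node's disk;
  # if the pod is rescheduled onto a different node, that directory
  # starts out empty there -- nothing is replicated or moved.
  hostPath:
    path: /opt/trains
```

That's why a node replacement looks like total data loss: the new node simply has an empty directory at the same path.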
Yeah, the API server configuration also went away
okay that proves it
I will investigate a bit more and then check if I can recover
thank you for your time and support, I appreciate it!
Oh dear, I think your theory might be correct, and this is just Mongo preallocating storage.
Which means the entire /opt/trains just disappeared
that's an interesting theory
what do you mean?
That might be the case. Where is the k8s running? A cloud service?