I can have much more flexibility and security using Kubernetes-native approaches. I can host multiple sessions behind a single LB with different host headers, etc. A lot of possibilities.
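For context, a minimal sketch of what that host-header routing could look like; the hostnames, Service names (`session-a`, `session-b`), and port are hypothetical, not anything clearml-session creates today:

```yaml
# Hypothetical Ingress fanning two session Jupyter Services out
# behind one load balancer, routed purely by Host header.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clearml-sessions
spec:
  rules:
  - host: session-a.example.com        # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: session-a            # hypothetical per-session Service
            port:
              number: 8888             # default Jupyter port
  - host: session-b.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: session-b
            port:
              number: 8888
```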
+1. The k8s helper needs to be more k8s native.
Something like the GitLab Kubernetes Executor could work well.
SSH is used to access the actual container; all other communication is tunneled on top of it. What exactly is the reason to bind to 0.0.0.0? Maybe it could be a flag you set, but I'm not sure what the scenario is or what we're solving. Thoughts?
I can make some PRs around this. I'm already playing with some changes for my setup.
DisgustedDove53, TrickySheep9
I'm all for it!
I can think of two options here: (1) use the k8s glue + apply template with ports mode (see discussion: https://clearml.slack.com/archives/CTK20V944/p1628091020175100)
(2) create an interface (queue) to launch an arbitrary job on the k8s cluster, with the full pod definition on the Task. This would allow clearml-session to set everything up from the get-go.
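A rough sketch of what a pod definition carried on the Task could look like for option (2); the name, labels, image, and port set here are all hypothetical placeholders, not the actual clearml-session spec:

```yaml
# Hypothetical pod template a Task could carry for option (2);
# ports are declared explicitly instead of being tunneled over SSH.
apiVersion: v1
kind: Pod
metadata:
  name: clearml-session-pod            # hypothetical name
  labels:
    app: clearml-session
spec:
  containers:
  - name: session
    image: jupyter/base-notebook       # placeholder image with Jupyter preinstalled
    ports:
    - containerPort: 8888              # Jupyter
    - containerPort: 22                # SSH, if still wanted alongside direct access
```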
How would you interface with the k8s operator, and what exactly would it do?
(BTW: the reasoning for the SSH is to be able to run as many services as needed inside the pod container without having to ingress multiple ports, and to ensure all connections are end-to-end secure.)
I'm using it in the ClearML k8s integration, so I'd rather avoid all the extra stuff and bind Jupyter directly, then use Istio, etc., to secure access to it. I don't think SSH tunnels bring any functionality or advantages when using Kubernetes.
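For illustration, securing a directly-bound Jupyter with Istio might look roughly like this; the host, gateway, and Service names are hypothetical, and the TLS/auth policy on the Gateway is omitted:

```yaml
# Hypothetical Istio VirtualService routing a session host straight
# to a directly-bound Jupyter Service, with no SSH tunnel in between.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: clearml-session
spec:
  hosts:
  - session.example.com                # hypothetical host
  gateways:
  - clearml-gateway                    # hypothetical Istio Gateway handling TLS/auth
  http:
  - route:
    - destination:
        host: session-a                # hypothetical per-session Service
        port:
          number: 8888                 # Jupyter port
```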
The K8sIntegration could be made into a nice operator, for example, and deeply integrated into ClearML itself. :grinning_face_with_star_eyes:
I think the whole project could be more cloud-friendly. I spent a lot of time adapting it to our k8s environment, and I'm also willing to contribute. I think a roadmap for deeper k8s integration should be created, and then we can start.