Hi SubstantialElk6
We will be running some GUI applications so is it possible to forward the GUI to the clearml-session?
If you can directly access the machine running the agent, yes you could. If not, a reverse proxy is in the works 😉
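For example, assuming the session's SSH endpoint (default port 10022) is reachable from your machine and you have a local X server, X11 forwarding over that connection would look roughly like this (host and user are placeholders):
```
# minimal sketch: X11-forward GUI apps over the clearml-session SSH connection
# <agent-host> and root are placeholders; 10022 is the session's default SSH port
ssh -X -p 10022 root@<agent-host>

# on the remote shell, any GUI app should now render on your local display
xclock
```
This also assumes the remote sshd allows X11Forwarding.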
We have a rather locked-down environment, so I would need a clear view of the network topology and the associated ports.
Basically all connections are outgoing only, with the exception of the clearml-server (listening on ports 8008, 8080 and 8081).
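For reference, those are the standard clearml-server endpoints, so a client-side clearml.conf pointing at them would look roughly like this (the hostname is a placeholder for your server's address):
```
api {
    # <clearml-server> is a placeholder for your server's address
    web_server: http://<clearml-server>:8080
    api_server: http://<clearml-server>:8008
    files_server: http://<clearml-server>:8081
}
```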
... if we have direct access to the Kubernetes worker when we run K8S glue?
Correct, if you have direct access to the Node (on your k8s cluster) from your laptop (assuming the clearml-session is running from the laptop), everything should work.
Unfortunately due to security, clients can't have direct access to the nodes. Is there any possible workarounds at the moment?
Ok thanks, we'll try it out on next availability.
Glue machine or K8S Worker machine?
The K8s worker machine.
You could also configure an ingress service as part of the template, so the pods always have an external port mapped to their SSH port.
If we set up an ingress with MetalLB or Nginx, and added a LoadBalancer into the template yaml, do you think this will work?
I would configure the k8s glue pod template to have a "Service" port-forward to the pod's port 10022 (the default SSH port for the clearml-session), basically allowing the k8s ingress to allocate a port to the pod.
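A minimal sketch of what that Service could look like (names and the label selector are placeholders and have to match whatever labels your glue pod template actually puts on the session pods):
```
apiVersion: v1
kind: Service
metadata:
  name: clearml-session-ssh
spec:
  # LoadBalancer assumes something like MetalLB is available;
  # NodePort would also work if the nodes themselves were reachable
  type: LoadBalancer
  selector:
    # placeholder - must match the labels on the pods spawned by the k8s glue
    clearml-agent-queue: sessions
  ports:
    - name: ssh
      port: 10022
      targetPort: 10022
```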
To test that it worked, spin up the clearml-session and try to SSH to the external IP:port.
Once that works, you can basically tell the clearml-session client which port/gateway it should SSH to.
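The check would look roughly like this, where the IP/port are whatever the ingress/LoadBalancer handed out (I believe the relevant client flag is --remote-gateway, but verify against `clearml-session --help` on your version):
```
# 1. verify the externally allocated endpoint reaches the session's sshd
ssh root@<external-ip> -p <external-port>

# 2. then point the clearml-session client at that gateway instead of the cluster IP
#    (queue name is a placeholder; check --help for whether a port can be included)
clearml-session --queue <k8s-glue-queue> --remote-gateway <external-ip>
```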
Hi AgitatedDove14, thanks.
In this case I am running the k8s glue (glue machine), which then spawns off pods on the Kubernetes worker (worker machine). So when you say direct access, are you referring to the Glue machine or the K8S Worker machine?
If you can directly access the machine running the agent, yes you could. If not, a reverse proxy is in the works
Hi AgitatedDove14, I might have misunderstood your previous comment above. Do you mean that clearml-session will only work, regardless of whether X forwarding is configured, if we have direct access to the Kubernetes worker when we run the K8S glue?
We did some testing today and clearml-session tried to tunnel with a k8s cluster IP, and thus failed.