Trains Authentication and Privacy When Deploying on K8s

Thank you for your help so far.
I have a question about Trains authentication and privacy when deploying on K8s.
I want to integrate building a trains-server into our IaC. Now that I have a server working with an agent deployment, I'm thinking about authorization and authentication.

How do you recommend doing access control in Trains?
Would you run the k8s cluster in a VPC and auto-provision DNS entries for the trains servers in the VPC? In that case I suppose the users will have to connect to the virtual network via VPN from their workstations. The most straightforward (bad) way I can think of is to give our ML team k8s config files with read access to the trains namespace, which they can use to port-forward to the services. This way the k8s authentication is used instead of a trains authentication. They would still be able to log in as anyone, which is not ideal, but better than having the server accessible from the outside.
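Concretely, the kind of scoped access I mean would look roughly like this (a sketch; the namespace and group names are made up):

    # Hypothetical sketch: read-only access to the trains namespace,
    # plus the subresource kubectl port-forward needs.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: trains-port-forward
      namespace: trains
    rules:
      - apiGroups: [""]
        resources: ["pods", "services"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["pods/portforward"]
        verbs: ["create"]   # kubectl port-forward needs create here
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ml-team-port-forward
      namespace: trains
    subjects:
      - kind: Group
        name: ml-team   # assumed group from our identity provider
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: trains-port-forward
      apiGroup: rbac.authorization.k8s.io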
I saw that there is a config file where you can specify users and passwords, but it currently requires

  1. going into the pod running the api server and adding a config file.
  2. restarting the server.
  3. doing it all over again if the pod crashes etc.

I think the k8s way to do this would be to use mounted ConfigMaps and Secrets.
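Roughly what I have in mind (a sketch; the fixed_users block follows the format from the server docs, everything else, names included, is made up):

    # Hypothetical sketch: keep the api server user list in a Secret
    # instead of editing a file inside the pod by hand.
    apiVersion: v1
    kind: Secret
    metadata:
      name: trains-apiserver-auth
      namespace: trains
    stringData:
      apiserver.conf: |
        auth {
          fixed_users {
            enabled: true
            users: [
              { username: "jane", password: "change-me", name: "Jane" }
            ]
          }
        }

    # ...and in the apiserver Deployment, mount it into the config dir
    # (path assumed from the trains-server docs):
        volumeMounts:
          - name: apiserver-auth
            mountPath: /opt/trains/config/apiserver.conf
            subPath: apiserver.conf
      volumes:
        - name: apiserver-auth
          secret:
            secretName: trains-apiserver-auth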
If I end up going with specified users for my company's setup, I will probably have to change the helm chart to support sourcing credentials from Secrets etc.
In that case, would contributions to the trains-helm repo be welcome? Who should I talk to about it?

  
  
Posted 3 years ago

Answers 6


AgitatedDove14 SuccessfulKoala55
Yes, this makes a lot more sense now. Thank you.
I'll give it a go. Once I have something that works, I'll open a GitHub issue to see if it's something you would like to add to the repo.

Thank you very much.

  
  
Posted 3 years ago

SuccessfulKoala55 So far I only saw how the credentials are passed in the config files. Can you point me to where it looks for env vars for authentication?

AgitatedDove14
I thought about the ConfigMaps for the credentials. Having the URLs of each server component (api, web, file) makes sense. The problem with an external load balancer is that it exposes the servers outside of the cluster, which I'm trying to avoid. It might be that my thinking about this is mistaken altogether and I should expose things outside the cluster, but then I have to configure/maintain additional VPNs/VPCs, which is more hassle and more money to the cloud providers, seeing that k8s already provides its own private network and can be used with VPN clients. However, it might be inevitable for production needs, in which case I would have to swallow this pill anyway. There is also the issue of the web server being static, and thus requiring a fixed URL mapping between the web server and the other servers in the browser, but I assume solving this would be too much trouble for too little gain on your part.

  
  
Posted 3 years ago

Contributions are always welcome 🙂
The best way is to open a GitHub issue.
As for user/password configuration, the server can receive these details from environment variables, so you could easily provide them as part of the chart or by integrating with mounted secrets etc.
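For example, something like this in the chart (a sketch; the variable names below are placeholders, not the server's actual env vars, so check the server docs for those):

    # Hypothetical sketch: feed credentials to the apiserver container
    # from a Secret. TRAINS_AUTH_USER/TRAINS_AUTH_PASS are placeholder
    # names, not the server's real variables.
        env:
          - name: TRAINS_AUTH_USER
            valueFrom:
              secretKeyRef:
                name: trains-apiserver-auth   # made-up Secret name
                key: username
          - name: TRAINS_AUTH_PASS
            valueFrom:
              secretKeyRef:
                name: trains-apiserver-auth
                key: password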

  
  
Posted 3 years ago

I think that AgitatedDove14's suggestion is better:

mount the configuration file (the one holding the user/pass) into the pod from a persistent volume
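Something like this, roughly (a sketch; the names are made up and the config path is assumed from the docs):

    # Hypothetical sketch: keep the config dir on a PVC so the
    # user/pass file survives pod restarts.
      volumes:
        - name: apiserver-config
          persistentVolumeClaim:
            claimName: trains-apiserver-config
      containers:
        - name: apiserver
          volumeMounts:
            - name: apiserver-config
              mountPath: /opt/trains/config   # assumed config path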

If you still want to use env vars (which would also have to contain the passwords) - let me know

  
  
Posted 3 years ago

Hi ColossalAnt7 ,
Following up on SuccessfulKoala55's answer:

I saw that there is a config file where you can specify users and passwords, but it currently requires

  1. mount the configuration file (the one holding the user/pass) into the pod from a persistent volume.

I think the k8s way to do this would be to use mounted ConfigMaps and Secrets.

You can use ConfigMaps to make sure the routing is always correct, then add a load balancer (a.k.a. a fixed IP) for the users' access.
This way the users always access IP:8008/8080/8081, while the ConfigMap does the routing to the actual pods.
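For example, something along these lines (a sketch; the trains-* service names are assumptions based on a typical trains chart, and the nginx Deployment sitting behind the LoadBalancer Service is omitted):

    # Hypothetical sketch: a ConfigMap holding the routing rules for an
    # nginx proxy behind a LoadBalancer Service (the fixed IP).
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: trains-proxy-conf
      namespace: trains
    data:
      default.conf: |
        server { listen 8008; location / { proxy_pass http://trains-apiserver:8008; } }
        server { listen 8080; location / { proxy_pass http://trains-webserver:8080; } }
        server { listen 8081; location / { proxy_pass http://trains-fileserver:8081; } }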

What do you think?

  
  
Posted 3 years ago

ColossalAnt7 I would do the following:
  1. Configure trains-server user/pass by mounting the API server configuration file, as described in the trains-server documentation (an intermediate, temporary step).
  2. Start by providing the ML guys with VPN access that lets them reach the trains-server api/web/file pods directly (caveat: the IP/sub-domain question needs to be solved).
  3. Configure a ConfigMap to do the routing/ingress (this solves the IP/sub-domain issue), and let the VPN access the single entrypoint pointed to by the ConfigMap.
  4. If needed, open the single ingress IP to the outside world and remove the need for a VPN.

Make sense?
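For step 3, a rough sketch (hosts and service names are made up):

    # Hypothetical sketch: one Ingress routing sub-domains to the three
    # services behind a single entrypoint.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: trains
      namespace: trains
    spec:
      rules:
        - host: api.trains.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend: { service: { name: trains-apiserver, port: { number: 8008 } } }
        - host: app.trains.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend: { service: { name: trains-webserver, port: { number: 8080 } } }
        - host: files.trains.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend: { service: { name: trains-fileserver, port: { number: 8081 } } }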

  
  
Posted 3 years ago