Answered
Hi, I am running ClearML open source version on EKS Kubernetes and trying to set the web login configurations as described here: https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_config/#web-login-authentication

Can someone please point me to where and how this configuration should be made on the cluster? Is it in the values.yaml of the installation?

Thanks!

  
  
Posted one year ago

Answers 26


Hi CostlyOstrich36, I've installed Ambassador as the ingress and pointed the domain URLs to the LoadBalancer's host.

In the values.yaml I have the following and can reach the web UI through the http://app.clearml.xxxx.com URL:
```
ingress:
  name: clearml-server-ingress
  annotations: {}
  app:
    enabled: false
    hostName: "app.clearml.xxxx.com"
    tlsSecretName: ""
    annotations: {kubernetes.io/ingress.class: ambassador}
  api:
    enabled: false
    hostName: "api.clearml.xxxx.com"
    tlsSecretName: ""
    annotations: {kubernetes.io/ingress.class: ambassador}
  files:
    enabled: false
    hostName: "file.clearml.xxxx.com"
    tlsSecretName: ""
    annotations: {kubernetes.io/ingress.class: ambassador}
```

  
  
Posted one year ago

Hi BoredBluewhale23 ,
You can simply use apiserver.additionalConfigs in your values.yaml to specify the required configuration options as described in the link you sent

  
  
Posted one year ago

Hi BoredBluewhale23 ,

How did you configure the apiserver when you raised the EKS K8S cluster?

  
  
Posted one year ago

```
apiserver:
  additionalConfigs:
    services.conf: |
      auth {
        # Fixed users login credentials
        # No other user will be able to login
        fixed_users {
          enabled: true
          pass_hashed: false
          users: [
            {
              username: "jane"
              password: "12345678"
              name: "Jane Doe"
            },
            {
              username: "john"
              password: "12345678"
              name: "John Doe"
            },
          ]
        }
      }
```

  
  
Posted one year ago

JuicyFox94 perhaps it would be beneficial to put this example in the repo's README (not the actual auth... content, but the overall structure) 🙂

  
  
Posted one year ago

(internally it will generate a file called services.conf in the /opt/clearml/config folder of the apiserver pod, with the content added) /cc SuccessfulKoala55
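
If you want to double-check that the file landed in the pod, something like this should show it (the deployment name is taken from the pod listing later in this thread; adjust it to your release name):

```
kubectl -n clearml exec deploy/clearml-server-apiserver -- cat /opt/clearml/config/services.conf
```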

  
  
Posted one year ago

It's in values.yaml but yes, I need to improve this part, I agree

  
  
Posted one year ago

BoredBluewhale23 I can reproduce the issue, working on it

  
  
Posted one year ago

this should be the form that works on Helm

  
  
Posted one year ago

Will cook something asap

  
  
Posted one year ago

```
apiserver:
  additionalConfigs:
    services.conf: |
```
should be
```
apiserver:
  additionalConfigs:
    apiserver.conf: |
```
In this way the pod will mount a file called apiserver.conf instead of services.conf, which is not the right filename for auth.
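
Putting the two together, the full corrected block from the earlier example would look like this (a sketch; same placeholder usernames and passwords as above):

```
apiserver:
  additionalConfigs:
    apiserver.conf: |
      auth {
        # Fixed users login credentials
        # No other user will be able to login
        fixed_users {
          enabled: true
          pass_hashed: false
          users: [
            { username: "jane", password: "12345678", name: "Jane Doe" },
            { username: "john", password: "12345678", name: "John Doe" },
          ]
        }
      }
```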

  
  
Posted one year ago

can you pls share how you installed Ambassador and what custom configs (if any) were applied?

  
  
Posted one year ago

with that said, I think the problem here is the Ambassador svc, still trying some tricks
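
For reference, a quick way to inspect it (assuming the default service name from the Ambassador chart; adjust if yours differs):

```
kubectl -n ambassador get svc
kubectl -n ambassador describe svc ambassador
```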

  
  
Posted one year ago

SuccessfulKoala55 Thanks for the help!

Do you add it to the config like below?

And why is it done under the apiserver config and not the webserver config?

```
additionalConfigs: |
  auth {
    # Fixed users login credentials
    # No other user will be able to login
    fixed_users {
      enabled: true
      pass_hashed: false
      users: [
        {
          username: "jane"
          password: "12345678"
          name: "Jane Doe"
        },
        {
          username: "john"
          password: "12345678"
          name: "John Doe"
        },
      ]
    }
  }
```
I do get the following error when using the helm upgrade command; do you know what the problem is?

```
$ helm upgrade clearml-server allegroai/clearml -n clearml --create-namespace --values values.yaml

coalesce.go:199: warning: cannot overwrite table with non table for tolerations (map[])
coalesce.go:199: warning: cannot overwrite table with non table for additionalConfigs (map[])
coalesce.go:199: warning: cannot overwrite table with non table for tolerations (map[])
coalesce.go:199: warning: cannot overwrite table with non table for additionalConfigs (map[])
Error: UPGRADE FAILED: template: clearml/templates/configmap-apiserver.yaml:9:33: executing "clearml/templates/configmap-apiserver.yaml" at <.Values.apiserver.additionalConfigs>: range can't iterate over auth {
  # Fixed users login credentials
  # No other user will be able to login
  fixed_users {
    enabled: true
    pass_hashed: false
    users: [
      {
        username: "jane"
        password: "12345678"
        name: "Jane Doe"
      },
      {
        username: "john"
        password: "12345678"
        name: "John Doe"
      },
    ]
  }
}
```
Thanks in advance!

  
  
Posted one year ago

It's related to the apiserver since you're actually defining the user details, not the way they will sign in (webserver is simply what serves the WebApp).

  
  
Posted one year ago

As to the error, JuicyFox94 do you have an idea?

  
  
Posted one year ago

let me check

  
  
Posted one year ago

Hi JuicyFox94, I get the following output without adding the users to the additionalConfigs:

```
kubectl get po -A -n clearml
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
ambassador    ambassador-5cfbb8d4c8-8792m                             1/1     Running   1          42h
ambassador    ambassador-5cfbb8d4c8-fvz6s                             1/1     Running   0          42h
ambassador    ambassador-5cfbb8d4c8-hgbpb                             1/1     Running   0          42h
ambassador    ambassador-agent-845dfc94c-zm79v                        1/1     Running   0          42h
ambassador    ambassador-redis-64b7c668b9-bj2rg                       1/1     Running   0          42h
clearml       clearml-elastic-master-0                                1/1     Running   0          16h
clearml       clearml-server-agent-group-cpu-agent-5df4476cfc-wjblx   1/1     Running   40         16h
clearml       clearml-server-apiserver-5bffbb5ddb-t9c4k               1/1     Running   39         16h
clearml       clearml-server-fileserver-67bf58c47d-rg9hv              1/1     Running   0          16h
clearml       clearml-server-mongodb-86648c4756-ddlqm                 1/1     Running   0          16h
clearml       clearml-server-redis-master-0                           1/1     Running   0          16h
clearml       clearml-server-webserver-864b4b6868-vrddb               1/1     Running   9          16h
kube-system   aws-node-p4cqr                                          1/1     Running   0          7d12h
kube-system   aws-node-tk6tv                                          1/1     Running   0          7d12h
kube-system   coredns-745979c988-2rj7l                                1/1     Running   0          7d15h
kube-system   coredns-745979c988-6cd5l                                1/1     Running   0          7d15h
kube-system   kube-proxy-d9m94                                        1/1     Running   0          7d12h
kube-system   kube-proxy-mm6fw                                        1/1     Running   0          7d12h
```
What is strange is the number of retries in the apiserver pod: it takes plenty of retries and a very long time to become ready. Do you know what might be the cause for all of the retries, or is this normal? Further, I get the output below when adding the users to the additionalConfigs:

```
kubectl get po -A -n clearml
NAMESPACE     NAME                                                    READY   STATUS             RESTARTS   AGE
ambassador    ambassador-5cfbb8d4c8-8792m                             1/1     Running            1          42h
ambassador    ambassador-5cfbb8d4c8-fvz6s                             1/1     Running            0          42h
ambassador    ambassador-5cfbb8d4c8-hgbpb                             1/1     Running            0          42h
ambassador    ambassador-agent-845dfc94c-zm79v                        1/1     Running            0          42h
ambassador    ambassador-redis-64b7c668b9-bj2rg                       1/1     Running            0          42h
clearml       clearml-elastic-master-0                                1/1     Running            0          16h
clearml       clearml-server-agent-group-cpu-agent-5df4476cfc-wjblx   1/1     Running            40         16h
clearml       clearml-server-apiserver-5bffbb5ddb-t9c4k               1/1     Running            39         16h
clearml       clearml-server-apiserver-79f8d47585-9cbhx               0/1     CrashLoopBackOff   4          3m56s
clearml       clearml-server-fileserver-67bf58c47d-rg9hv              1/1     Running            0          16h
clearml       clearml-server-mongodb-86648c4756-ddlqm                 1/1     Running            0          16h
clearml       clearml-server-redis-master-0                           1/1     Running            0          16h
clearml       clearml-server-webserver-864b4b6868-vrddb               1/1     Running            9          16h
kube-system   aws-node-p4cqr                                          1/1     Running            0          7d12h
kube-system   aws-node-tk6tv                                          1/1     Running            0          7d12h
kube-system   coredns-745979c988-2rj7l                                1/1     Running            0          7d16h
kube-system   coredns-745979c988-6cd5l                                1/1     Running            0          7d16h
kube-system   kube-proxy-d9m94                                        1/1     Running            0          7d12h
kube-system   kube-proxy-mm6fw                                        1/1     Running            0          7d12h
```
And the following error is received inside the apiserver pod:

```
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/clearml/apiserver/server.py", line 6, in <module>
from apiserver.server_init.app_sequence import AppSequence
File "/opt/clearml/apiserver/server_init/app_sequence.py", line 10, in <module>
from apiserver.bll.statistics.stats_reporter import StatisticsReporter
File "/opt/clearml/apiserver/bll/statistics/stats_reporter.py", line 30, in <module>
worker_bll = WorkerBLL()
File "/opt/clearml/apiserver/bll/workers/init.py", line 38, in init
self.redis = redis or redman.connection("workers")
File "/opt/clearml/apiserver/redis_manager.py", line 80, in connection
obj.get("health")
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 1606, in get
return self.execute_command('GET', name)
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 898, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 1192, in get_connection
connection.connect()
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 563, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to clearml-server-redis-master:6379. Name or service not known.
Loading config from /opt/clearml/apiserver/config/default
Loading config from file /opt/clearml/apiserver/config/default/apiserver.conf
Loading config from file /opt/clearml/apiserver/config/default/hosts.conf
Loading config from file /opt/clearml/apiserver/config/default/logging.conf
Loading config from file /opt/clearml/apiserver/config/default/secure.conf
Loading config from file /opt/clearml/apiserver/config/default/services/_mongo.conf
Loading config from file /opt/clearml/apiserver/config/default/services/auth.conf
Loading config from file /opt/clearml/apiserver/config/default/services/events.conf
Loading config from file /opt/clearml/apiserver/config/default/services/organization.conf
Loading config from file /opt/clearml/apiserver/config/default/services/projects.conf
Loading config from file /opt/clearml/apiserver/config/default/services/tasks.conf
Loading config from /opt/clearml/config
Loading config from file /opt/clearml/config/services.conf
Loading config from file /opt/clearml/config/..2022_03_25_07_55_37.521850254/services.conf
```
Thank you for the help, much appreciated!

  
  
Posted one year ago

Hi SuccessfulKoala55 and JuicyFox94, thanks for all the help, highly appreciated. I have since changed the values.yaml file with the above configuration, and the upgrade with helm upgrade fails. Here are the logs of the apiserver pod:

```
socket.SOCK_STREAM):
File "/usr/lib64/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/clearml/apiserver/server.py", line 6, in <module>
from apiserver.server_init.app_sequence import AppSequence
File "/opt/clearml/apiserver/server_init/app_sequence.py", line 18, in <module>
from apiserver.mongo.initialize import (
File "/opt/clearml/apiserver/mongo/initialize/init.py", line 9, in <module>
from .pre_populate import PrePopulate
File "/opt/clearml/apiserver/mongo/initialize/pre_populate.py", line 33, in <module>
from apiserver.bll.event import EventBLL
File "/opt/clearml/apiserver/bll/event/init.py", line 1, in <module>
from .event_bll import EventBLL
File "/opt/clearml/apiserver/bll/event/event_bll.py", line 34, in <module>
from apiserver.bll.task import TaskBLL
File "/opt/clearml/apiserver/bll/task/init.py", line 1, in <module>
from .task_bll import TaskBLL
File "/opt/clearml/apiserver/bll/task/task_bll.py", line 48, in <module>
org_bll = OrgBLL()
File "/opt/clearml/apiserver/bll/organization/init.py", line 21, in init
self.redis = redis or redman.connection("apiserver")
File "/opt/clearml/apiserver/redis_manager.py", line 80, in connection
obj.get("health")
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 1606, in get
return self.execute_command('GET', name)
File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 898, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 1192, in get_connection
connection.connect()
File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 563, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to clearml-server-redis-master:6379. Name or service not known.
```
Do you know what might be the issue and if I can solve this from my side?

  
  
Posted one year ago

it looks to me like the Redis pod is not working as expected, it's just a guess
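
A couple of quick checks for that guess (the service name is taken from the error message above):

```
# is the Redis service present in the clearml namespace?
kubectl -n clearml get svc clearml-server-redis-master

# can the name be resolved from inside the cluster?
kubectl -n clearml run dns-test --rm -it --image=busybox --restart=Never -- nslookup clearml-server-redis-master
```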

  
  
Posted one year ago

first I noticed a mistake I made when suggesting the config, this:

  
  
Posted one year ago

can you post the output of `kubectl get po -A -n clearml` pls?

  
  
Posted one year ago

it will be easier for me to reproduce

  
  
Posted one year ago

ColorfulBeetle67

  
  
Posted one year ago

this sounds weird to me

  
  
Posted one year ago

Will try to reproduce in the next couple of hours, will give you feedback here asap

  
  
Posted one year ago