Hi, I am running the ClearML open source version on EKS (Kubernetes) and trying to set the web login configurations as described here:
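For context, the users section I am adding via `additionalConfigs` is shaped like this (a sketch only; the usernames and passwords are placeholders, and the `apiserver.conf` keys follow the ClearML fixed-users login docs as I understand them):

```yaml
additionalConfigs:
  apiserver.conf: |
    auth {
      fixed_users {
        enabled: true
        users: [
          {
            username: "jane"      # placeholder
            password: "12345678"  # placeholder
            name: "Jane Doe"
          }
        ]
      }
    }
```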


Hi JuicyFox94, I get the following output without adding the users to the additionalConfigs:

```
kubectl get po -A -n clearml
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
ambassador    ambassador-5cfbb8d4c8-8792m                             1/1     Running   1          42h
ambassador    ambassador-5cfbb8d4c8-fvz6s                             1/1     Running   0          42h
ambassador    ambassador-5cfbb8d4c8-hgbpb                             1/1     Running   0          42h
ambassador    ambassador-agent-845dfc94c-zm79v                        1/1     Running   0          42h
ambassador    ambassador-redis-64b7c668b9-bj2rg                       1/1     Running   0          42h
clearml       clearml-elastic-master-0                                1/1     Running   0          16h
clearml       clearml-server-agent-group-cpu-agent-5df4476cfc-wjblx   1/1     Running   40         16h
clearml       clearml-server-apiserver-5bffbb5ddb-t9c4k               1/1     Running   39         16h
clearml       clearml-server-fileserver-67bf58c47d-rg9hv              1/1     Running   0          16h
clearml       clearml-server-mongodb-86648c4756-ddlqm                 1/1     Running   0          16h
clearml       clearml-server-redis-master-0                           1/1     Running   0          16h
clearml       clearml-server-webserver-864b4b6868-vrddb               1/1     Running   9          16h
kube-system   aws-node-p4cqr                                          1/1     Running   0          7d12h
kube-system   aws-node-tk6tv                                          1/1     Running   0          7d12h
kube-system   coredns-745979c988-2rj7l                                1/1     Running   0          7d15h
kube-system   coredns-745979c988-6cd5l                                1/1     Running   0          7d15h
kube-system   kube-proxy-d9m94                                        1/1     Running   0          7d12h
kube-system   kube-proxy-mm6fw                                        1/1     Running   0          7d12h
```
What is strange is the number of retries in the api server pod: it takes plenty of retries and a very long time to become ready. Do you know what might be causing all of these retries, or is this normal? Furthermore, I get the output below when adding the users to the additionalConfigs:

```
kubectl get po -A -n clearml
NAMESPACE     NAME                                                    READY   STATUS             RESTARTS   AGE
ambassador    ambassador-5cfbb8d4c8-8792m                             1/1     Running            1          42h
ambassador    ambassador-5cfbb8d4c8-fvz6s                             1/1     Running            0          42h
ambassador    ambassador-5cfbb8d4c8-hgbpb                             1/1     Running            0          42h
ambassador    ambassador-agent-845dfc94c-zm79v                        1/1     Running            0          42h
ambassador    ambassador-redis-64b7c668b9-bj2rg                       1/1     Running            0          42h
clearml       clearml-elastic-master-0                                1/1     Running            0          16h
clearml       clearml-server-agent-group-cpu-agent-5df4476cfc-wjblx   1/1     Running            40         16h
clearml       clearml-server-apiserver-5bffbb5ddb-t9c4k               1/1     Running            39         16h
clearml       clearml-server-apiserver-79f8d47585-9cbhx               0/1     CrashLoopBackOff   4          3m56s
clearml       clearml-server-fileserver-67bf58c47d-rg9hv              1/1     Running            0          16h
clearml       clearml-server-mongodb-86648c4756-ddlqm                 1/1     Running            0          16h
clearml       clearml-server-redis-master-0                           1/1     Running            0          16h
clearml       clearml-server-webserver-864b4b6868-vrddb               1/1     Running            9          16h
kube-system   aws-node-p4cqr                                          1/1     Running            0          7d12h
kube-system   aws-node-tk6tv                                          1/1     Running            0          7d12h
kube-system   coredns-745979c988-2rj7l                                1/1     Running            0          7d16h
kube-system   coredns-745979c988-6cd5l                                1/1     Running            0          7d16h
kube-system   kube-proxy-d9m94                                        1/1     Running            0          7d12h
kube-system   kube-proxy-mm6fw                                        1/1     Running            0          7d12h
```
And the following error is received inside the apiserver pod:

```
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/clearml/apiserver/server.py", line 6, in <module>
    from apiserver.server_init.app_sequence import AppSequence
  File "/opt/clearml/apiserver/server_init/app_sequence.py", line 10, in <module>
    from apiserver.bll.statistics.stats_reporter import StatisticsReporter
  File "/opt/clearml/apiserver/bll/statistics/stats_reporter.py", line 30, in <module>
    worker_bll = WorkerBLL()
  File "/opt/clearml/apiserver/bll/workers/__init__.py", line 38, in __init__
    self.redis = redis or redman.connection("workers")
  File "/opt/clearml/apiserver/redis_manager.py", line 80, in connection
    obj.get("health")
  File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 1606, in get
    return self.execute_command('GET', name)
  File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 898, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 1192, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 563, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to clearml-server-redis-master:6379. Name or service not known.
Loading config from /opt/clearml/apiserver/config/default
Loading config from file /opt/clearml/apiserver/config/default/apiserver.conf
Loading config from file /opt/clearml/apiserver/config/default/hosts.conf
Loading config from file /opt/clearml/apiserver/config/default/logging.conf
Loading config from file /opt/clearml/apiserver/config/default/secure.conf
Loading config from file /opt/clearml/apiserver/config/default/services/_mongo.conf
Loading config from file /opt/clearml/apiserver/config/default/services/auth.conf
Loading config from file /opt/clearml/apiserver/config/default/services/events.conf
Loading config from file /opt/clearml/apiserver/config/default/services/organization.conf
Loading config from file /opt/clearml/apiserver/config/default/services/projects.conf
Loading config from file /opt/clearml/apiserver/config/default/services/tasks.conf
Loading config from /opt/clearml/config
Loading config from file /opt/clearml/config/services.conf
Loading config from file /opt/clearml/config/..2022_03_25_07_55_37.521850254/services.conf
```

Thank you for the help, much appreciated!
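For what it's worth, the traceback bottoms out in `socket.getaddrinfo`, i.e. the pod cannot resolve `clearml-server-redis-master` at all (a DNS failure, not a refused connection). A minimal way to reproduce just the DNS part, nothing ClearML-specific:

```python
import socket

def can_resolve(host: str, port: int = 6379) -> bool:
    """Return True if DNS can resolve `host`, False on a resolution error.

    Mirrors the getaddrinfo call that fails in the apiserver traceback.
    """
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Inside the cluster this should be True; from outside (or from a pod whose
# DNS search path does not cover the service's namespace) it will be False.
print(can_resolve("clearml-server-redis-master"))
```

If this returns False from inside the apiserver pod, the fully qualified name (`clearml-server-redis-master.clearml.svc.cluster.local`, assuming the service lives in the `clearml` namespace) would be worth trying, as that would point to a namespace/search-domain issue rather than a missing service.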

  
  
Posted 2 years ago