additionalConfigs is a section under apiserver
but again, these are infrastructure decisions that need to be made before talking about software
perfect, now the whole process is clear to me
if you already have data over there you may import it
are clearml and the agent in the same namespace?
from / to /debug.ping
I guess the message may be mistaken. Please share the output of `kubectl get svc` for the namespace where you installed clearml
it should be possible to enable IPv6 (even without actually using it) at the network layer, to check if this is really the issue
Ok, let's deep dive into it. What Helm chart version was used for this deployment?
In k8s there are no services, just clearml-agent (k8sglue). You can set any definition you want for the spawned pods in this section: https://github.com/allegroai/clearml-helm-charts/blob/503ab437adc5d4f9b7b1037e2af143d47da24048/charts/clearml-agent/values.yaml#L132
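If it helps, here is a minimal sketch of what that section can look like (key names follow a recent clearml-agent chart and may differ in your chart version, so check the values.yaml linked above; the node label and GPU limit are just examples):

```
# values.yaml override for the clearml-agent chart (sketch only;
# key names may differ between chart versions)
agentk8sglue:
  basePodTemplate:
    # pin pods spawned for tasks to specific nodes (example label)
    nodeSelector:
      node-role/worker: "true"
    # resources applied to every spawned task pod
    resources:
      limits:
        nvidia.com/gpu: 1
```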
I don't think it's possible to set up queues in advance with any ClearML chart env var, but I'm not 100% sure. SuccessfulKoala55 can you please clarify this?
this is the PR: https://github.com/allegroai/clearml-helm-charts/pull/80 (will merge it soon, so agent chart 1.0.1 will be released)
apiserver additionalConfigs
can the k8s cluster access the Ubuntu archive?
internal migrations are done directly by the apiserver on startup
this is clearly an issue with the provisioner not handling the PVC request for any pod that has a PVC. It's not related to the chart but to the provisioner you are using, which probably doesn't support dynamic allocation. What provisioner are you using?
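For reference, here is a minimal sketch of a StorageClass backed by a provisioner that supports dynamic allocation (this assumes the Rancher local-path provisioner is installed; substitute whatever provisioner you actually run):

```
# Sketch: StorageClass using a dynamic provisioner.
# rancher.io/local-path is only an example; use your own provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```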
```
apiserver:
  additionalConfigs:
    services.conf: |
      auth {
        # Fixed users login credentials
        # No other user will be able to login
        fixed_users {
          enabled: true
          pass_hashed: false
          users: [
            {
              username: "jane"
              password: "12345678"
              name: "Jane Doe"
            },
            {
              username: "john"
              password: "12345678"...
```
Hi PleasantGiraffe85, just to get some more info: what version of the chart are you using? Did you enable ingress?
I think we can find a solution pretty quickly after some checks. Can you please open an issue on the new helm chart repo so I can take care of it in the coming days?
Sure, OddShrimp85. Unless you need to specifically bind a pod to a node, nodeSelector is not needed; in fact, the new chart leaves it to k8s to spread the load across the worker nodes. As for PVCs, you simply need to declare the StorageClass at the k8s level so it can take care of creating the PVCs too (see the sketch below). How many workers do you have in your setup?
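As an illustration, if a StorageClass is marked as the cluster default, PVCs created by the charts bind without naming a class explicitly (the annotation below is standard Kubernetes; the class and provisioner names are just examples):

```
# Sketch: mark an existing StorageClass as the cluster default so that
# chart-created PVCs are provisioned automatically.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```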
it can help with debugging
this means there are network issues at some level
you will probably need a metrics-server on your k8s cluster
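For reference, the upstream metrics-server is usually installed with `kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml` (that is the standard upstream manifest; your distribution may ship its own add-on instead).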