Do you mean the Python version that is installed on the clearml agent itself? Or do you mean the Python version available in tasks that will be run from the agent?
@<1710827340621156352:profile|HungryFrog27> have you installed the Nvidia gpu-operator to advertise GPUs to Kubernetes?
@<1734020208089108480:profile|WickedHare16> - please try configuring the cookieDomain
clearml:
  cookieDomain: ""
You should set it to your base domain, for example pixis.internal, without any api or files prefix in front of it
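For example, using the pixis.internal domain mentioned above (just a placeholder, replace it with your own base domain):
clearml:
  cookieDomain: "pixis.internal"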
Hi @<1523701907598610432:profile|ReassuredArcticwolf33> - Are you referring to the clearml helm chart or to the clearml-agent one?
In either case, the respective values.yaml file is self-documented and contains examples. Here is an example for adding additional volumes and volume mounts to the apiserver component of the clearml chart:
apiserver:
  # -- Defines extra Kubernetes volumes to be attached to the pod.
  additionalVolumes:
    - name: ramdisk
      empty...
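For reference, a fuller sketch of what the ramdisk example could look like; the additionalVolumeMounts key name and the mount path here are assumptions, please double-check the exact keys in the chart's values.yaml:
apiserver:
  additionalVolumes:
    - name: ramdisk
      emptyDir:
        medium: Memory
  additionalVolumeMounts:
    - name: ramdisk
      mountPath: /tmp/ramdisk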
I understand, I'd just like to make sure that's the root issue and there's no other bug; if so, you can then think about how to automate it via the API
Can you try with these values? In short, the changes are: not using clearmlConfig, not overriding the image (using the default instead), and not defining resources
agentk8sglue:
  apiServerUrlReference:
  clearmlcheckCertificate: false
  createQueueIfNotExists: true
  fileServerUrlReference:
  queue: default
  webServerUrlReference:
clearml:
  agentk8sglueKey: 8888TMDLWYY7ZQJJ0I7R2X2RSP8XFT
  agentk8sglueSecret: oNODbBkDGhcDscTENQyr-GM0cE8IO7xmpaPdqyfsfaWear...
So CLEARML8AGENT9KEY1234567890ABCD is the actual value you are using?
Hey @<1523701304709353472:profile|OddShrimp85> - You can tweak the following section in the clearml-agent override values:
# -- Global parameters section
global:
  # -- Images registry
  imageRegistry: "docker.io"

# -- Private image registry configuration
imageCredentials:
  # -- Use private authentication mode
  enabled: true # <-- Set this to true
  # -- Registry name
  registry: docker.io
  # -- Registry username
  username: someone
  # -- Registry password
  password: pwd
  # -- ...
So if you now run helm get values clearml-agent -n <NAMESPACE>, where <NAMESPACE> is the value you have in the $NS variable, can you confirm this is the full and only output? Of course the $VARIABLES will have their real values
agentk8sglue:
  # Try newer image version to fix Python 3.6 regex issue
  image:
    repository: allegroai/clearml-agent-k8s-base
    tag: "1.25-1"
    pullPolicy: Always
  apiServerUrlReference: "http://$NODE_IP:30008"
  fileServerUrlReference: "ht...
@<1736194540286513152:profile|DeliciousSeaturtle82> when you copy the folder on the new pod, it crashes almost instantly?
Hi @<1798162812862730240:profile|PreciousCentipede43> 🙂
- Regarding bypassing the IAP I am not sure. Could you elaborate a bit? Do you have some expected solution in mind?
- For exposing the interactive sessions you can use a LoadBalancer config as mentioned (if your cloud provider supports its configuration) or use a NodePort service type (making sure there are no firewall rules in the way and you can reach the defined ports on the Nodes; see the sketch after this message). Exposing the sessions through an Ingress is supported in t...
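As a rough illustration of the NodePort approach, a plain Kubernetes Service of type NodePort could look like the sketch below; the names, selector labels, and ports are placeholders and must be adapted to your actual session Pods:
apiVersion: v1
kind: Service
metadata:
  name: clearml-session-example
spec:
  type: NodePort
  selector:
    app: clearml-session   # placeholder, match your session Pod labels
  ports:
    - port: 10022          # placeholder container port
      targetPort: 10022
      nodePort: 30022      # must be within the cluster's NodePort range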
Hey @<1743079861380976640:profile|HighKitten20> - Try to configure this section in the values override file for the Agent helm chart:
# -- Private image registry configuration
imageCredentials:
  # -- Use private authentication mode
  enabled: false
  # -- If this is set, chart will not generate a secret but will use what is defined here
  existingSecret: ""
  # -- Registry name
  registry: docker.io
  # -- Registry username
  username: someone
  # -- Registry password
  password: pwd...
In your last message, you are referring to pod security context and admission controllers enforcing some policies such as a read-only filesystem. Is that the case in your cluster?
Or was this some output of a GPT-like chat? If yes, please do not use LLMs to generate values for the helm installation, as they usually do not produce a useful or real config
Hey @<1734020156465614848:profile|ClearKitten90> - You can try with the following in your ClearML Agent override helm values. Make sure to replace mygitusername and git-password
agentk8sglue:
  basePodTemplate:
    env:
      # to set up access to a private repo, set up a secret with git credentials
      - name: CLEARML_AGENT_GIT_USER
        value: mygitusername
      - name: CLEARML_AGENT_GIT_PASS
        valueFrom:
          secretKeyRef:
            name: git-password
            ...
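If the git-password secret does not exist yet, it can be created with something like the command below; note that the key name inside the secret (here password) is only an assumption, it must match the key referenced in secretKeyRef:
kubectl create secret generic git-password \
  --namespace <agent-namespace> \
  --from-literal=password='<your-git-token-or-password>'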
So, when the UI shows a debug image, it uses the URL for that image, which was created at runtime by the running SDK (launched by the Agent, in this case), and therefore points to the fileserver URL provided by the Agent.
You will need to pass the external reference:
agentk8sglue:
  fileServerUrlReference: ""
and work around the self-signed cert. You could try mounting your custom certificates into the Agent using volumes and volumeMounts, storing your certificate in a ConfigMap or similar, as in the sketch below.
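A minimal sketch of that idea, assuming your chart version exposes agentk8sglue.additionalVolumes and agentk8sglue.additionalVolumeMounts (please check your values.yaml, the key names may differ) and that the CA certificate lives in a ConfigMap named ca-cert:
agentk8sglue:
  additionalVolumes:
    - name: ca-cert
      configMap:
        name: ca-cert
  additionalVolumeMounts:
    - name: ca-cert
      mountPath: /usr/local/share/ca-certificates/ca.crt
      subPath: ca.crt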
Hi @<1752864322440138752:profile|GiddyDragonfly90> - Can you try with the last value you proposed, but use : to separate user and password in the string, like this:
externalServices:
  elasticsearchConnectionString: '[{"scheme":"http","host":"elastic:toto@elasticsearch-es-http","port":9200}]'
I see, in the example you provided you used a comma , to separate username and password; I suggest trying a colon : instead
It's a bit hard for me to provide support here with the additional layer of Argo.
I assume the server is working fine and you can open the ClearML UI and log in, right? If yes, would it be possible to extract the Agent part only, out of Argo, and install it directly through standard Helm?
Hey @<1726047624538099712:profile|WorriedSwan6> , the basePodTemplate section configures the default base template for all Pods spawned by the Agent.
If you don't want every Task (or Pod) to use the same requests/limits, one thing you could try is to set up multiple queues in the Agent.
Each queue can then have an override of the Pod template.
So, you can try removing the nvidia.com/gpu: "4" limit from the root basePodTemplate and adding a section like this in ...
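A minimal sketch of what such per-queue overrides could look like, assuming a recent clearml-agent chart that supports templateOverrides under agentk8sglue.queues (check your chart's values.yaml, the exact keys may differ):
agentk8sglue:
  createQueueIfNotExists: true
  queues:
    gpu-queue:
      templateOverrides:
        resources:
          limits:
            nvidia.com/gpu: "4"
    cpu-queue:
      templateOverrides:
        resources:
          limits:
            cpu: "2"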
@<1669152726245707776:profile|ManiacalParrot65> could you please send your values file override for the Agent helm chart?
I think Mongo does not like having its db folder replaced like this while the Pod is running.
You can try turning off Mongo for a moment (scale its deployment down to 0 replicas), then creating a one-time Pod (non-Mongo, you can use an ubuntu image for example) that mounts the same volume Mongo was mounting, and using this Pod to copy the db folder into the right place. When it's done, delete this Pod and scale the Mongo deployment back to 1.
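As a rough sketch of that procedure (the namespace, deployment name, PVC name, and paths below are assumptions, check them first with kubectl get deploy,pvc -n <namespace>):
# stop Mongo
kubectl -n clearml scale deployment clearml-mongodb --replicas=0

# one-time helper Pod mounting the same volume
kubectl -n clearml apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mongo-restore-helper
spec:
  containers:
    - name: shell
      image: ubuntu:22.04
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: datadir
          mountPath: /data
  volumes:
    - name: datadir
      persistentVolumeClaim:
        claimName: clearml-mongodb   # placeholder PVC name
EOF

# copy the db folder into place, then clean up
kubectl -n clearml exec -it mongo-restore-helper -- bash -c "cp -a /data/backup-db/. /data/db/"
kubectl -n clearml delete pod mongo-restore-helper
kubectl -n clearml scale deployment clearml-mongodb --replicas=1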
Wonderful - we do not have such a feature planned for now, feel free to contribute 🙂
Hi @<1843461294267568128:profile|KindArcticwolf58> - How did you execute this task?
The k8s_scheduler queue is an internal queue, not intended to be used for enqueuing tasks. I see you have configured the Agent to watch the gpu queue. Please make sure to create a queue with the same name on the control plane from the UI and restart the Agent, then enqueue the Task on that queue.
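If useful, restarting the Agent can be done with something like the following (assuming the Agent Deployment is named clearml-agent; adjust the name and namespace to your release):
kubectl -n <agent-namespace> rollout restart deployment clearml-agent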
Hey @<1734020208089108480:profile|WickedHare16> , could you please share your override values file for the clearml helm chart?
Oh no worries, I understand 😄
Sure, if you could share the whole values and configs you're using to run both the server and agent that would be useful.
Also what about other Pods from the ClearML server, are there any other crash or similar error referring to a read-only filesystem? Are the server and agent installed on the same K8s node?
Hello @<1523708147405950976:profile|AntsyElk37> 🙂
You are right, the spec.runtimeClassName field is not supported in the Agent at the moment, I'll work on your Pull Request ASAP.
Could you elaborate a bit on why you need the Task Pods to specify the runtimeClass in order to use GPUs?
Usually, you'd only need to set the Pod's container resources with, for example, resources.limits.nvidia.com/gpu: 1, and the Nvidia Device Plugin would itself assign the correct device to the container. Will that work?
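In the Agent helm values, that would roughly look like the sketch below (assuming basePodTemplate exposes a resources key as in the default values.yaml):
agentk8sglue:
  basePodTemplate:
    resources:
      limits:
        nvidia.com/gpu: 1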
I assume the key and secret values here are redacted values and not the actual ones, right?
Hi @<1798887585121046528:profile|WobblyFrog79> - Please try setting the environment variable CLEARML_K8S_GLUE_DEBUG=1 on the Agent
agentk8sglue:
  extraEnvs:
    - name: CLEARML_K8S_GLUE_DEBUG
      value: "1"
This will make the Agent Pod print the rendered Task Pod template in the logs, so you can see it 🙂
Hey @<1734020208089108480:profile|WickedHare16> - Not 100% sure this is the issue, but I noticed a wrong configuration in your values.
You configured both these:
elasticsearch:
  enabled: true

externalServices:
  # -- Existing ElasticSearch connectionstring if elasticsearch.enabled is false (example in values.yaml)
  elasticsearchConnectionString: "[{\"host\":\"es_hostname1\",\"port\":9200},{\"host\":\"es_hostname2\",\"port\":9200},{\"host\":\"es_hostname3\",\"port\":9200}]"
Pl...