
i am having the same issue: None
@<1523701087100473344:profile|SuccessfulKoala55> I realized that this is not an issue with the cloud or on-premise environment. it's working well on GKE but not working on EKS. here is the log when i run the "clearml-agent daemon --queue ~" command on EKS
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley3
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.1) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported ve...
because clearml-agent is not installed in my gke cluster
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley2 --foreground
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.2) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported version!
warnings.warn(
Using environment access key CLEARML_API_ACCESS_KEY=""
Using environment secret key CLEARML_API_SECRET_KEY=********
Current configuration (clearml_agent v1.5.2, location: None):
agent.worker_id ...
It also shows on project detail page.
@<1523701087100473344:profile|SuccessfulKoala55> yes. It only occurs when running on the cloud. It's fine when running on-premises.
it is working on the on-premise machine (i can see gpu usage on the WORKERS & QUEUES dashboard), but it is not working on the cloud pod
nope. just running "clearml-agent daemon --queue shelley"
for more info, I set `CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2` in agentk8sglue.basePodTemplate.env
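as a rough sketch (assuming the chart takes standard Kubernetes name/value env entries under that key), the override in the agent chart's values.yaml looks roughly like this:
agentk8sglue:
  basePodTemplate:
    env:
      # pin the agent version used inside the pods spawned by the k8s glue
      - name: CLEARML_AGENT_UPDATE_VERSION
        value: "1.5.3rc2"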
alright. thanks 🙂 i hope that too.
@<1523701205467926528:profile|AgitatedDove14> @<1529271085315395584:profile|AmusedCat74> Hi guys 🙂
- "I think that by default it uses the host network so it can take care of that, are you saying you added k8s integration?" -> Yes, i modified the clearml-agent helm chart.
- "SSH allows access with password - it is a very long random password, not sure I see a risk here, wdyt?" -> Currently, when enqueueing a task, clearml-session generates a long random password for SSH and VS Code and...
can i hide some of them without fixing and rebuilding the docker image?
Hope clearml-session will be developed as actively as clearml-agent, because it is so useful! 🙂
My issue: None
i understand the reason clearml-session only supports a cli is because of SSH, right? i thought it would be easy to develop an sdk. instead, i can use your recommendation
Wow i appreciate that 🙂
the pod log is too long. would it be ok if i upload the pod log file here??
This is the clearml-agent helm chart values.yaml file i used to install
i found the solution!! i added the configuration below to helm's values.yaml.
additionalConfigs:
  # services.conf: |
  #   tasks {
  #     non_responsive_tasks_watchdog {
  #       # In-progress tasks that haven't been updated for at least 'value' seconds will be stopped by the watchdog
  #       threshold_sec: 21000
  #       # Watchdog will sleep for this number of seconds after each cycle
  #       watch_interval_sec: 900
  #     }
  #   }
  apiserver.co...
Oh, it didn't generate the conf file properly. I will try again
I set `CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2` in agentk8sglue.basePodTemplate.env as i mentioned
I want to get the task id and properties right after submitting a clearml-session task
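a rough sketch of one possible workaround with the Python SDK, assuming clearml-session registers its task as "interactive_session" under a "DevOps" project (both names are assumptions and may differ per setup):
from clearml import Task

# assumption: the project/task name below match how clearml-session registers its session task
task_ids = Task.query_tasks(
    project_name="DevOps",
    task_name="interactive_session",
    task_filter={"order_by": ["-last_update"]},  # most recently updated session first
)

if task_ids:
    task = Task.get_task(task_id=task_ids[0])
    print("task id:", task.id)
    print("status:", task.get_status())
    print("user properties:", task.get_user_properties())
    print("parameters:", task.get_parameters())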
It seems that there is no way to add environment variables, so i customized the chart and am using it on my own.
hello CostlyOstrich36 unfortunately, i also applied it to the api server just in case, but it didn't work
@<1523701087100473344:profile|SuccessfulKoala55> what is the task log? you mean the log of the pod provisioned by clearml-agent? do you want me to show it?
Are there other people experiencing the same issue as me?