Oh, it didn't generate the conf file properly. I will try again.
@<1523701205467926528:profile|AgitatedDove14> @<1529271085315395584:profile|AmusedCat74> Hi guys 🙂
- I think that by default it uses the host network so it can take care of that, are you saying you added k8s integration? -> Yes, I modified the clearml-agent Helm chart.
- "SSH allows access with password" it is a very long random password, not sure I see a risk here, wdyt? -> Currently, when enqueueing a task, clearml-session generates a long random password for SSH and VS Code and...
I understand the reason clearml-session supports only a CLI is because of SSH, right? I thought it would be easy to develop an SDK; instead, I can use your recommendation.
I hope clearml-session becomes as well developed as clearml-agent, because it is so useful! 🙂
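There is no official clearml-session SDK yet, but the CLI can be driven from Python in the meantime. A minimal sketch, assuming clearml-session is on PATH and your version supports the --password flag (the queue name is taken from later in this thread; the password value is a placeholder):
```python
# Minimal sketch: drive the clearml-session CLI from Python until an official
# SDK exists. --password (available in recent clearml-session releases)
# replaces the randomly generated SSH/VS Code password with one you control.
import subprocess

proc = subprocess.Popen(
    ["clearml-session", "--queue", "shelley", "--password", "<your-secret>"]
)
proc.wait()  # the CLI is interactive and keeps the tunnel open while running
```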
My issue: None
I want to get the task ID and properties right after submitting a clearml-session task.
Please also refer to None :)
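One way to get the task ID immediately, pending proper SDK support, is to clone an existing clearml-session task and enqueue the clone through the clearml SDK. A minimal sketch, where the template task ID and the property name are placeholders:
```python
# Minimal sketch: clone a previous clearml-session task and enqueue it via the
# clearml SDK; the returned Task object exposes its ID and properties at once.
from clearml import Task

template = Task.get_task(task_id="<template-session-task-id>")  # placeholder
session = Task.clone(source_task=template, name="my interactive session")
Task.enqueue(session, queue_name="shelley")  # queue name from this thread

print(session.id)                        # task ID, available right away
session.set_user_properties(owner="me")  # example custom property (placeholder)
```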
Can I hide some of them without fixing and rebuilding the Docker image?
Alright, thanks 🙂 I hope that too.
@<1523701087100473344:profile|SuccessfulKoala55> Yes. It only occurs when running on the cloud. It's fine when running on-premises.
It had been working well until I removed the virtualenv and recreated it; then I reinstalled only clearml and clearml-session.
I tried the suggestion you mentioned, but it's the same. And it doesn't seem to be an AMI issue; the same problem occurs even in an on-premise environment.
@<1523701070390366208:profile|CostlyOstrich36> Hello. Oh, sorry for the lack of explanation. When I execute the command "clearml-session ~", the Jupyter URL format is "None:{local_jupyter_port}/?token={jupyter_token}" and the VS Code URL format is just "None:{local_vscode_port}", like the pic I attached here. I wonder why the VS Code URL doesn't have a token.
@<1523701205467926528:profile|AgitatedDove14> Good! I will try it
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley2 --foreground
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.2) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported version!
warnings.warn(
Using environment access key CLEARML_API_ACCESS_KEY=""
Using environment secret key CLEARML_API_SECRET_KEY=********
Current configuration (clearml_agent v1.5.2, location: None):
agent.worker_id ...
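For reference, the RequestsDependencyWarning at the top of this log appears because urllib3 2.x is newer than what the requests release in this image supports; pinning urllib3 below 2.0 inside the container is one way to clear it. A minimal sketch, assuming it runs in the same Python environment as the agent:
```python
# Minimal sketch: pin urllib3 to a 1.x release so requests stops emitting
# RequestsDependencyWarning about urllib3 2.0.2.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "urllib3<2"])
```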
Wow, I appreciate that 🙂
Hello CostlyOstrich36, unfortunately I also did it to the API server just in case, but it didn't work.
I'm also curious whether it's possible to bind the same GPU to multiple queues.
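For what it's worth, a single agent can serve several queues with the same GPU, e.g. clearml-agent daemon --queue high_prio low_prio --gpus 0 pulls from both queues in priority order on GPU 0 (queue names here are placeholders). A minimal sketch for checking which queues each registered worker is bound to, using the clearml APIClient:
```python
# Minimal sketch: list registered workers and the queues each one listens to.
# Each queue entry returned by the server is expected to expose a .name field.
from clearml.backend_api.session.client import APIClient

client = APIClient()
for worker in client.workers.get_all():
    queue_names = [q.name for q in (worker.queues or [])]
    print(worker.id, "->", queue_names)
```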
Because clearml-agent is not installed in my GKE cluster.
Oh, it's not an issue with EKS. We had the same issue on an on-premise cluster too (where clearml-agent is installed). Could it be because clearml-agent is installed?
It also shows on the project detail page.
I tried using K8S_GLUE_POD_AGENT_INSTALL_ARGS=1.5.3rc2 instead of CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2, but it's the same: it doesn't read GPU usage. 🥲
Nope, just running "clearml-agent daemon --queue shelley".
Here is the log from executing with --foreground. But is there any difference?