Wow, I appreciate that!
Then, is there any way to get embed code from scalars?
For more info, I set `CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2` in `agentk8sglue.basePodTemplate.env`
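As a sketch of what that override might look like in the clearml-agent chart's values.yaml (the key path and version string are the ones mentioned above; the surrounding structure is assumed from the chart's conventions):

```yaml
agentk8sglue:
  basePodTemplate:
    env:
      # Pin the agent version installed at pod startup
      - name: CLEARML_AGENT_UPDATE_VERSION
        value: "1.5.3rc2"
```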
@<1523701087100473344:profile|SuccessfulKoala55> Okay, but how can I specify the agent's version in the Helm chart?
I am having the same issue: None
Alright, thanks! I hope that too.
Oh, it didn't generate the conf file properly. I will try again.
pls also refer to None :)
@<1523701087100473344:profile|SuccessfulKoala55> I realized that this is not an issue with the cloud or on-premise environment. itâs working well on gke but not working on eks. here is the log when i run âclearml-agent daemon --queue ~â command on eks
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley3
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.1) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported ve...
I understand the reason clearml-session supports only a CLI is because of SSH, right? I thought it would be easy to develop an SDK. Instead, I can use your recommendation.
Hi @<1523701205467926528:profile|AgitatedDove14>
The server is already self-hosted. I realized I can't create a report using the ClearML SDK, so I think I need to find other ways.
@<1523701205467926528:profile|AgitatedDove14> Good! I will try it
This is the clearml-agent Helm chart values.yaml file I used to install,
because clearml-agent is not installed in my GKE cluster.
It seems that there is no way to add environment variables, so I customized the charts and am using them on my own.
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley2 --foreground
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.2) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported version!
  warnings.warn(
Using environment access key CLEARML_API_ACCESS_KEY=""
Using environment secret key CLEARML_API_SECRET_KEY=********
Current configuration (clearml_agent v1.5.2, location: None):
agent.worker_id ...
Hello CostlyOstrich36, unfortunately I also applied it to the API server just in case, but it didn't work.
The ClearML server I installed is self-hosted, and developers log in using a fixed ID and password for authentication. That's it!
Furthermore, to access SSH/VSCode/JupyterLab directly without SSH tunneling, I modified the clearml-session script. Once I upload this script to the DevOps project in draft status, developers clone it to their own project, then enqueue it and wait for the command and URL to access SSH/VSCode/JupyterLab to be displayed.
The pod log is too long. Would it be OK if I upload the pod log file here?
I tried the suggestion you mentioned, but it's the same. And it doesn't seem to be an AMI issue; the same problem occurs even in an on-premise environment.