Hi @<1523701205467926528:profile|AgitatedDove14>
The server is already self-hosted. I realized I can't create a report using the ClearML SDK, so I think I need to find another way.
Then, is there any way to get embed code from scalars?
It seems there is no way to add environments, so I customized the charts and am using them on my own.
The ClearML server I installed is a self-hosted server, and developers log in using a fixed ID and password for authentication. That's it!
Furthermore, to access SSH/VSCode/JupyterLab directly without SSH tunneling, I modified the clearml-session script. Once I upload this script to the DevOps project in draft status, developers clone it into their own project. Then they enqueue it and wait, and the command and URL to access SSH/VSCode/JupyterLab are displayed.
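The clone-and-enqueue step described above can be done from the UI, but it can also be sketched with the ClearML SDK. This is only a hedged sketch: `Task.get_task`, `Task.clone`, and `Task.enqueue` are real SDK calls, while the project, template, and queue names here are assumptions for illustration, not my actual setup.

```python
# Hedged sketch of the clone-and-enqueue flow, using the ClearML SDK.
# The project/template/queue names below are placeholders.
def launch_session(project="DevOps", template_name="clearml-session", queue="shelley"):
    from clearml import Task  # requires `pip install clearml` and a clearml.conf

    # Find the draft template task uploaded to the shared project.
    template = Task.get_task(project_name=project, task_name=template_name)
    # Clone it so the developer gets their own copy, then enqueue it.
    session = Task.clone(source_task=template, name="my-session")
    Task.enqueue(session, queue_name=queue)
    return session.id
```

Developers could then watch the cloned task's console log for the SSH/VSCode/JupyterLab command and URL, as in the manual flow.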
I understand the reason clearml-session supports only the CLI is because of SSH, right? I thought it would be easy to develop an SDK for it. Instead, I can use your recommendation.
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley2 --foreground
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.2) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported version!
warnings.warn(
Using environment access key CLEARML_API_ACCESS_KEY=""
Using environment secret key CLEARML_API_SECRET_KEY=********
Current configuration (clearml_agent v1.5.2, location: None):
agent.worker_id ...
I am having the same issue: None
Here are the agent and task log files~!
I tried the suggestion you mentioned, but it's the same. And it doesn't seem to be an AMI issue; the same problem occurs even in an on-premise environment.
Hi again 🙂 @<1523701087100473344:profile|SuccessfulKoala55> sure!
Oh, it's not an issue with EKS. We had the same issue on an on-premise cluster too (where clearml-agent is installed). Could it be caused by the installed clearml-agent?
@<1523701087100473344:profile|SuccessfulKoala55> I realized that this is not an issue with the cloud or on-premise environment. It's working well on GKE but not on EKS. Here is the log when I run the "clearml-agent daemon --queue ~" command on EKS:
root@shelley-gpu-pod:/# clearml-agent daemon --queue shelley3
/usr/local/lib/python3.8/dist-packages/requests/__init__.py:109: RequestsDependencyWarning: urllib3 (2.0.1) or chardet (None)/charset_normalizer (3.1.0) doesn't match a supported ve...
I run clearml-agent manually in a GPU-available pod using the command "clearml-agent daemon --queue shelley",
and this doesn't show GPU usage, same as when I run the task remotely.
and here is the log
agent.worker_id =
agent.worker_name = shelley-gpu-pod
agent.force_git_ssh_protocol = false
agent.python_binary =
agent.package_manager.type = pip
agent.package_manager.pip_version.0 = <20.2 ; python_version < "3.10"
agent.package_manager.pip_version.1 = <22.3 ; python_ver...
I want to get the task ID and properties right after submitting a clearml-session task.
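Since clearml-session only exposes a CLI, one workaround is to query the server afterwards for the task it created. A hedged sketch, not an official clearml-session API: the helper below is plain Python, and the commented lines show the ClearML SDK calls I would try; the project and task names are assumptions for illustration.

```python
# Hedged sketch: pick the newest task from query results by creation time.
def newest_task_id(tasks):
    """tasks: list of (task_id, created) pairs; returns the newest id, or None."""
    return max(tasks, key=lambda t: t[1])[0] if tasks else None

# With a reachable server (requires `pip install clearml` and a clearml.conf):
# from clearml import Task
# candidates = Task.get_tasks(project_name="DevOps", task_name="clearml-session")
# session_id = newest_task_id([(t.id, t.data.created) for t in candidates])
# props = Task.get_task(task_id=session_id).get_user_properties()
```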
This is the clearml-agent helm chart values.yaml file I used for the install.
I hope clearml-session will be developed further, like clearml-agent, because it is so useful! 🙂
My issue: None
Wow, I appreciate that 🙂
It also shows on the project detail page.
Are there other people experiencing the same issue as me?
@<1523701205467926528:profile|AgitatedDove14> Good! I will try it
Here is the log when executing with --foreground. But is there any difference?
For more info, I set CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2 in agentk8sglue.basePodTemplate.env
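For reference, setting that variable in the clearml-agent helm chart would look roughly like this. A hedged sketch of the values.yaml fragment: the key path follows the agentk8sglue.basePodTemplate.env path mentioned above, and the version string is the one I used.

```yaml
# values.yaml fragment (clearml-agent helm chart) -- sketch, adjust to your chart version
agentk8sglue:
  basePodTemplate:
    env:
      - name: CLEARML_AGENT_UPDATE_VERSION
        value: "1.5.3rc2"
```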
Nope, just running "clearml-agent daemon --queue shelley"
@<1523701087100473344:profile|SuccessfulKoala55> What is the task log? Do you mean the log of the pod provisioned by clearml-agent? Do you want me to show it?
I tried using K8S_GLUE_POD_AGENT_INSTALL_ARGS=1.5.3rc2 instead of CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2, but it's the same. It doesn't read GPU usage.. 🥲
I set CLEARML_AGENT_UPDATE_VERSION=1.5.3rc2 in agentk8sglue.basePodTemplate.env, as I mentioned.
Oh, it didn't generate the conf file properly. I will try again.