Here is the log when executing with --foreground. But is there any difference?
I run clearml-agent manually in a GPU-enabled pod using the command clearml-agent daemon --queue shelley
and this doesn't show GPU usage, same as when I run the task remotely
and here is the log
agent.worker_id =
agent.worker_name = shelley-gpu-pod
agent.force_git_ssh_protocol = false
agent.python_binary =
agent.package_manager.type = pip
agent.package_manager.pip_version.0 = <20.2 ; python_version < '3.10'
agent.package_manager.pip_version.1 = <22.3 ; python_ver...
It is working on the on-premise machine (I can see GPU usage on the WORKERS & QUEUES dashboard), but it is not working on the cloud pod.
Nope. Just running "clearml-agent daemon --queue shelley"
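For context, a minimal sketch of how the agent could be started so that GPU metrics get reported; this assumes nvidia-smi/NVML actually works inside the pod (e.g. the NVIDIA device plugin exposes the GPUs), and the --docker/--gpus flags only matter if the agent should run tasks in containers:

# first check the GPUs are visible inside the pod at all
nvidia-smi

# plain (virtualenv) mode; GPU monitoring is picked up automatically when NVML is available
clearml-agent daemon --queue shelley --foreground

# docker mode, explicitly exposing all GPUs to the task containers
clearml-agent daemon --queue shelley --docker --gpus all --foreground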
Can I hide some of them without fixing and rebuilding the docker image?
@<1523701070390366208:profile|CostlyOstrich36> Hello. Oh, sorry for the lack of explanation. When I execute the command "clearml-session ~", the Jupyter URL format is " None :{local_jupyter_port}/?token={jupyter_token}" and the VS Code URL format is just " None :{local_vscode_port}", like the picture I attached here. I wonder why the VS Code URL doesn't have a token.
I am having the same issue: None
@<1523701087100473344:profile|SuccessfulKoala55> Okay... but how can I specify the agent's version in the helm chart?
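Just as a hedged sketch of what I mean (I'm not sure of the exact key names in the clearml-agent chart, so the keys and image name below are assumptions; the chart's default values.yaml would have the real ones), the idea is pinning the agent image tag in values.yaml:

agentk8sglue:
  image:
    repository: allegroai/clearml-agent-k8s-base   # assumed image name
    tag: "<desired-agent-version>"                 # placeholder: pin the version here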
I found the solution!! I added the configuration below to the helm chart's values.yaml.
additionalConfigs:
  # services.conf: |
  #   tasks {
  #     non_responsive_tasks_watchdog {
  #       # In-progress tasks that haven't been updated for at least 'value' seconds will be stopped by the watchdog
  #       threshold_sec: 21000
  #       # Watchdog will sleep for this number of seconds after each cycle
  #       watch_interval_sec: 900
  #     }
  #   }
  apiserver.co...
Are there other people experiencing the same issue as me?
This is the clearml-agent helm chart values.yaml file I used to install it.
Alright, thanks! I hope so too.
@<1523701205467926528:profile|AgitatedDove14> @<1529271085315395584:profile|AmusedCat74> Hi guys!
- "I think that by default it uses the host network so it can take care of that, are you saying you added k8s integration?" -> Yes, I modified the clearml-agent helm chart.
- "SSH allows access with password": it is a very long random password, not sure I see a risk here, wdyt? -> Currently, when enqueueing a task, clearml-session generates a long random password for SSH and VS Code and...
It had been working well until I removed the virtualenv and recreated it; then I reinstalled only clearml and clearml-session.
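For anyone trying to reproduce, roughly the steps I mean (a sketch; the venv path is just an example):

python -m venv .venv
source .venv/bin/activate
pip install clearml clearml-session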
The clearml server I installed is a self-hosted server, and developers log in using a fixed ID and password for authentication. Thatâs it!
Futhermore, to access ssh/vscode/jupyterlab directly without ssh tunneling, I modified the clearml-session script, and once I upload this script to the DevOps project in draft status, developers clone it to their own project. Then, they enqueue and wait for the command and URL to access ssh/vscode/jupyterlab, which will be displayed.
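A rough sketch of what the developers' clone-and-enqueue step can look like via the ClearML SDK (the task name "clearml-session-template" and the session name are just examples; "DevOps" and the "shelley" queue come from the setup above):

from clearml import Task

# get the draft session task from the DevOps project (task name is an example)
template = Task.get_task(project_name="DevOps", task_name="clearml-session-template")

# clone it (the clone stays a draft) and enqueue it on the GPU queue
session = Task.clone(source_task=template, name="my-interactive-session")
Task.enqueue(session, queue_name="shelley")
print("enqueued session task:", session.id)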