Hi AgitatedDove14 , I’m using clearml-task to queue a task on a remote agent. The git remote URL is “ ssh://git@0.0.0.0:1234/path/to/repo.git ”, and clearml strips the username from it ( https://github.com/allegroai/clearml/blob/aad01056b548660bb271c4f98447b715b8ba4c7d/clearml/backend_interface/task/repo/scriptinfo.py#L909 ) to cover cases like https://username@github.com/username/repository.git , so the final URL is ssh://0.0.0.0:1234/path/to/repo.git , not ssh://git@0.0.0.0:1234/path/to/repo.g...
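A minimal sketch (not the actual clearml code) of what the username-stripping behavior described above looks like with `urllib.parse`: it is fine for `https://username@github.com/...` style remotes, but for `ssh://` remotes the `git` user is needed for authentication, so dropping it breaks the clone:

```python
from urllib.parse import urlparse

def strip_username(url: str) -> str:
    """Return the URL with the user info removed from the netloc."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.port:
        host = f"{host}:{parsed.port}"
    return parsed._replace(netloc=host).geturl()

print(strip_username("https://username@github.com/username/repository.git"))
# -> https://github.com/username/repository.git   (still clonable)
print(strip_username("ssh://git@0.0.0.0:1234/path/to/repo.git"))
# -> ssh://0.0.0.0:1234/path/to/repo.git          (the 'git' user is lost)
```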
@<1523701070390366208:profile|CostlyOstrich36> yes, WebApp: 1.12.1-397 • Server: 1.12.1-397 • API: 2.26.
Docker version 28.0.1, build 068a01e (updated to this version a few weeks ago).
@<1523701087100473344:profile|SuccessfulKoala55> yes, Elasticsearch has failed, and I don’t understand why
AgitatedDove14 we can read /sys/fs/cgroup/memory/memory.limit_in_bytes to get the limit
https://faun.pub/understanding-docker-container-memory-limit-behavior-41add155236c
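A short sketch of reading that limit from inside the container. The first path is the cgroup v1 file referenced above; the cgroup v2 fallback path is an assumption based on common setups, where the file is `memory.max` and may contain "max" when unlimited:

```python
from pathlib import Path
from typing import Optional

def container_memory_limit_bytes() -> Optional[int]:
    for limit_file in (
        Path("/sys/fs/cgroup/memory/memory.limit_in_bytes"),  # cgroup v1
        Path("/sys/fs/cgroup/memory.max"),                    # cgroup v2 (assumption)
    ):
        if limit_file.exists():
            value = limit_file.read_text().strip()
            if value != "max":
                return int(value)
    return None  # no limit found / unlimited

print(container_memory_limit_bytes())
```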
CostlyOstrich36 thank you! appreciate the quick response!
AgitatedDove14
Are you saying the second time this line is missing?
Yes.
Can you send the full Task log?
I will send the log in direct messages.
SuccessfulKoala55 yes, I have /usr/bin/python3.8, but it doesn’t help even if I set it in agent.python_binary. python3.8 is set as alternative #1 for python, but conda for some reason creates the env with python3.6...
Executing Conda: /home/user/conda/bin/conda env remove -p /home/jovyan/.clearml/venvs-builds/3.6 --quiet --json
docker will not actually limit the “view of the memory”, it will just kill the container if you pass the memory limit; this is a limitation of the docker runtime
it will only do that if the OOM killer is enabled
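A small sketch of checking that, assuming cgroup v1, where `memory.oom_control` contains lines such as `oom_kill_disable 0` and `under_oom 0`:

```python
from pathlib import Path

def oom_killer_enabled() -> bool:
    oom_control = Path("/sys/fs/cgroup/memory/memory.oom_control")
    if not oom_control.exists():
        return True  # assume the default (enabled) if the file is absent
    for line in oom_control.read_text().splitlines():
        key, _, value = line.partition(" ")
        if key == "oom_kill_disable":
            return value.strip() == "0"
    return True

print(oom_killer_enabled())
```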
sorry, just found it)
AgitatedDove14
Specifically
/tmp/clearml_agent.ssh.rbw8o0t7
is the copy of the .ssh that the agent created, and now it is mounting it into the container
but why is it mounted only once? the second and following containers do not mount the folder
RoundMosquito25 hi, any updates?
oh, should I use --cpu-only flag?
Hi CostlyOstrich36 , I can’t find any option for specifying multiple workers for one GPU. Do you mean just run this command twice? clearml-agent daemon --queue myqueue --gpus 0
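A sketch of what running the daemon twice for the same GPU might look like (please verify against the clearml-agent docs). Giving each process a unique worker id via the CLEARML_WORKER_ID environment variable is an assumption here; the queue name and host prefix are placeholders:

```bash
# Two agent daemons sharing GPU 0, each with its own worker id (assumption)
CLEARML_WORKER_ID=myhost:gpu0a clearml-agent daemon --queue myqueue --gpus 0 --detached
CLEARML_WORKER_ID=myhost:gpu0b clearml-agent daemon --queue myqueue --gpus 0 --detached
```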
When I updated the URL of the remote repository in my git client
SuperiorPanda77 did you just replace “remote” for the client?
My remote in the git client is ok:
ssh://git@<address>:5109<repo_path>.git
so I don’t understand why and where it changes :(
task log
task ca3ab0ce39aa436f9e656fff378a2c25 pulled from c39519fcfb3f4353808fd266d6100795 by worker v012-0:gpuGPU-0929fd0f-eff1-91f1-854e-9874599660c3
2022-12-12 16:32:21
Current configuration (clearml_agent v1.5.1, location: /tmp/.clearml_agent.guezjnez.cfg):
api.version = 1.5
api.verify_certificate = true
api.default_version = 1.5
api.http.max_req_size = 15728640
api.http.retries.total = 240
api.http.retries.connect = 240
api.http.retries.read = 240
api.http.retri...
Hi CostlyOstrich36
How are you mounting the credentials?
Is this also mounted into the docker itself?
as I wrote above, it is mounted automatically: '-v', '/tmp/clearml_agent.ssh.kqzj9sky:/root/.ssh'
What version of
ClearML-Agent
are you using?
1.3.0
AgitatedDove14 the best option would be custom charts in Web UI, like in wandb: https://docs.wandb.ai/ref/app/features/custom-charts
But pdf is acceptable too.
sure
print(APIClient().tasks.get_all(["95db561a08304a1faac3aabcb117412e"]))
{'id': '95db561a08304a1faac3aabcb117412e', 'name': 'task'}
AgitatedDove14 for example let’s add to https://github.com/allegroai/clearml/blob/master/examples/frameworks/catboost/catboost_example.py second catboost model training:
...
catboost_model = CatBoostRegressor(iterations=iterations, verbose=False)
catboost_model2 = CatBoostRegressor(iterations=iterations+200, verbose=False)
...
catboost_model.fit(train_pool, eval_set=test_pool, verbose=True, plot=False, save_snapshot=True)
catboost_model2.fit(train_pool, eval_set=test_pool, verbose=True,...
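A self-contained sketch of the two-model scenario described in the snippet above. The dataset, project/task names and the omission of snapshotting are assumptions for illustration; the real example at the linked URL is set up differently:

```python
from catboost import CatBoostRegressor, Pool
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from clearml import Task

# Hypothetical project/task names
task = Task.init(project_name="examples", task_name="catboost two models")

# Placeholder data instead of the dataset used in the original example
X, y = make_regression(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
train_pool = Pool(X_train, y_train)
test_pool = Pool(X_test, y_test)

iterations = 100
catboost_model = CatBoostRegressor(iterations=iterations, verbose=False)
catboost_model2 = CatBoostRegressor(iterations=iterations + 200, verbose=False)

# Two fit() calls in the same task; the ClearML integration logs both
catboost_model.fit(train_pool, eval_set=test_pool, verbose=True, plot=False)
catboost_model2.fit(train_pool, eval_set=test_pool, verbose=True, plot=False)
```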
AgitatedDove14 no, it’s not a request.
I have a custom python class that uses a lot of models from frameworks already supported by ClearML. I want to enable auto reporting for all models by using clearml_task.connect(my_custom_class_instance), but it doesn’t work the way I need it to: there is only one loss curve, because the graph is redrawn every time a new instance starts training.
Is there any way to report all instances inside my custom class without ...
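Not the connect() mechanism being asked about, but a sketch of a workaround: report each model instance's loss explicitly under its own series name, so the curves do not overwrite one another. The wrapper class and its callback are hypothetical; Logger.report_scalar is the standard explicit-reporting call:

```python
from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="multi-model reporting")

class MyWrapper:  # hypothetical stand-in for the custom class
    def __init__(self, name):
        self.name = name  # e.g. "catboost_1", "catboost_2"

    def on_iteration(self, iteration, loss):
        # One chart ("loss"), one series per wrapper instance
        Logger.current_logger().report_scalar(
            title="loss", series=self.name, value=loss, iteration=iteration
        )
```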
AgitatedDove14 done) btw, could you show me the place in the code where scalars are written? I want to make a hotfix
CostlyOstrich36 no, there are only task_id and name in the response
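If the goal is the full task record rather than the bare id/name pair returned above, one option is the SDK route via Task.get_task; a sketch (the task id is the one from the earlier message):

```python
from clearml import Task

task = Task.get_task(task_id="95db561a08304a1faac3aabcb117412e")
print(task.name)
print(task.get_parameters())           # hyperparameters as a flat dict
print(task.get_last_scalar_metrics())  # last reported scalar values
```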

