Hi @<1547028031053238272:profile|MassiveGoldfish6> , what version of clearml-serving do you have? Can you please add the full terminal outputs for better context?
Hi SpicyOtter88 , how are you adding the plots?
By default it will use the packages that were detected in the environment. You can override that default behaviour with this.
I see, maybe open a GitHub issue for this to follow up
Hi BoredBat47 , this happens only when you use the --foreground flag?
Is this the full error? What version of clearml-agent are you using? What OS are you on?
I've also suspected as much. I've asked the guys to check out the credentials starting with TX4PW3O (what you provided). They managed to use the credentials successfully, without errors.
Therefore, it is a configuration issue.
Hi,
I can't seem to reproduce. The steps I tried are as follows:
Compare 2 experiments
Enough scalar graphs to have a scroll bar
Click on the eye to make some graphs disappear, they disappear but no empty spaces are shown. Can you maybe add a screenshot?
Hi @<1569496075083976704:profile|SweetShells3> , and how do you expect to control the contents of the file? Via the UI or to upload it and then run the pipeline?
CheerfulGorilla72 , I will take a look soon 🙂
@<1577468638728818688:profile|DelightfulArcticwolf22> , after checking internally with the guys I think you should have received an email from Tina about 9 days ago
Did you make all the required changes in the docker compose?
Hmmm, maybe you could save it as an env var. There isn't a 'default' server per se, since you can deploy anywhere yourself. Regarding checking if it's alive, you can either ping it with curl or check the docker status of the server 🙂
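For example, a minimal liveness check sketch (the env var name, default port 8008, and the `debug.ping` endpoint are assumptions based on a default server deployment):

```python
import os
import urllib.error
import urllib.request


def server_alive(api_server, timeout=5):
    """Return True if the apiserver responds to a ping, False otherwise."""
    try:
        # debug.ping is assumed to be exposed by the apiserver
        with urllib.request.urlopen(
            f"{api_server.rstrip('/')}/debug.ping", timeout=timeout
        ) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


# Read the server address from an env var, falling back to a local default
api_server = os.environ.get("CLEARML_API_SERVER", "http://localhost:8008")
```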
Just try as-is first with this docker image, and verify that the code can access the CUDA driver, unrelated to the agent
Hi @<1610083503607648256:profile|DiminutiveToad80> , can you please add a full log of the run?
You have two queues, one for 1xGPU and the other for 2xGPU, and two workers running on the GPU machine, each listening to the relevant queue. Is that the setup?
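For reference, a setup like that would be started with something along these lines (queue names and GPU indices are just examples, assuming at least 3 GPUs on the machine):

```shell
# Worker 1: listens to the single-GPU queue, pinned to GPU 0
clearml-agent daemon --queue 1xGPU --gpus 0 --detached

# Worker 2: listens to the dual-GPU queue, pinned to GPUs 1,2
clearml-agent daemon --queue 2xGPU --gpus 1,2 --detached
```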
Also, what GPUs are you running on that machine?
What is the command you used to run the agent?
unrelated to the agent itself
It is returned in queues.get_all. I'd suggest navigating to the webUI and checking what the webUI is sending to the server (It's all API calls) and replicating that in code with the APIClient
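As a rough sketch of what replicating that call looks like at the HTTP level (the auth scheme and default endpoint address here are assumptions; the APIClient wraps the same call as `client.queues.get_all()`):

```python
import base64
import json
import urllib.request


def build_queues_get_all(api_server, access_key, secret_key):
    """Build the same POST request the webUI sends to queues.get_all."""
    # Key/secret as basic auth is an assumption for illustration
    token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
    return urllib.request.Request(
        f"{api_server.rstrip('/')}/queues.get_all",
        data=json.dumps({}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )


# req = build_queues_get_all("http://localhost:8008", "KEY", "SECRET")
# with urllib.request.urlopen(req) as resp:
#     queues = json.load(resp)
```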
Can you please add the full log of the execution?
I think these are the relevant methods 🙂
https://clear.ml/docs/latest/docs/references/sdk/task#register_artifact
https://clear.ml/docs/latest/docs/references/sdk/task#unregister_artifact
And later you can use
https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact
When you have a finalized version of what you want
Hi @<1790915053747179520:profile|KindParrot86> , currently Slack alerts are available as an example for the OS - None
You can write an adapter for it to send emails instead of Slack alerts
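As a sketch, the Slack-posting step in the example could be swapped for something like this (the function names, message format, and SMTP host are illustrative assumptions, not part of the example's API):

```python
import smtplib
from email.message import EmailMessage


def format_alert(task_name, status, task_url):
    """Build the alert text that would otherwise be posted to Slack."""
    return f"Task '{task_name}' changed status to {status}\n{task_url}"


def send_email_alert(subject, body, sender, recipient, smtp_host="localhost"):
    """Send the alert via SMTP instead of a Slack webhook."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```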
Can you check the apiserver logs for any issues?
I mean, what Python version did you use when you initially ran it locally?
Do you mean to kill the clearml-agent process after the task finishes running? What is the use case? I'm curious
Hi @<1614069770586427392:profile|FlutteringFrog26> , if I'm not mistaken ClearML doesn't support running from different repos. You can only clone one code repository per task. Is there a specific reason these repos are separate?
Hi FierceHamster54 , I'm taking a look 🙂
JuicyFox94 , can you please assist? 🙂