Hi SuccessfulKoala55, I want to trigger a retraining pipeline on a set cadence of every few months.
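For anyone looking at this later: the scheduling itself can live outside ClearML (e.g. a cron job that enqueues the pipeline task). Below is a minimal stdlib sketch of the "is the retraining due?" check such a wrapper could use; the function names are my own, not ClearML APIs, and the cadence of 3 months is an assumed example.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later, clamping to month end."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    # Clamp the day so e.g. Jan 31 + 1 month -> Feb 28/29.
    return date(year, month, min(d.day, days_in_month))

def is_due(last_run: date, today: date, cadence_months: int = 3) -> bool:
    """True when at least `cadence_months` have elapsed since the last run."""
    return today >= add_months(last_run, cadence_months)
```

When `is_due` returns True, the wrapper would enqueue the pipeline (for example via ClearML's SDK) and record today as the new `last_run`.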
SuccessfulKoala55 Thanks for letting me know. I'll immediately give this a try.
JitteryCoyote63 CostlyOstrich36 I want to echo the statement that the documentation should warn against the snap installation of Docker.
I also want to highlight that the snap installation has major problems when it comes to volume mounting of privileged paths, etc. On my bare-metal instance, even docker-compose up started having problems, which were resolved only when I switched to the installation from the official Docker documentation.
For other people's reference: None
@<1523701087100473344:profile|SuccessfulKoala55> I have also tried using the API reference listed None to unregister a worker. However, the worker goes unregistered for a few seconds and then comes back up in the UI. The POST request returns a 200 response and reports that the job is done.
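For reference, this is roughly the request I was making. This is a sketch, not a definitive recipe: the host URL and worker id are placeholders, authentication is omitted, and I'm assuming the endpoint path matches the workers.unregister call in the API reference.

```python
# Hypothetical helper: builds the request for the workers.unregister endpoint.
# Host and worker id are placeholders; auth (API key/secret) is omitted.
def build_unregister_request(api_host: str, worker_id: str):
    url = f"{api_host.rstrip('/')}/workers.unregister"
    payload = {"worker": worker_id}
    return url, payload

url, payload = build_unregister_request("http://localhost:8008", "my-host:gpu0")
# The actual call would be something like:
#   requests.post(url, json=payload, auth=(access_key, secret_key))
```

My suspicion (unconfirmed) is that a still-running agent daemon simply re-registers itself on its next status report, which would explain the worker reappearing seconds later.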
@<1523701205467926528:profile|AgitatedDove14> this worked and gave me exactly what I needed. Thanks.
Other information about the experiments are visible, yes.
I do see plots, yes.
Scalars, sometimes I do and sometimes I don't. I am not reporting any scalars manually in my experiments. Whenever I do see scalars, they have been the machine monitoring values for CPU & GPU, like below.
@<1523701087100473344:profile|SuccessfulKoala55> for training experiments, I do see the scalars show the learning rate, etc., being picked up from the model training. They update in real time along with the GPU & CPU monitoring scalars I showed above. However, the console doesn't show any sort of logs and just remains a blank screen.
Hi @<1523701087100473344:profile|SuccessfulKoala55>, this is what I see in the Settings page:
- WebApp: 1.8.0-254 • Server: 1.8.0-254 • API: 2.22
Sure @<1523701435869433856:profile|SmugDolphin23> . Thanks for the quick response.
Let me know if you need more information from me. I can add that information in a git issue if you want to track it that way.
Thanks @<1523701070390366208:profile|CostlyOstrich36> happy to contribute to the community even if it is bug reports 🙂
Hi @<1523701070390366208:profile|CostlyOstrich36> , sorry to tag you directly. Is this something that you have clarity on? Our team currently has an exploration where we are trying to see how we can optimize the pipeline we've already defined for the edge device.
Great start! I'm not sure if I'll be able to commit time during the hackathon, but I'd like to help with this extension once I have a leaner period at work. I hope this will be open-source & accept contributions @<1541954607595393024:profile|BattyCrocodile47>
Great question!
Actually, I was wondering if ClearML supports that integration: the clearml-agent daemon requesting temporary injection of the required credentials from the API server, which in turn requests them from the Secret Manager. How does the Enterprise version of ClearML currently do it, given that it has its own vault?
Hi @<1523701070390366208:profile|CostlyOstrich36> , I took a look at the CLI & SDK documentation for the Dataset class, but it didn't look like I had an option to control the preview. Am I looking at the wrong place? Apologies if I missed something from the documentation.
Thanks @<1523701070390366208:profile|CostlyOstrich36> . We were only changing this for each output_uri, but this makes our lives a bit easier. Thanks :)
@<1523701070390366208:profile|CostlyOstrich36> any thoughts?
- Yes, in this scenario both the Agent & the code were present in the same machine
- The queue being assigned to default was something we had changed after some debugging, yes.
- We verified from the ClearML UI that the queue that the task is being assigned to wasn't default.
- The pipeline only worked through remote execution when the entrypoint script was in the root of the git repo (which kept getting picked up as the working directory)
This is on a self-hosted instance of ClearML running on docker-compose as a PoC
Is there a way I can auto-increment the iteration value instead of specifying an integer? SuccessfulKoala55
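While waiting for an answer, one stdlib workaround (my own sketch, not a ClearML feature) is to keep a monotonically increasing counter on the client side and pass it wherever an iteration integer is expected:

```python
import itertools

# Auto-incrementing iteration counter (a workaround sketch; the logger
# still receives an explicit integer, it's just generated for you).
_iteration = itertools.count(start=0)

def next_iteration() -> int:
    return next(_iteration)

# Hypothetical usage with a ClearML Logger instance:
#   logger.report_scalar("loss", "train", value=loss, iteration=next_iteration())
```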
What is weird is that (I think?) it worked Friday when I used it for another model object. Not sure what is happening here.
It says that the environment setup is completed, and it has cloned the project as well, but why is it facing an issue here?
hey CostlyOstrich36 I have shared the log file that prints out device IDs, credentials, endpoints, etc. in a private chat, to avoid accidentally sharing anything I might not have identified as a security problem. I have redacted most of those things, but just wanted to be sure.
Hey CostlyOstrich36 , do you want me to share a specific part of the execution section? There is quite a bit of content hidden under scroll.
For anyone visiting this thread later: what worked for me is providing all of the defining parameters of the ClearML worker and then passing the --stop argument at the end of the command.
Ex: sudo clearml-agent daemon --detached --queue gpu_default gpu_priority --gpus 0 --docker --stop
What didn't work for me: sudo clearml-agent daemon --stop <worker-id>
Hey @<1523701087100473344:profile|SuccessfulKoala55> that was actually my first approach. I used the command clearml-agent daemon --stop <worker-id>
However, this is what I saw:
@<1523701087100473344:profile|SuccessfulKoala55> What would be the recommended command to delete/unregister an agent? None of the CLI commands seemed relevant to this operation other than clearml-agent daemon --stop, which seems to only stop running instances of the agent without unregistering them.
For more context: I am using the docker-compose self-hosted version running on Linux systems.