Hi @<1523701295830011904:profile|CluelessFlamingo93> , I think you can also control the agent sampling rate (to sample the queue every 10 or 20 seconds instead of 5, for example)
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I would suggest leaving your details here:
None
Hi @<1523701868901961728:profile|ReassuredTiger98> , you can simply set up the token in clearml.conf of the agent so the agent will have the rights to clone. What do you think?
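Something along these lines in the agent's clearml.conf (values are placeholders - a personal access token usually goes in git_pass):
```
agent {
    # credentials the agent uses to clone private repositories
    git_user: "my-git-username"
    git_pass: "my-personal-access-token"
}
```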
Hi @<1643060807786827776:profile|WorriedPeacock92> , you mean like this?
None
Hi @<1836213542399774720:profile|ConvincingDragonfly85> , I believe you're looking for the alias parameter of Dataset.get() - None
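For example (project/dataset names are placeholders):
```python
from clearml import Dataset

# Passing alias registers the dataset ID on the calling task,
# so it is visible (and overridable) from the UI
ds = Dataset.get(
    dataset_project="my_project",
    dataset_name="my_dataset",
    alias="training_data",
)
local_path = ds.get_local_copy()
```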
Hi @<1787653555927126016:profile|SoggyDuck67> , you can add it as column and then filter from there.
See related screenie 🙂
As part of the docker compose, there is a container running a special agent that is dedicated to service tasks for the system, like the pipeline controllers
I'm afraid it's not supported right now.
The daemon agent on the machine would have to be started and monitored differently - you're welcome to add a PR though 🙂
Hi EnormousCormorant39 ,
> is there a way to enqueue the dataset `add` command on a worker
Can you please elaborate a bit on this? Do you want to create some sort of trigger action to add files to a dataset?
Looks like you're having issues connecting to the server through the SDK. Are you able to access the webUI? Is it a self hosted server?
Hi @<1523708920831414272:profile|SuperficialDolphin93> , I think this is what you're looking for
None
You can use scroll_id to scroll through the tasks. When you call tasks.get_all you will get a scroll_id back. Use that scroll_id in the following calls to go through the entire database. Considering you have only 2k tasks, you can cover this in 4 scrolls 🙂
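Roughly like this with the raw REST API (a sketch - credentials, the server URL and the page size are placeholders, adjust to your setup):
```python
import requests

api_server = "https://api.clear.ml"  # or your self hosted api server

# exchange the credentials from clearml.conf for a token
token = requests.post(
    f"{api_server}/auth.login", auth=("ACCESS_KEY", "SECRET_KEY")
).json()["data"]["token"]
headers = {"Authorization": f"Bearer {token}"}

tasks, scroll_id = [], None
while True:
    body = {"size": 500, "only_fields": ["id", "name", "status"]}
    if scroll_id:
        body["scroll_id"] = scroll_id
    data = requests.post(
        f"{api_server}/tasks.get_all", json=body, headers=headers
    ).json()["data"]
    if not data["tasks"]:
        break
    tasks += data["tasks"]
    scroll_id = data.get("scroll_id")

print(f"fetched {len(tasks)} tasks")
```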
Hi @<1806135344731525120:profile|GrumpyDog7> , I would personally go for the init script route. What part didn't work for you?
Where did you get the 9008/9080/9081 ports from? I don't see them anywhere in the docker compose
The controller simply runs the logic of the pipeline and requires minimal resources. All the heavy computation happens on the nodes/machines running the steps
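For example, a typical setup looks roughly like this (queue names are just examples/defaults):
```python
from clearml import PipelineController

pipe = PipelineController(name="my_pipeline", project="examples", version="1.0")

# the heavy step runs on whatever queue you point it at
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="training task",
    execution_queue="default",
)

# the controller itself is lightweight, so it usually goes to the services queue
pipe.start(queue="services")
```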
Hi @<1858681577442119680:profile|NonchalantCoral99> , I would suggest opening a new workspace with PRO and tying it to a generic email (devop@company.com for example), so that when people move on you won't have a problem with emails.
Well not really
Please elaborate 🙂
Hi @<1547028074090991616:profile|ShaggySwan64> , how are you currently saving models? What framework are you using? Usually the output models are listed in the 'artifacts' section of a task and on the model side, there is the 'lineage' tab to see which task created the model and what other tasks are using it as input.
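For example, with PyTorch and the automatic framework logging left on, something like this should already populate those sections (a minimal sketch):
```python
import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="model logging demo")

model = torch.nn.Linear(10, 2)
# with auto-logging enabled (the default), this checkpoint is captured as an
# output model of the task, so it shows up under the task's artifacts and the
# task is recorded as the model's creator in the lineage tab
torch.save(model.state_dict(), "model.pt")
```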
Hi @<1523701949617147904:profile|PricklyRaven28> , are you using the docker argument and it's not working? Are you sure the agent is running in docker mode?
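Just for reference, this is what I mean by the docker argument (image and arguments are placeholders, and I think the call looks roughly like this):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker demo")
# tells the agent which image (and extra docker args) to use for this task -
# it is only honored when the agent was started in docker mode,
# e.g. clearml-agent daemon --queue default --docker
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="--ipc=host",
)
```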
Hi @<1523708602928336896:profile|HungryArcticwolf62> , can you share an isolated code snippet that reproduces this? What version of the agent are you using now?
Also, what do you mean by skipped? What happens to the pipeline?
In the UI, check under the Execution tab in the experiment view and scroll to the bottom - there is a field called "OUTPUT". What is in there? Check an experiment that is giving you trouble.
Hi @<1547028074090991616:profile|ShaggySwan64> , can you please provide minimal sample code that reproduces this? The local imports - are they from the private repo?
I think the controller and steps need to be in the same repository
Same repo as the private repo?
Hi @<1523701949617147904:profile|PricklyRaven28> , note that steps in a pipeline are special tasks marked with a hidden system tag - I think you need to include hidden tasks in your search
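Something like this should surface them (a sketch from memory - the hidden .pipelines sub-project path is an assumption, adjust it to your project structure):
```python
from clearml import Task

# search_hidden includes tasks marked with the hidden system tag,
# which is how pipeline step tasks are stored
step_tasks = Task.get_tasks(
    project_name="my_project/.pipelines/my_pipeline",
    task_filter={"search_hidden": True},
)
```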
Hi ProudElephant77 , you will need to install the agents on that machine. The ClearML server doesn't assume any GPU capabilities since it is only the control plane for ClearML
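Roughly like this on the GPU machine (queue name is just an example):
```
pip install clearml-agent
clearml-agent init                          # paste the server credentials
clearml-agent daemon --queue default --docker
```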
Hi PricklyRaven28 , can you try with the latest clearml version (1.7.1)?