Do you mean to kill the clearml-agent process after the task finishes running? What's the use case? I'm curious
Hi @<1540142651142049792:profile|BurlyHorse22> , it looks like an error in your code is raising the traceback. What is your code doing when the traceback occurs?
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , you can do it if you run in docker mode
What is the exact python version you're trying to run on?
Hi FancyWhale93 , do you have a snippet that reproduces this?
This is because Datasets have a new view now. Just under 'Projects' on the left bar you have a button for Datasets 🙂
Does your image have ssh installed? Can you run ssh from inside the container?
Hi QuaintJellyfish58 , you mean like through conda?
Hi ScantChimpanzee51 , I think you can get it via the API; it sits on task.data.output.destination
Retrieve the task object via the API and play with it a bit to see exactly where this sits 🙂
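In case it's useful, a minimal sketch of that (the task ID is a placeholder to fill in):
```
from clearml import Task

# Fetch an existing task by its ID
task = Task.get_task(task_id="<your_task_id>")

# The output destination should sit here; if the attribute path differs
# in your version, poke around task.data interactively
print(task.data.output.destination)
```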
AgitatedDove41 , this is the old documentation indeed. It's deprecated and shouldn't be used 🙂
The new documentation has everything in it, and more!
Hi AdventurousButterfly15 , are you able to clone locally? What version of the agent are you using?
Hi @<1523701062857396224:profile|AttractiveShrimp45> , can you please share some screenshots of what you see and also share a code snippet of what reproduces this behavior?
In that case you have the "packages" parameter for both the controller and the steps
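A rough sketch of what I mean (names and paths here are placeholders, and the controller-level packages argument may depend on your clearml version):
```
from clearml import PipelineController

def preprocess(data_path):
    import pandas as pd
    return pd.read_csv(data_path)

# "packages" on the controller itself
pipe = PipelineController(name="my_pipeline", project="examples", packages=["clearml"])

# "packages" on an individual step
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    function_kwargs=dict(data_path="data.csv"),
    packages=["pandas", "scikit-learn"],
)

pipe.start_locally(run_pipeline_steps_locally=True)
```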
Hmmmm you can automate the cleanup: iterate through the folders; if a matching experiment exists, skip the folder, and if none exists, delete it
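Something along these lines, assuming each folder is named after a task ID (adjust to your actual layout):
```
from pathlib import Path
import shutil

from clearml import Task

root = Path("/path/to/experiment/folders")  # placeholder path
for folder in root.iterdir():
    if not folder.is_dir():
        continue
    try:
        # Raises if no task with this ID exists on the server
        Task.get_task(task_id=folder.name)
    except Exception:
        # No matching experiment -> delete the folder
        shutil.rmtree(folder)
```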
Hi SmugTurtle78 , I think you can set it up as follows (or something similar):
```
pipe.add_step(
    name="stage_train",
    parents=["stage_process"],
    base_task_project="examples",
    base_task_name="Pipeline step 3 train model",
    parameter_override={"General/dataset_task_id": "${stage_process.id}"},
)
```
Note that in parameter_override I take a task id from a previous step and insert it into the configuration/parameters of the current step. Is that what you're looking for?
Hi GrittyCormorant73 ,
How are you scheduling a task now using TaskScheduler?
Hmmm this is strange. Still not working for you?
Pipeline is a unique type of task, so it should detect it without issue
Pipelines have IDs, so you can try using a pipeline ID. I think it should work
Hi @<1524560082761682944:profile|MammothParrot39> , can you please elaborate on exactly what you did?
Did you go into the task view of the pipeline step and change its name, but back in the pipelines view the name didn't update?
Hi @<1546303293918023680:profile|MiniatureRobin9> , if you use pipelines from decorators you can certainly use if statements to decide where/how the run goes, as in the sketch below
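For example (step logic and threshold here are just placeholders):
```
from clearml import PipelineDecorator

@PipelineDecorator.component()
def evaluate(model_id):
    # placeholder metric computation
    return 0.9

@PipelineDecorator.component()
def deploy(model_id):
    print(f"deploying {model_id}")

@PipelineDecorator.pipeline(name="conditional_pipeline", project="examples", version="1.0")
def pipeline_logic(model_id):
    accuracy = evaluate(model_id)
    # Plain python control flow decides which steps run next;
    # using the step's return value here waits for the step to finish
    if accuracy > 0.8:
        deploy(model_id)
```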
Hi @<1523701083040387072:profile|UnevenDolphin73> , I think you can play with the auto_connect_frameworks parameter of Task.init()
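For example (the framework keys here are just for illustration):
```
from clearml import Task

# Disable automatic logging for specific frameworks, keep the rest on
task = Task.init(
    project_name="examples",
    task_name="manual logging",
    auto_connect_frameworks={"matplotlib": False, "tensorboard": False},
)
```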
What is being reported that isn't auto-logged?
Hi @<1574931886449364992:profile|JealousDove55> , as long as you're running python code I think ClearML can help you with logging, visibility & automation.
Can you elaborate a bit on your use case?
DistressedKoala73 , can you send me a code snippet to try and reproduce the issue please?
Edit clearml.conf on the agent side and add the extra index URL there - https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
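It would look something like this (the URL is a placeholder for your private index):
```
agent {
    package_manager {
        # extra PyPI index URLs the agent uses when installing task packages
        extra_index_url: ["https://my.private.pypi.example/simple"]
    }
}
```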
Hi GrittyCormorant73 ,
Did you define a single queue or multiple queues?