I mean, what should I write in a script to import the APIClient? (sorry if I'm not explaining myself properly 😅)
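Something along these lines is what I had in mind (just a rough sketch; I took the import path from the ClearML docs and I'm assuming credentials are already set up in clearml.conf):
```python
from clearml.backend_api.session.client import APIClient

# Uses the credentials from clearml.conf by default
client = APIClient()

# e.g. list a few recent tasks, just to check the client works
tasks = client.tasks.get_all(page_size=5)
print(tasks)
```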
Oddly enough I didn't run into this problem today 🤔 If it happens to me again, I'll return to this thread 🙂
Hi! Not really. It's rather random :/
Well, instead of plain functions or files I use components, because I need some of those steps to run on one machine and some on another. And it works perfectly fine (ignoring some minor bugs like this one). So I'm actually passing component-decorated functions to the 'helper_functions' parameter.
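To make it concrete, this is roughly my setup (the function names here are made up for illustration):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["x"])
def helper_step(x):
    # runs as its own task, possibly on another machine
    return x * 2

# I pass the component-decorated helper via 'helper_functions' so it gets
# packaged together with this step's script
@PipelineDecorator.component(return_values=["y"], helper_functions=[helper_step])
def main_step(x):
    return helper_step(x)
```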
BTW I would really appreciate it if you let me know when you get it fixed 🙏
Okay! I'll keep an eye out for updates.
My idea is to take advantage of the capability of getting parameters connected to a task from another task to read the path where the artifacts are stored locally, so I don't have to define it again in each script corresponding to a different task.
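In other words, something like this (project, task, and artifact names are placeholders):
```python
from clearml import Task

# Look up the task that produced the artifact...
prev_task = Task.get_task(project_name="my_project", task_name="preprocessing")

# ...and read the local path where its artifact is stored, instead of
# hard-coding that path again in every script
local_path = prev_task.artifacts["dataset"].get_local_copy()
print(local_path)
```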
Now it's okay. I have found a more intuitive way to work around it. I was facing the classic 'XY problem' :)
Sure, it would be very intuitive if the command to stop an agent were as easy as: clearml-agent daemon --stop AGENT_PID
But this path actually does not exist in my system, so how should I fix that?
Thanks to you for fixing it so quickly!
Great, thank you very much for the info! I just spotted the get_logger classmethod. As for the initial question, that's just the behavior I expected!
Anyway, is there any way to retrieve the information stored in the RESULTS tab of the ClearML Web UI?
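For example, something like this would cover my use case (I'm assuming get_reported_scalars returns what the RESULTS tab shows for scalars):
```python
from clearml import Task

task = Task.get_task(task_id="<task_id>")  # placeholder ID

# Nested dict: graph title -> series -> reported values
scalars = task.get_reported_scalars()
print(scalars)
```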
That's right, I don't know why I was trying to make it so complicated 😅
Sure! That definitely makes sense. Where can I specify callbacks in the PipelineDecorator API?
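I imagined something along these lines (I'm just guessing the parameter name from PipelineController.add_step; I don't know whether the decorator actually exposes it):
```python
from clearml.automation.controller import PipelineDecorator

def my_pre_callback(pipeline, node, parameters):
    # assumed signature, copied from PipelineController.add_step's callback
    print(f"About to launch {node.name}")

# 'pre_execute_callback' here is my guess, not a confirmed parameter
@PipelineDecorator.component(return_values=["msg"],
                             pre_execute_callback=my_pre_callback)
def step_1(msg: str):
    return msg + "\nI've survived step 1!"
```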
Well, I am thinking of the case where there are several pipelines in the system, so that when filtering a task by its name and project I can get several tasks. How could I build a filter for Task.get_task(task_filter=...) that returns only the task whose parent task is the pipeline task?
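Something like this is what I'm after (I'm assuming the task_filter dict accepts a 'parent' field; that part is a guess on my side):
```python
from clearml import Task

pipeline_task_id = "<pipeline_task_id>"  # placeholder

# Guessing that 'parent' is a valid filter field here
task = Task.get_task(
    project_name="my_project",
    task_name="step_1",
    task_filter={"parent": pipeline_task_id},
)
```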
When you said clearml-agent initial setup, are you talking about the agent section in clearml.conf or the CLI instructions? If it's the second case, I am starting the agent with the basic command: clearml-agent daemon --queue default. Are there any other settings I should specify for the agent?
Mmmm, you are right. Even if I had 1000 components spread across different project modules, only those components that are imported in the script where the pipeline is defined would be included in the DAG plot, is that right?
Well, the 'state.json' file is actually removed after the exception is raised.
But I was actually asking about accessing the Pipeline task ID, not the tasks corresponding to the components.
Okay, so the idea behind the new decorator is not to group all the defined steps under the same script so that they share the same environment, but rather to simplify the process of creating scripts for each step and to avoid manually calling Task.init in those scripts.
Regarding virtual environment creation from caching, I will keep running benchmarks (from what you say, it might be due to high workload on the servers we use).
So far I've been unlucky in the attempt of clearml recog...
Oh, I see. I guess somehow I can retrieve that information via Task.logger, since it is stored in JSON format? Thanks!
I am aware of the option to enable virtual environment caching, but that is still very time-consuming.
Mmm, I see. However, I think that only the components used in that pipeline should be shown, since it may be the case that you have defined, say, 1000 components and you only use 10 in a pipeline. I think listing them all would just clutter up the RESULTS tab for that pipeline task.
Or perhaps the complementary scenario, with a continue_on_failed_steps parameter that takes a list containing only the steps that can be ignored in case of failure.
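Purely hypothetical syntax, just to illustrate the suggestion (this parameter does not exist):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(
    name="my_pipeline",
    project="my_project",
    version="1.0",
    continue_on_failed_steps=["step_2"],  # hypothetical: only step_2 may fail
)
def executing_pipeline(msg: str):
    ...
```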
So great! It would be a feature that would make the work much easier, instead of having to clone the task and launch it with different parameters. It could even be considered more Pythonic. Do you have an immediate solution in mind to keep moving forward before the new release is ready? :)
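For reference, the manual workaround I meant is roughly this (IDs and parameter names are placeholders):
```python
from clearml import Task

# Clone an existing task and relaunch it with different parameters
template = Task.get_task(task_id="<template_task_id>")
cloned = Task.clone(source_task=template, name="clone with new params")
cloned.set_parameters({"General/learning_rate": 0.01})
Task.enqueue(cloned, queue_name="default")
```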
I have found it is not possible to start a pipeline B after a pipeline A. Following the previous example, I have added one more pipeline to the script:
```python
from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["msg"], execution_queue="model_trainings")
def step_1(msg: str):
    msg += "\nI've survived step 1!"
    return msg

@PipelineDecorator.component(return_values=["msg"], execution_queue="model_trainings")
def st...
```