 
Exactly!! That's what I was looking for: creating the pipeline but not launching it. Thanks again AgitatedDove14
Mmm I see. So the agent is taking the parameters from the base task registered in the server. Then if I call  task.get_parameters_as_dict  for a task that has not been executed by an agent, should I get the original types of the values?
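Something like this is what I had in mind to check it (just a sketch; the task id is a placeholder):
` from clearml import Task

# "<template task id>" is only a placeholder for a task that was never run by an agent
template = Task.get_task(task_id="<template task id>")
params = template.get_parameters_as_dict()
print({key: type(value) for key, value in params.items()}) `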
Ok! I'll try to spin up an agent with the  --services-mode  flag and I will give you feedback.
But I was actually asking about accessing the Pipeline task ID, not the tasks corresponding to the components.
Oh, I see. I guess somehow I can retrieve that information via  Task.logger  , since it is stored in JSON format? Thanks!
AgitatedDove14 BTW, I got the notification from GitHub telling me you had committed the fix, so I went ahead and updated. After testing the code again, I see the task parameter dictionary has been removed properly (it is now broken down into flat parameters). However, I still have the same problem with duplicate tasks, as you can see in the image.
That's right, I don't know why I was trying to make it so complicated 😅
Yes, when the parameters that are connected do not have nested dictionaries, everything works fine. The problem comes when I try to do something like this:
` from clearml import Task
task = Task.init(project_name="Examples", task_name="task with connected dict")
args = {}
args["period"] = {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}
task.connect(args) `
and the clone task is like this:
` from clearml import Task
template_task = Task.get_task(task_id="<Your template task id>"...
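Just for context, the rest of that script is roughly like this (a sketch; the queue name and the override key are only examples):
` from clearml import Task

template_task = Task.get_task(task_id="<Your template task id>")
# Clone the template, override one of the flattened nested values and enqueue it
cloned_task = Task.clone(source_task=template_task, name="clone with connected dict")
cloned_task.set_parameters({"General/period/start": "2021-01-01 00:00"})
Task.enqueue(cloned_task, queue_name="default") `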
Mmm I see. However, I think that only the components used for that pipeline should be shown, as it may be the case that you have defined, say, 1000 components but only use 10 of them in a pipeline. I think listing them all would just clutter up the results tab for that pipeline task.
The scheme is similar to the following:
`                       main_pipeline
               (PipelineDecorator.pipeline)
                             |
            |---------------------------------|
            |                                 |
inference_orchestrator_1          inference_orchestrator_2
(PipelineDecorator.component,     (PipelineDecorator.component,
 acting as a pipeline)             acting as a pipeline)
            |                                 |
...
In my use case I have a pipeline that executes inference tasks with several models simultaneously. Each inference task is actually a component that acts as a pipeline, since it executes the required steps to generate the predictions (dataset creation, preprocessing and prediction). For this, I'm using the new pipeline functionality ( PipelineDecorator )
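Stripped down, the code looks more or less like this (just a sketch; the names, project and number of steps are made up):
` from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["predictions"])
def inference_orchestrator_1(model_id):
    # Acts as a sub-pipeline: dataset creation, preprocessing and prediction
    predictions = f"predictions of {model_id}"
    return predictions

@PipelineDecorator.component(return_values=["predictions"])
def inference_orchestrator_2(model_id):
    predictions = f"predictions of {model_id}"
    return predictions

@PipelineDecorator.pipeline(
    name="main_pipeline",
    project="Examples",
    version="1.0",
    pipeline_execution_queue="controllers",
)
def main_pipeline():
    preds_1 = inference_orchestrator_1(model_id="model_1")
    preds_2 = inference_orchestrator_2(model_id="model_2")

if __name__ == "__main__":
    main_pipeline() `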
AgitatedDove14  The pipelines are executed by the agents that are listening to the queue given by  pipeline_execution_queue="controllers"
Well, I am thinking in the case that there are several pipelines in the system and that when filtering a task by its name and project I can get several tasks. How could I build a filter for  Task.get_task(task_filter=...)  that returns only the task whose parent task is the pipeline task?
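What I am trying looks roughly like this (not sure 'parent' is the right filter field, I am just guessing from the server API; the names are placeholders):
` from clearml import Task

tasks = Task.get_tasks(
    project_name="Examples",
    task_name="my step task",
    # restrict the match to children of the pipeline (controller) task
    task_filter={"parent": "<pipeline task id>"},
)
print([t.id for t in tasks]) `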
Okay! I'll keep an eye out for updates.
Mmmm you are right. Even if I had 1000 components spread across different project modules, only those components that are imported in the script where the pipeline is defined would be included in the DAG plot, is that right?
Hey AgitatedDove14 ! Any news on this? 🙂
SuccessfulKoala55 I have not tried yet with argparse, but maybe I will encounter the same problem
I don't know if you remember the need I had some time ago to launch the same pipeline through configuration. I've been thinking about it and I think PipelineController fits my needs better than PipelineDecorator in that respect.
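Roughly, what I have in mind with PipelineController is something like this (a sketch; the project, task names, override values and queue are placeholders):
` from clearml.automation import PipelineController

pipe = PipelineController(name="inference_pipeline", project="Examples", version="1.0.0")
pipe.add_step(
    name="preprocessing",
    base_task_project="Examples",
    base_task_name="preprocessing template",
    parameter_override={"General/period/start": "2021-01-01 00:00"},
)
pipe.add_step(
    name="prediction",
    parents=["preprocessing"],
    base_task_project="Examples",
    base_task_name="prediction template",
)
pipe.start(queue="controllers") `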
That's right! run_locally() does just what I was expecting
Hi  SuccessfulKoala55
So, how can I get the ID of the requested project through the  resp  object? I tried with  resp["id"]  but it didn't work.
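For reference, what I am doing is roughly this (assuming  resp  comes from the APIClient; attribute access instead of dict indexing is my guess):
` from clearml.backend_api.session.client import APIClient

client = APIClient()
resp = client.projects.get_all(name="Examples")
# The entries seem to be objects rather than dicts, so maybe:
project_id = resp[0].id
print(project_id) `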
My guess is to manually read and parse the string that  clearml-agent list  returns, but I'm pretty sure there's a cleaner way to do it, isn't there?
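The cleaner route I was hoping for would be something along these lines (just a guess based on the APIClient):
` from clearml.backend_api.session.client import APIClient

client = APIClient()
# List the workers registered in the server instead of parsing the CLI output
for worker in client.workers.get_all():
    print(worker.id) `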
AgitatedDove14  I have the strong feeling it must be an agent issue, because when I place  PipelineDecorator.run_locally()  before calling the pipeline, everything works perfectly. See:
AgitatedDove14 Exactly, I've run into the same problem
I totally agree with the PipelineController/decorator part. Regarding the proposal for the component parameter, I also think it would be a good feature, although it might obscure the fact that there will be times when the pipeline will fail anyway because the step is intrinsically crucial, so it doesn't matter whether 'continue_pipeline_on_failure' is set to True or False. Anyway, I can't think of a better way to deal with that right now.
Great, thank you very much TimelyPenguin76
Hi  AgitatedDove14
Using  task.get_parameters  I get the parameters in a dictionary, but the values are still of type 'string'. The quickest solution I can think of is parsing them with the eval built-in. WDYT?
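Something like this is what I mean (using ast.literal_eval as a safer variant of eval; the task id is a placeholder):
` import ast
from clearml import Task

task = Task.get_task(task_id="<task id>")
parsed = {}
for key, value in task.get_parameters().items():
    try:
        # cast "1", "0.5", "{'a': 1}", ... back to their literal types
        parsed[key] = ast.literal_eval(value)
    except (ValueError, SyntaxError):
        parsed[key] = value  # leave plain strings untouched
print(parsed) `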
AgitatedDove14  I ended up with two pipelines being executed until they completed the workflow but duplicating each of their steps. You can check it here:
https://clearml.slack.com/files/U02A5DGPMPU/F02SR3G9RDK/image.png