Hi SmugTurtle78, I think you can set it up as follows (or something similar):

pipe.add_step(
    name="stage_train",
    parents=["stage_process"],
    base_task_project="examples",
    base_task_name="Pipeline step 3 train model",
    parameter_override={"General/dataset_task_id": "${stage_process.id}"},
)
Note that in parameter_override I take a task id from a previous step and insert it into the configuration/parameters of the current step. Is that what you're looking for?
Actually, not exactly. The parameter I want to pass is not an input parameter of the parent task; I would like to save it as an artifact or something like that.
I'm not sure this is a good solution for me, since I wanted to add it as a parameter...
For example:
In the dataset_creation utility I do this upload:
task.upload_artifact('id_of_running_creation', artifact_object=id_of_running)
In the Inference utility I do this upload:
task.upload_artifact('id_of_running_inference', artifact_object=id_of_running)
and I would like to use them as parameters in the next step of the pipeline run,
like this:
pipe.add_step(
    name='post_processing',
    ....,
    parameter_override={
        "General/id_of_running_creation": "${dataset_creation.artifacts.id_of_running_creation}",
        "General/id_of_running_inference": "${inference.artifacts.id_of_running_inference}",
    },
    ...
but it doesn't work, since the artifact is a string (I also tried a dataframe, but then I can't change it to a string before I execute the next step).
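A minimal sketch of one possible workaround, assuming the parent tasks' IDs are passed through parameter_override (e.g. "${dataset_creation.id}") and read back inside the post-processing task; the "General/creation_task_id" parameter name is hypothetical, while the artifact name matches the upload_artifact() call above:

from clearml import Task

# Inside the post_processing task
task = Task.current_task()
params = task.get_parameters()  # flat dict, e.g. {"General/creation_task_id": "<parent task id>", ...}

# Fetch the parent task and deserialize its artifact back into a string
creation_task = Task.get_task(task_id=params["General/creation_task_id"])
id_of_running_creation = creation_task.artifacts["id_of_running_creation"].get()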
Well, if you save it as an artifact, that artifact is accessible to other tasks and can be passed through the pipeline with the monitor_artifacts parameter of add_step():
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_step
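A minimal sketch of what that could look like, using the artifact names from the upload_artifact() calls above; the pipeline, project and base task names are placeholders:

from clearml import PipelineController

# Placeholder pipeline/project/task names, only to keep the sketch self-contained
pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")

pipe.add_step(
    name="dataset_creation",
    base_task_project="examples",
    base_task_name="dataset creation task",
    # copy this artifact onto the pipeline task so it is visible to later steps
    monitor_artifacts=["id_of_running_creation"],
)

pipe.add_step(
    name="inference",
    parents=["dataset_creation"],
    base_task_project="examples",
    base_task_name="inference task",
    monitor_artifacts=["id_of_running_inference"],
)

monitor_artifacts also accepts (source_name, target_name) pairs if the artifact should appear under a different name on the pipeline task.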