Hi @<1523701205467926528:profile|AgitatedDove14>. I got Task.get_task to work by using the name passed in pipe.add_step, but not with the task_name set in Task.init of the data_processing.py file. Is there a better way than just passing task_name via parameter_override? If not, can you help me understand why the pipeline has to override task_name with the add_step name?
main.py
```python
prefix='Args/'
pipe.add_step(
    name="process_dataset",
    base_task_project=proj...
```
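For context, here is a minimal sketch of forwarding the original task name to the step through parameter_override. It assumes data_processing.py exposes an argparse argument named task_name (a hypothetical parameter) and that the pipeline is built with PipelineController:

```python
from clearml import PipelineController

pipe = PipelineController(name="Pipeline Demo", project="MyProject", version="1.0")

pipe.add_step(
    name="process_dataset",
    base_task_project="MyProject",
    base_task_name="data_processing",
    # The cloned step task is renamed to the add_step name ("process_dataset"),
    # so pass the original name explicitly; "Args/task_name" assumes an
    # argparse argument called task_name in data_processing.py.
    parameter_override={"Args/task_name": "data_processing"},
)
```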
@<1523701205467926528:profile|AgitatedDove14> Thank you, I'll try to do that!
Hey there @<1523701205467926528:profile|AgitatedDove14>. Essentially, I have a task called "data_processing" that I run in my pipeline. I just want to access old artifacts (a dataframe) of my "data_processing" task inside my current "data_processing" task, append new rows to it on the current run, and save the updated dataframe. This was not an issue when I ran the task alone, but when I run it as a pipeline, it seems it's not finding old runs of the task.
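A hedged sketch of that append flow inside data_processing.py, assuming the artifact is named "dataframe" and a recent clearml version where Task.get_task accepts task_filter:

```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="MyProject", task_name="data_processing")

new_rows = pd.DataFrame({"value": [1, 2, 3]})  # placeholder for this run's data

# Filter on completed runs so the lookup doesn't match the current
# (in-progress) task, which has the same project and name.
previous = Task.get_task(
    project_name="MyProject",
    task_name="data_processing",
    task_filter={"status": ["completed"]},
)
if previous is not None and "dataframe" in previous.artifacts:
    old_df = previous.artifacts["dataframe"].get()
    new_rows = pd.concat([old_df, new_rows], ignore_index=True)

task.upload_artifact(name="dataframe", artifact_object=new_rows)
```

When run standalone, the name matches and the lookup succeeds; when run as a pipeline step, the clone is renamed and relocated, which would explain why the same call comes back empty.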
From what I understand while looking at the ClearML UI, pipelines don't run directly under projects but under a .pipelines subproject,
so it would look like MyProject/.pipelines/Pipeline Demo
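If that layout holds, a lookup that targets the pipeline's subproject and the add_step name should find the step runs. A sketch, assuming nested project paths can be addressed with "/" in project_name:

```python
from clearml import Task

# Pipeline step clones live under "<project>/.pipelines/<pipeline name>"
# and carry the add_step name, not the Task.init name.
previous_step = Task.get_task(
    project_name="MyProject/.pipelines/Pipeline Demo",
    task_name="process_dataset",
)
```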