Hi everyone! Is there a way I can get Task.get_task() to work without using task_id when running tasks as a pipeline? I'm trying to access old pipeline runs/artifacts from my current pipeline but…


Hi @<1523701205467926528:profile|AgitatedDove14>. I got Task.get_task to work by using the name passed in pipe.add_step, but not with the task_name set in Task.init of the data_processing.py file. I want to understand if there's a better way than just passing task_name to parameter_override. If not, can you help me understand why the pipeline has to override task_name with the add_step name?

main.py

prefix='Args/'
pipe.add_step(
    name="process_dataset",
    base_task_project=project_name,
    base_task_name="data_processing",
    parameter_override={},  # parameters removed for code clarity
)

src/data_processing/data_processing.py

task_name='data_processing'
task = Task.init(
    project_name=project_name,
    task_name=task_name,
    task_type='data_processing',
)

# Access the previous successful run's artifacts.
# This doesn't work when running as a pipeline, but works when run independently:
previous_task = Task.get_task(
    project_name=project_name,
    task_name=task_name,
    task_filter={'status': ['completed']})

# This works when running in the pipeline:
previous_task = Task.get_task(
    project_name=project_name,
    task_name="process_dataset",  # use the "process_dataset" name from pipe.add_step
    task_filter={'status': ['completed']})
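
One possible workaround, sketched under the assumption that the pipeline controller renames the step's task to the add_step name: have the controller pass the lookup name in explicitly via parameter_override, and fall back to the local Task.init name when the script runs standalone. The parameter name "previous_task_name" below is hypothetical, not a ClearML convention.

```python
def resolve_previous_task_name(override_value, default_name):
    """Prefer a name injected by the pipeline (via parameter_override);
    fall back to the step's own Task.init name when running standalone."""
    return override_value or default_name


def pipeline_step_lookup():
    # Not invoked here; requires a configured ClearML environment.
    from clearml import Task

    task = Task.init(
        project_name="my_project",          # hypothetical project name
        task_name="data_processing",
        task_type="data_processing",
    )

    # The controller could override this with, e.g.:
    #   parameter_override={"Args/previous_task_name": "process_dataset"}
    params = {"previous_task_name": ""}
    task.connect(params)

    lookup_name = resolve_previous_task_name(
        params["previous_task_name"], "data_processing")

    # Same lookup works in both modes, since the name is resolved above.
    return Task.get_task(
        project_name="my_project",
        task_name=lookup_name,
        task_filter={"status": ["completed"]})
```

This keeps a single Task.get_task call in the step instead of hard-coding the add_step name inside data_processing.py.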
  
  
Posted 6 months ago