
Hi everyone!

I have a question about the pipeline controller: I would like to build an ML pipeline similar to the one at https://allegro.ai/clearml/docs/docs/examples/pipeline/pipeline_controller.html with the difference that, instead of manually running Python to execute the script for each step, I'd like to create every task from a single script.

So what I'm doing is: before instantiating the pipeline controller, I call Task.create() once for each pipeline step, giving as input the related script in a git repository. After that I can execute the pipeline without problems, since the tasks are now present on the ClearML server. The problem is that in the ClearML UI, each time I run this pipeline script I end up with both a 'draft' task and a 'completed' task for every step. Is there a way to generate only the 'completed' tasks, to keep the UI cleaner and less confusing? I'm new to ClearML, so please tell me if I'm using the wrong approach or if there is an easier way.
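
Roughly, this is what I mean (just a sketch; the repo URL, script paths, project and queue names are placeholders, and parameter names may differ between ClearML versions):

```python
from clearml import Task
from clearml.automation.controller import PipelineController

# Create a (draft) task for each pipeline step from scripts in a git repo.
# Repo URL, script paths and project name below are placeholders.
step1 = Task.create(
    project_name='my_project',
    task_name='step 1 - prepare data',
    repo='https://github.com/me/my_repo.git',
    branch='master',
    script='steps/step1_prepare_data.py',
)
step2 = Task.create(
    project_name='my_project',
    task_name='step 2 - train model',
    repo='https://github.com/me/my_repo.git',
    branch='master',
    script='steps/step2_train.py',
)

# Build the controller on top of the tasks that now exist on the server
pipe = PipelineController(default_execution_queue='default', add_pipeline_tags=False)
pipe.add_step(name='prepare_data',
              base_task_project='my_project',
              base_task_name='step 1 - prepare data')
pipe.add_step(name='train',
              parents=['prepare_data'],
              base_task_project='my_project',
              base_task_name='step 2 - train model')
pipe.start()
pipe.wait()
pipe.stop()
```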

Thank you 🙂

  
  
Posted 3 years ago

Answers 4


Hi AgitatedDove14
I implemented the pipeline manually as you suggested. I also used task.wait_for_status() after each task.enqueue() so I was able to implement a full pipeline in one script. It seems to be working correctly. Thank you!
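
For reference, the manual version looks roughly like this (a sketch; the repo, scripts, project and queue names are placeholders):

```python
from clearml import Task

QUEUE_NAME = 'default'  # placeholder execution queue
REPO = 'https://github.com/me/my_repo.git'  # placeholder repository

# One (task name, script path) pair per pipeline step (placeholder values)
STEPS = [
    ('step 1 - prepare data', 'steps/step1_prepare_data.py'),
    ('step 2 - train model', 'steps/step2_train.py'),
]

for task_name, script in STEPS:
    # Create the step task from the script in the git repository
    task = Task.create(
        project_name='my_project',
        task_name=task_name,
        repo=REPO,
        branch='master',
        script=script,
    )
    # Enqueue it for an agent to execute, then block until it finishes
    Task.enqueue(task, queue_name=QUEUE_NAME)
    task.wait_for_status(
        status=(Task.TaskStatusEnum.completed,),
        raise_on_status=(Task.TaskStatusEnum.failed,),
    )
```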

  
  
Posted 3 years ago

Hi LovelyHamster1
That is a good point. Since the Pipeline kind of assumes the tasks are already in the system, it clones them (leaving you with the original draft Task).
I think we should add a flag to the pipeline so that if the Task is in draft mode it will use it directly (instead of cloning it). Since it seems your pipeline is quite straightforward, I'm not sure you actually need the pipeline controller class; you can perform the entire thing manually, see the example here: https://github.com/allegroai/clearml/blob/master/examples/automation/task_piping_example.py
WDYT?
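
The core pattern in that example is roughly: take a template task, clone it, override what you need, and enqueue the clone. A minimal sketch (project, task names, parameter key and queue below are placeholders):

```python
from clearml import Task

# Use an existing task as a template (placeholder project / task names)
template = Task.get_task(project_name='my_project', task_name='step 2 - train model')

# Clone the template, tweak a parameter, and send the clone to a queue
cloned = Task.clone(source_task=template, name='step 2 - train model (clone)')
cloned.set_parameters({'General/learning_rate': 0.001})  # placeholder parameter
Task.enqueue(cloned, queue_name='default')
```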

  
  
Posted 3 years ago

If it helps to understand, this is what I'm doing:

  
  
Posted 3 years ago

LovelyHamster1 NICE! 👍

  
  
Posted 3 years ago