How can I run a new version of a pipeline, wait for it to finish, and then check its completion/failure status? I want to kick off the pipeline and then check completion


I "think" I have a clue on the issue that is lost here in the translation:
Specifically to me it all comes down to the definition of "pipeline"
From the ClearML perspective:
Manual Task - code that is executed by the user (or any other mechanism outside of the agent)
Remote Task - code that is executed by the Agent

Pipeline is a Task
Pipeline can be "manual task" but also "remote task"
Pipeline generates "remote tasks"
Task status (and therefore pipeline status, since a pipeline is also a Task) can be: draft, aborted, completed, failed
A Task can have multiple tags, which means a pipeline can also have multiple tags (see the sketch right after this list)
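
In practice this means the pipeline can be waited on and inspected like any other Task. A minimal sketch, assuming you already have the pipeline Task ID at hand (the ID below is a placeholder):

from clearml import Task

# the pipeline controller is just a Task, so fetch it like any other Task
pipeline_task = Task.get_task(task_id='<pipeline-task-id>')  # placeholder ID

# block until the pipeline reaches a final state (raises if it fails)
pipeline_task.wait_for_status()

pipeline_task.reload()  # refresh the local copy so status/tags are up to date
print('final status:', pipeline_task.get_status())
print('tags:', pipeline_task.get_tags())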

Assume you have a GitHub Action that runs compare_models.py on every trigger.
Inside the compare_models.py file you have the following code:

from clearml import Task, TaskTypes
from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(execution_queue="1xgpu", return_values=['accuracy'], cache=True, task_type=TaskTypes.data_processing)
def test_model(model_id: str):
    # executed remotely by an agent listening on the "1xgpu" queue
    print('compare model', model_id)
    return 0.42


@PipelineDecorator.pipeline(pipeline_execution_queue=None, name='custom pipeline logic', project='examples', version='0.0.5')
def executing_pipeline(model_ids, mock_parameter='mock'):
    # launch one component per model (these return immediately with lazy results)
    accs = [test_model(i) for i in model_ids]
    # casting the results waits for the components to actually complete
    accs = [float(a) for a in accs]
    print("best model is", max(accs))
    if max(accs) > 0.5:
        # the pipeline is itself a Task, so we can tag it
        Task.current_task().add_tags(["passed"])


if __name__ == '__main__':
    PipelineDecorator.set_default_execution_queue('default')
    executing_pipeline(model_ids=['aa', 'bb', 'cc'])

This means that every time the GitHub Action is triggered, a new pipeline run is created (i.e. if nothing changes, it is a new instance of the same pipeline version). The new pipeline run (including the code itself) is logged in ClearML, which means you can also re-execute it manually later. If the accuracy is above the threshold, we mark the pipeline (i.e. tag it) as "passed".
Notice that the test_model function will be executed by the agents listening on the "1xgpu" queue, not on the GitHub Action machine.

With that in mind, what would you change to fit your scenario?
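
If you also want the GitHub Action itself to fail when the pipeline does not pass, a follow-up step along these lines should work. This is an untested sketch: the project/task names and the "passed" tag mirror the example above, and the lookup may need adjusting depending on how your server organizes pipeline tasks:

import sys
from clearml import Task

# fetch the latest pipeline controller Task by project/name (names taken from the example above)
pipeline_task = Task.get_task(project_name='examples', task_name='custom pipeline logic')
pipeline_task.reload()  # make sure status and tags are up to date

print('pipeline status:', pipeline_task.get_status())
print('pipeline tags:', pipeline_task.get_tags())

# fail the CI step if the pipeline was not tagged as passed
if 'passed' not in (pipeline_task.get_tags() or []):
    sys.exit(1)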

  
  