Answered
Given I Want To Run A Task In A Pipeline Using A Base Task Id. One Of My Steps Just Finds The Latest Results To Use. I Want The Task To Output The Id Of The Results And The Next Step To Use It. How Would I Go About Doing This? Is There A Way To Pass Just

Given that I want to run a task in a pipeline using a base task id: one of my steps just finds the latest results to use. I want that task to output the id of the results and the next step to use it. How would I go about doing this? Is there a way to pass just a string between tasks in a pipeline?

Posted 2 years ago

Answers 4


Actually, not exactly. The parameter I want to pass is not an input parameter of the parent task; I would like to save it as an artifact or something like that.

Posted 2 years ago

Well, if you save it as an artifact, that artifact is accessible by other tasks and can be passed along the pipeline with the monitor_artifacts parameter in add_step():
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_step
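For reference, a minimal sketch of what that could look like (the project and base task names here are placeholders, and the artifact name is the one used later in this thread):

from clearml import PipelineController

pipe = PipelineController(name="pipeline demo", project="examples", version="1.0.0")

pipe.add_step(
    name="dataset_creation",
    base_task_project="examples",        # placeholder project name
    base_task_name="dataset creation",   # placeholder base task name
    # log this step's id_of_running_creation artifact back on the pipeline task,
    # so later steps (and the pipeline UI) can access it
    monitor_artifacts=["id_of_running_creation"],
)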

Posted 2 years ago

Not sure if this is a good solution for me, since I wanted to add it as a parameter...
For example, in the dataset_creation utility I do this upload:

task.upload_artifact('id_of_running_creation', artifact_object=id_of_running)

In the inference utility I do this upload:

task.upload_artifact('id_of_running_inference', artifact_object=id_of_running)

and I would like to use them as parameters for the next step of the pipeline run, like this:

pipe.add_step(
    name='post_processing',
    ....,
    parameter_override={
        "General/id_of_running_creation": "${dataset_creation.artifacts.id_of_running_creation}",
        "General/id_of_running_inference": "${inference.artifacts.id_of_running_inference}",
    },
    ...
)

but it doesn't work, since the artifact is a string (I also tried with a dataframe, but then I can't change it to a string before I execute the next step).
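One possible workaround (not suggested in this thread, just a sketch): since parameter_override values are plain strings, the id could also be exposed as a task parameter on the producing step instead of an artifact, and then referenced with ClearML's ${<step_name>.parameters.<Section/param>} substitution, assuming your SDK version supports it:

from clearml import Task

# Inside the dataset_creation step: expose the id as a plain-string task parameter.
# id_of_running is whatever id this step produced, as in the snippets above.
task = Task.current_task()
task.set_parameter("General/id_of_running_creation", id_of_running)

The post_processing step could then be added with parameter_override={"General/id_of_running_creation": "${dataset_creation.parameters.General/id_of_running_creation}"}.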

Posted 2 years ago

Hi SmugTurtle78, I think you can set it up as follows (or something similar):

pipe.add_step(
    name="stage_train",
    parents=["stage_process"],
    base_task_project="examples",
    base_task_name="Pipeline step 3 train model",
    parameter_override={"General/dataset_task_id": "${stage_process.id}"},
)

Note that in parameter_override I take a task id from a previous step and insert it into the configuration/parameters of the current step. Is that what you're looking for?

Posted 2 years ago