Hello, does anybody here have much experience in creating sub-tasks or sub-pipelines? I'm not sure the concept is particularly well established, but the docs mention:


the SDK is unable to see each of the nodes?

Exactly! I mean, I love the idea of "nested" components, but implementation-wise this is not trivial, and it would also hurt the ability to cache individual components. The workaround is to have all the "business logic" in the pipeline function itself; routing data between components is basically "free". The data does not actually go through the pipeline logic, it only passes references (unless the pipeline logic actually tries to access the data object, in which case it will be downloaded). Make sense?
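For what it's worth, here is a minimal sketch of that workaround using ClearML's PipelineDecorator. The component and pipeline names, the CSV example, and the queue-free local run are all illustrative, not taken from the thread:

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["data"], cache=True)
def load_data(source_url):
    # each component runs (and is cached) as its own standalone Task
    import pandas as pd
    return pd.read_csv(source_url)

@PipelineDecorator.component(return_values=["row_count"], cache=True)
def count_rows(data):
    return len(data)

@PipelineDecorator.pipeline(name="example_pipeline", project="examples", version="0.1")
def run_pipeline(source_url):
    # all the "business logic" lives here; passing `data` onward only
    # hands over a reference, nothing is downloaded at this point
    data = load_data(source_url)
    row_count = count_rows(data)
    # accessing the value inside the pipeline logic triggers the download
    print(row_count)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug mode; drop this line to run remotely
    run_pipeline("https://example.com/data.csv")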

That's exactly what I'm trying to do, but perhaps in the wrong way. In the above snippet, for example, I was trying to initialise both....

So in order to do that you have to have an individual Pipeline B, i.e. an actual standalone pipeline.
Then you can use the pipeline's Task ID and clone/enqueue it like any other Task, which means the pipeline logic will do something like:

from clearml import Task

# clone the standalone Pipeline B controller task
pipeline_task = Task.clone(source_task="pipeline_b_task_id")
# enqueue the clone like any other Task
Task.enqueue(task=pipeline_task, queue_name="services")
# wait until completed
pipeline_task.wait_for_status()
# make sure we have all the latest data
pipeline_task.reload()
# do something
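Note that, if I'm reading the SDK defaults right, wait_for_status() polls until the task reaches completed, stopped, or closed, and raises if the task fails, so the pipeline logic above simply blocks until Pipeline B is done before reloading its data.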
  
  