Hmm interesting, so like a callback?!
Something like the pipe-step-level callbacks in https://github.com/allegroai/clearml/blob/bca9a6de3095f411ae5b766d00967535a13e8401/examples/pipeline/pipeline_from_tasks.py#L54-L55? I guess that mechanism could serve. Where do these callbacks run? In the instantiating process? If so, that would work (since the callback function can be any code I wish, right?)
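Something along these lines is what I have in mind, loosely following the linked `pipeline_from_tasks.py` example (a sketch only; the project/task names and queue setup are placeholders, and I'm assuming the `pre_execute_callback` / `post_execute_callback` parameters behave as in that example):

```python
# Sketch of step-level callbacks, modeled on the linked pipeline_from_tasks.py
# example. Both callbacks run in the controller (instantiating) process,
# not inside the remotely executed step.

def pre_execute(pipeline, node, param_override):
    # Called in the controller process before the step is launched;
    # returning False would skip the step.
    print(f"about to launch {node.name}")
    return True

def post_execute(pipeline, node):
    # Called in the controller process after the step completes;
    # arbitrary code is fine here, e.g. dispatching further jobs.
    print(f"finished {node.name}")

if __name__ == "__main__":
    # Requires a configured ClearML setup; names below are placeholders.
    from clearml import PipelineController

    pipe = PipelineController(name="callback demo", project="examples", version="1.0")
    pipe.add_step(
        name="stage_process",
        base_task_project="examples",
        base_task_name="some task",  # placeholder base task
        pre_execute_callback=pre_execute,
        post_execute_callback=post_execute,
    )
    pipe.start()  # callbacks fire in this process as steps are scheduled
```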
I might want to dispatch other jobs from within the same process.
This is actually something you should not do with pipelines built from decorators.
Is that also the case with https://github.com/allegroai/clearml/blob/bca9a6de3095f411ae5b766d00967535a13e8401/examples/pipeline/pipeline_from_functions.py? i.e., that `main` should not go on to create another pipeline?
Pipeline dispatch might be part of a CI/CD task.
Then it should be: clone the pipeline -> enqueue it into the "services" queue -> wait for the result?
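For a CI/CD task, that flow might look roughly like this (a sketch under my assumptions; the project, task, and queue names are made up, and I'm assuming `Task.clone` / `Task.enqueue` / `wait_for_status` are the right calls for this):

```python
# Sketch: clone an existing pipeline-controller task, enqueue the clone,
# and block until it finishes. Names are placeholders.

def ci_run_name(base_name, build_id):
    # Tiny helper to tag the cloned run with the CI build that dispatched it.
    return f"{base_name} (ci #{build_id})"

def dispatch_and_wait(project, pipeline_name, build_id, queue="services"):
    # Requires a configured ClearML setup.
    from clearml import Task

    template = Task.get_task(project_name=project, task_name=pipeline_name)
    run = Task.clone(source_task=template, name=ci_run_name(pipeline_name, build_id))
    Task.enqueue(run, queue_name=queue)  # controller executes in the services queue
    run.wait_for_status()                # block the CI job until the run finishes
    return run.get_status()
```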
I suppose that if cloning and enqueuing the pipeline results in a run with the latest code / container images, then that would serve the purpose I had in mind.
I'm still unclear on why the script needs to end execution after dispatch. It doesn't seem like the natural choice there, but it sounds like there are ways of accomplishing the use cases I've thought up.