I guess I'm just a bit confused about what the correct mental model is here. If I'm interpreting this correctly, I need to keep what are essentially "template tasks" in my Experiments section, whose sole purpose is to be copied for use in the Pipeline? When setting up my Pipeline, I can't say "here are some brand-new tasks, please run them"; I have to say "please run existing task A with these modifications, then task B with these modifications, then task C with these modifications"? And when the pipeline runs, it will automagically set the repo/branch on those cloned tasks to the correct values? Can I manually set these somewhere to be certain?
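Concretely, the setup I'm imagining looks something like this (a sketch only; the project/task names and the parameter path are placeholders, not our real ones, and the clearml import sits inside the function just to keep the snippet self-contained):

```python
def build_pipeline():
    # Sketch of the "template task + modifications" model as I understand it.
    from clearml import PipelineController

    pipe = PipelineController(name="my-pipeline", project="examples", version="0.1")

    # "run existing task A with these modifications":
    pipe.add_step(
        name="step_a",
        base_task_project="examples",            # project holding the template task
        base_task_name="template task A",        # template that gets cloned, not run in place
        parameter_override={"General/lr": 0.01}, # the per-run modifications
    )
    pipe.start()
```

Is that the intended shape, i.e. every step references an existing task that gets cloned with overrides applied?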
I looked into the decorator and `add_function_step` options, but they seemed to require modifying our code to move all of the import statements into the beginning of the wrapped functions to get the namespaces initialized, which is not something we want to do to our existing scripts. I'm not sure a wrapped setup function will work either: import statements in the outer function won't propagate names to the functions it calls. (To be fair, I have not actually tried the decorator. I tried to get `add_function_step` to work for a while, ran into the namespace issue above, and switched back to using tasks.)
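The namespace issue I mean is plain Python scoping, no ClearML involved: an import inside the wrapped/outer function only binds the name in that function's local scope, so helpers it calls never see it.

```python
def helper():
    # 'math' is looked up in helper's module globals, not in outer's locals
    return math.sqrt(4)

def outer():
    import math  # bound only in outer()'s local scope
    return helper()

try:
    outer()
except NameError as e:
    print(e)  # name 'math' is not defined
```

So putting the imports in a single setup function wouldn't rescue our existing multi-function scripts; every function would still need its imports at module level.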
As for the node, the confusing bit is that this text from the docs seems to suggest the node will be fully initialized before the callback:
pre_execute_callback (Optional[Callable[[PipelineController, PipelineController.Node, dict], bool]]) – Callback function, called when the step (Task) is created and before it is sent for execution. Allows a user to modify the Task before launch. Use node.job to access the ClearmlJob object, or node.job.task to directly access the Task object. parameters are the configuration arguments passed to the ClearmlJob.