it does! thanks! I thought I had to modify the scripts, but now I see that it does it with the parameter_override
MagnificentSeaurchin79
Do notice that the pipeline controller assumes you have an agent running
I missed that part! sorry
One question, what do I put in the task_id
in the step2 file?
because once it clones the task of step 1, the task_id changes, so it no longer points to the actual task that was run
FYI...I am able to run the three tasks by commenting out the task.execute_remotely()
lines in each file
Oh task_id is the Task ID of step 2.
Basically the idea is, you run your code once (let's call it debugging / programming). That run creates a task in the system, and the task stores the environment definition and the arguments used. Then you can clone that Task and launch it on another machine using the Agent (which basically sets up the environment based on the Task definition and runs your code with the new arguments). The Pipeline is basically doing that for you (i.e. cloning a task, changing its parameters, and enqueuing the task for execution).
This means the base_task_id parameter is the Task that the step will clone and enqueue for execution. Make sense?
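The clone / override / enqueue flow described above can be sketched in plain Python (a conceptual stand-in only, with no ClearML server involved; the task ID `abc123`, the parameter names, and the registry/queue dicts are all hypothetical illustrations, not the real ClearML API):

```python
import copy

def clone_task(task_registry, base_task_id, parameter_override):
    """Clone a stored task definition and apply the new parameters,
    leaving the original (base) task untouched -- this is conceptually
    what the pipeline does with base_task_id."""
    base = task_registry[base_task_id]
    clone = copy.deepcopy(base)
    clone["parameters"].update(parameter_override)
    return clone

# Stand-in for the server's task store; "abc123" is a made-up ID for
# the step-2 task created by the initial debug run.
tasks = {
    "abc123": {
        "name": "step2",
        "parameters": {"General/dataset_url": "s3://old", "General/epochs": 10},
    },
}

queue = []  # stands in for the execution queue the agent listens on

# Pipeline step: clone the base task with overridden parameters, enqueue it.
new_task = clone_task(tasks, "abc123", {"General/epochs": 50})
queue.append(new_task)

print(new_task["parameters"])          # the clone carries the new arguments
print(tasks["abc123"]["parameters"])   # the base task is unchanged
```

The point of the sketch is just that the clone gets a fresh identity and new arguments while the base task (the one whose ID you pass as base_task_id) stays as-is, which is why the task_id in the step-2 file keeps pointing at the original run.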