Hi @<1523701435869433856:profile|SmugDolphin23>, would that mean that a separate pre_callback() would have to be defined for every add_step, since every step has different configs? Sorry if there's something I'm missing; I'm still not very experienced with ClearML.
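(For illustration, one way to avoid defining a separate callback per step is a single factory whose closure carries each step's config, so every add_step call reuses the same function. This is a minimal sketch, not confirmed by the thread; the project/task names and configs are hypothetical, and it assumes the callback is passed via add_step's pre_execute_callback argument:

from clearml import PipelineController

def make_pre_callback(step_config):
    # the closure carries this step's config, so one factory covers all steps
    def pre_callback(pipeline, node, param_override):
        print(f"launching {node.name} with {step_config}")
        return True  # returning False would skip this step
    return pre_callback

pipe = PipelineController(name="example", project="examples", version="0.1")
for i, cfg in enumerate([{"General/lr": 0.1}, {"General/lr": 0.01}]):
    pipe.add_step(
        name=f"train_{i}",
        base_task_project="examples",   # hypothetical project
        base_task_name="train",         # hypothetical base task
        parameter_override=cfg,
        pre_execute_callback=make_pre_callback(cfg),
    )
)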
@<1523702000586330112:profile|FierceHamster54> That's probably a good idea, yeah. My question is: would PipelineDecorator still work if I have multiple iterations of certain steps? For example, if I call
for i in range(5):
    step_x(...)
And how would consolidating the results of all these step_x calls work?
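(In the decorator style, that loop is typically written directly in the pipeline function: each call to a @PipelineDecorator.component function becomes its own task, and consuming the returned values in the pipeline body is one way to consolidate them. A minimal sketch, assuming hypothetical project/step names:

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["result"])
def step_x(i):
    # runs as its own task for every call
    return i * 2

@PipelineDecorator.pipeline(name="loop_pipeline", project="examples", version="0.1")
def pipeline_logic():
    results = [step_x(i) for i in range(5)]
    # touching the values forces the steps to complete before aggregation
    print(sum(results))

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    pipeline_logic()
)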
Hi Jason, yes this can be done. Your pipeline code will look like this:
Execution of preprocessing task
for i in range(125):
    Execution of data splitting and inference task(s); each of the 125 tasks has the same base task but a different step name, e.g. name = "inference_task_" + str(i)
ids = ["${inference_task_" + str(i) + ".id}" for i in range(125)]
Execution of aggregation task with the ids passed in as part of parameter_override, e.g. "General/inference_ids": '[' + ','.join(ids) + ']'
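(Sketched concretely with PipelineController — the project and base task names below are placeholders, and the "${step_name.id}" placeholders in parameter_override are resolved by the pipeline at runtime:

from clearml import PipelineController

pipe = PipelineController(name="inference_pipeline", project="examples", version="0.1")

pipe.add_step(name="preprocess",
              base_task_project="examples", base_task_name="preprocessing")

for i in range(125):
    pipe.add_step(
        name="inference_task_" + str(i),   # same base task, different step name
        parents=["preprocess"],
        base_task_project="examples",
        base_task_name="inference",        # hypothetical base task
    )

ids = ["${inference_task_" + str(i) + ".id}" for i in range(125)]
pipe.add_step(
    name="aggregate",
    parents=["inference_task_" + str(i) for i in range(125)],
    base_task_project="examples",
    base_task_name="aggregation",          # hypothetical base task
    parameter_override={"General/inference_ids": "[" + ",".join(ids) + "]"},
)

pipe.start()
)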
I see, I see... I'll keep the decorator approach in mind. For these step configs, would it make sense for them to be in the form of, for example, ${pipeline.parameter} and ${step_1.id}? Or if not, what is the right way to reference other steps?
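(For reference, a rough sketch of how both forms appear in a controller pipeline — the step and parameter names here are hypothetical; ${pipeline.<param>} resolves a pipeline-level parameter and ${<step_name>.id} resolves a previous step's task id:

pipe.add_parameter(name="dataset_url", default="s3://bucket/data")  # hypothetical parameter

pipe.add_step(
    name="step_2",
    parents=["step_1"],
    base_task_project="examples",
    base_task_name="some_task",            # hypothetical base task
    parameter_override={
        "General/dataset_url": "${pipeline.dataset_url}",  # pipeline-level parameter
        "General/upstream_id": "${step_1.id}",             # id of a previous step
    },
)
)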