Hi everyone!
I have a couple of questions regarding the Agent and the worker containers it spawns.
I have a two-step pipeline:
from clearml import PipelineController

# project/name are placeholders; the actual inputs to the first step are omitted
pipe = PipelineController(name="prediction", project="...", version="1.0")

pipe.add_function_step(
    name="heavy_compute", function=heavy_compute,
    function_kwargs=..., function_return=["results"],
    repo=".", packages="requirements.txt",
)
pipe.add_function_step(
    name="predict", function=predict,
    # feed the first step's output into the second step
    function_kwargs=dict(results="${heavy_compute.results}"),
    function_return=["preds"],
    repo=".", packages="requirements.txt",
)
pipe.start(queue="prediction-queue")
I have a ClearML server and an Agent listening to the prediction-queue, both running on the same machine (they share CPU and RAM; no GPU is needed).
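For reference, the agent is launched in docker mode, roughly like this:

clearml-agent daemon --queue prediction-queue --docker --detached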
My questions are:
- How do I get the most out of such an Agent configuration? Should I set up 2-3 Agents with dedicated CPU cores, all listening to the shared queue? (See the first sketch after this list.)
- Is there a way to distribute the pipeline steps between agents, like a conveyor? (Is it worth it? See the second sketch below.)
- How can I optimize the containers the agents spawn to execute the pipeline steps? Does the agent run a separate container for every step of the pipeline? (See the third sketch below.)
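To make the first question concrete, here is what I have in mind (the taskset core ranges are just an example of OS-level pinning, not something from the ClearML docs): several agents pinned to dedicated cores, all pulling from the shared queue:

# hypothetical: three agents, each pinned to two dedicated cores via Linux taskset
taskset -c 0,1 clearml-agent daemon --queue prediction-queue --docker --detached
taskset -c 2,3 clearml-agent daemon --queue prediction-queue --docker --detached
taskset -c 4,5 clearml-agent daemon --queue prediction-queue --docker --detached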
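For the second question, I found the execution_queue argument of add_function_step. If I read the docs right, each step can target its own queue with a dedicated agent behind it, so the steps would flow like a conveyor (the queue names here are made up):

# sketch: route each step to its own queue, each served by its own agent
pipe.add_function_step(
    name="heavy_compute", function=heavy_compute,
    function_kwargs=..., function_return=["results"],
    repo=".", packages="requirements.txt",
    execution_queue="heavy-queue",   # agent A listens here
)
pipe.add_function_step(
    name="predict", function=predict,
    function_kwargs=dict(results="${heavy_compute.results}"),
    function_return=["preds"],
    repo=".", packages="requirements.txt",
    execution_queue="light-queue",   # agent B listens here
)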
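For the third question, my current understanding (please correct me if wrong) is that in docker mode the agent starts one container per step Task. The knobs I found for tuning that are a per-step docker image and step caching:

# sketch: slim base image per step, plus caching for steps whose code and inputs are unchanged
pipe.add_function_step(
    name="predict", function=predict,
    function_kwargs=dict(results="${heavy_compute.results}"),
    function_return=["preds"],
    repo=".", packages="requirements.txt",
    docker="python:3.10-slim",   # image the agent spawns for this step
    cache_executed_step=True,    # skip re-running if nothing changed
)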
Kind regards, Aleksei