Hi everyone!
I have a couple of questions regarding the Agent and the worker containers it spawns.
I have a 2-step pipeline:
pipe.add_function_step(
    name="heavy_compute", function=heavy_compute,
    function_kwargs=..., function_return=["results"],
    repo=".", packages="requirements.txt",
)
pipe.add_function_step(
    name="predict", function=predict,
    # reference step one's returned "results" artifact by name
    function_kwargs=dict(results="${heavy_compute.results}"),
    function_return=["preds"],
    repo=".", packages="requirements.txt",
)
pipe.start(queue="prediction-queue")
And I have a ClearML Server and an Agent listening to the prediction-queue, both running on the same machine (they share CPU and RAM; no GPU needed).
My questions are:
Hi ImpressionableElk3, I don't think there is currently a capability to assign/dedicate cores to the agent; they basically run on whatever is available.
If you have some mechanism to do it using Docker, then it can be used by the agent.
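For example, something along these lines should do it, I think. This is a minimal sketch that assumes the agent serving prediction-queue runs in Docker mode (clearml-agent daemon --docker) and a recent clearml version where add_function_step accepts the docker / docker_args parameters; the image name, project name, and core numbers are just placeholders:

from clearml import PipelineController

def heavy_compute(n: int = 10):
    # placeholder body standing in for the real heavy_compute
    return sum(i * i for i in range(n))

pipe = PipelineController(name="prediction-pipeline", project="examples", version="1.0.0")

pipe.add_function_step(
    name="heavy_compute",
    function=heavy_compute,
    function_kwargs=dict(n=10),
    function_return=["results"],
    docker="python:3.10-slim",                     # placeholder base image
    docker_args="--cpuset-cpus=0-5 --memory=16g",  # plain docker run flags: pin cores 0-5, cap RAM at 16g
)

pipe.start(queue="prediction-queue")

If I remember correctly, you can also set the same flags agent-wide through agent.default_docker.arguments in clearml.conf instead of per step.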