Hi, I found the problem using the example Martin gave. Apparently you cannot use `pipe.start_locally()` at all when cloning the task to run everything remotely (I thought the agent would be treated as local when I sent it to a queue). It does work with the combination of `pipe.set_default_execution_queue('agent')` and `pipe.start(queue='agent2(EC2)')`. However, do I really need two clearml-agents for full automation? As far as I know, pointing both of the calls above at the same queue just causes an infinite queue, since the controller occupies the only worker while its steps wait in that same queue. Is there no way to use a single worker for everything, the way `start_locally(run_pipeline_steps_locally=True)` does? For example, I initially thought that if I used `Task.enqueue(task=clone_task.id, queue_name='agent2(EC2)')` (to clone and enqueue the pipeline) together with `start_locally(run_pipeline_steps_locally=True)` (in the pipeline file), ClearML would treat agent2(EC2) as local instead.