Yeah, makes sense. We actually thought the "best practice" would be to launch the "actual code" (as opposed to the pipeline controller) from agents. But obviously we were wrong, or at least it doesn't account for the fact that a lot of the time code is being written for debugging. So yeah, that's where we're at ATM
Oki doke 🙂 I'll see what the great powers of beyond (AKA, R&D folks) will have to say about that!
I was trying out the pipeline controller for the first time, and it felt like a bit of a burden that, just to try it out, I had to launch an agent
WackyRabbit7 pipeline.start_locally() should do the trick, I think.
demo code:
```python
from clearml.automation import PipelineController

# name/project are just placeholders for the demo
p = PipelineController(name="pipeline demo", project="examples", version="1.0.0")
p.start_locally()
```
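By the way, if you also want the steps themselves to run on your machine (as subprocesses) instead of being enqueued, I believe start_locally() takes a run_pipeline_steps_locally flag - something like:
```python
# assuming run_pipeline_steps_locally is available in your clearml version
p.start_locally(run_pipeline_steps_locally=True)
```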
Is that what you had in mind?
If this includes scheduling through pipelines, in my opinion there should be an option to execute a pipeline without an agent. Sometimes for development I just want to execute a pipeline on my local machine just as I would a task...
First of all I wasn't aware that was an option - but I think it's preferable to be able to do it through the command line. Because I'm developing the pipeline to be executed remotely, but for debugging I run it locally.
Using what you showed, I can obviously write it in, delete it once the pipeline is ready, and write it back when I'm debugging or adding features - but DX-wise I think it would be nicer to be able to trigger this functionality through the command line.
Yeah I totally get what you're saying. Basically you want the same code to run locally or remotely, and something external would control whether it runs locally or enqueued to a worker. Am I right?
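Something like this, I'm guessing (just a rough sketch - the --local flag, queue name and pipeline name/project are all made up for the example):
```python
import argparse
from clearml.automation import PipelineController

parser = argparse.ArgumentParser()
parser.add_argument("--local", action="store_true",
                    help="debug run: execute the pipeline controller (and its steps) on this machine")
args = parser.parse_args()

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
# ... pipe.add_step(...) / pipe.add_function_step(...) go here ...

if args.local:
    # run the controller logic locally; steps run as local subprocesses too
    pipe.start_locally(run_pipeline_steps_locally=True)
else:
    # enqueue the controller so an agent picks it up and runs it remotely
    pipe.start(queue="services")
```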