I'm checking it out today and will see if I can put up something
I can get the result now, but failed with this:
Which client should I import for Client.queues?
from clearml.backend_api.session.client import APIClient

client = APIClient()
result = client.queues.get_next_task(queue='queue_ID_here')
Seems to work for me (latest RC 1.1.5rc2)
I'm thinking of rolling out multiple experiments at once
Well, it should work. Make sure the Task "holds" all the information needed (under the Execution tab): repo / uncommitted changes / Python packages, etc.
Then configure your agent (choose pip/conda/poetry as the package manager), and spin it up (by default in venv/conda mode, or in docker mode)
Should work 🙂
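For reference, spinning one up is a single command (assuming a queue named "default"; the package manager is picked in clearml.conf via agent.package_manager.type):

clearml-agent daemon --queue default
clearml-agent daemon --queue default --docker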
I’ll try it tomorrow and let you know if there is anything wrong
Oh, there's one line missing from the above code
I tried from clearml.backend_api.session import client, no luck
So, is there any tutorial on this topic?
Dude, we just invented it 🙂
Any chance you feel like writing something in a GitHub issue, so other users know how to do this?
Guess I’ll need to implement job scheduling myself
You have a scheduler: it will pull jobs from the queue in order, then run them one after the other (one at a time)
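If you want to reproduce that behavior yourself, here's a minimal sketch extending the APIClient snippet above. The .entry / .task response fields are assumptions (check the actual response object), and run_task is a hypothetical helper you'd need to fill in:

from clearml.backend_api.session.client import APIClient
import time

client = APIClient()
while True:
    # pull the next task from the queue (FIFO order)
    result = client.queues.get_next_task(queue='queue_ID_here')
    entry = getattr(result, 'entry', None)  # assumed field on the response
    if entry is None:
        time.sleep(30)  # queue is empty, poll again in a bit
        continue
    run_task(entry.task)  # hypothetical helper: execute the task by id, blocking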
Yeah the ultimate goal I'm trying to achieve is to flexibly running tasks for example before running, could have a claim saying how many resources I can and the agent will run as soon as it find there are enough resources
Check out Task.execute_remotely()
You can put it anywhere in your code; when execution gets to it, if you are running without an agent, it will stop the process and re-enqueue the task to be executed remotely. On the remote machine the call itself becomes a no-op.
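For example (a minimal sketch, the project and queue names are just placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='remote execution demo')

# Running locally without an agent: this call stops the process here
# and enqueues the task. On the agent's machine the same call is a no-op.
task.execute_remotely(queue_name='default', exit_process=True)

# only reached on the remote machine
print('running on the remote machine')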
I can add it as a comment on the GitHub issue
Yes please do 🙂
What can I do to help extend it?
How about a CLI tool, like what we have with "clearml-task"?
This is so awesome
Thank you ! 😊
Guess my best chance is to check out the agent source code, right?
Do you think the local agent will be supported someday in the future?
We can take this code sample and extend it. Can't see any harm in that.
It will make it very easy to run "sweeps" without any "real agent" installed.
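For example, a tiny sweep could be just cloning a template task with different parameters and enqueuing the clones (a sketch only; the task id, parameter name, and queue are placeholders):

from clearml import Task

base = Task.get_task(task_id='TEMPLATE_TASK_ID')  # a task that ran once and holds the repo/env info
for lr in [0.1, 0.01, 0.001]:
    cloned = Task.clone(source_task=base, name='sweep lr={}'.format(lr))
    cloned.set_parameters({'General/learning_rate': lr})  # parameter name is an assumption
    Task.enqueue(cloned, queue_name='default')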
You mean as multiple subprocesses? Sure, if you have the memory for it
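Something like this (a rough sketch; the script names are placeholders, and memory/concurrency limits are up to you):

import subprocess

# launch a few experiments in parallel, each in its own subprocess
scripts = ['exp_a.py', 'exp_b.py', 'exp_c.py']
procs = [subprocess.Popen(['python', s]) for s in scripts]
for p in procs:
    p.wait()  # wait for all of them to finish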
Or can I enable the agent in this kind of local mode?
You just built a local agent