Hi LazyTurkey38
What do you mean the git repo is not recognized? When execute_remotely returns, you should see on the task a reference to the git repo with the exact commit ID you have locally pulled. Do you see it under the Execution tab?
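(If you want to check it from code rather than the UI, something along these lines should do it; a minimal sketch, assuming the script info is exposed on task.data as in recent clearml versions, with '<task_id>' as a placeholder:)
```
from clearml import Task

# '<task_id>' is a placeholder for the ID of the task you sent with execute_remotely
task = Task.get_task(task_id='<task_id>')

# mirrors what the Execution tab shows
print(task.data.script.repository)   # the auto-detected git repo URL
print(task.data.script.branch)       # the branch name
print(task.data.script.version_num)  # the exact commit ID you had locally pulled
```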
I know this is not the default behavior so I’d be happy with having the option to override the repo when I call execute_remotely
yey 🙂 Notice that when the Task is executed by the agent, the call to execute_remotely is skipped, and so is the if statement I added (since running_locally will return False when the process is executed by the agent).
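(In other words, the guard ends up looking roughly like this; project/task names are placeholders, just a sketch:)
```
from clearml import Task

task = Task.init(project_name='examples', task_name='repo override')  # placeholder names

# when the agent re-runs the script this call is effectively a no-op, so it is safe to keep
task.execute_remotely(queue_name=None, clone=False, exit_process=False)

if Task.running_locally():
    # only the local run enters this block; on the agent running_locally()
    # returns False and the patching/enqueue code is skipped
    ...
```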
This is exactly what I was looking for. I thought once you call execute_remotely
the task is sent and it’s too late to change anything.
LazyTurkey38 , ohh I think you are correct 😞
it should be:
```
# patch the Task and actually send it for execution
if Task.running_locally():
    # this will verify all auto repo detection and python is done.
    task.close()
    # so that we can edit the task
    task.reset()
    # update the repo
    task.update_task(task_data={'script': {'branch': 'new_branch', 'repository': 'new_repo'}})
    # now to actually enqueue the Task
    Task.enqueue(task, queue_name='default')
```
wdyt?
LazyTurkey38
The last part makes sense. Not sure I get the "if clone" part: we are calling execute_remotely, so I’m assuming we do not need to clone ourselves, but send the current Task.
Other than that yes, it makes sense. (BTW, assuming you have upgraded the server to >=1.0 you can just do mark_stopped, no need to reset.)
Fixed it by adding this code block. Makes sense.
```
if clone:
    task = Task.clone(self)
else:
    task = self
    # check if the server supports enqueueing aborted/stopped Tasks
    if Session.check_min_api_server_version('2.13'):
        self.mark_stopped(force=True)
    else:
        self.reset()
```
AgitatedDove14 wouldn’t the above command task.execute_remotely(queue_name=None, clone=False, exit_process=False)
fail, since clone==False with exit_process==False is not supported ("Task enqueuing itself must exit the process afterwards")?
I thought it worked earlier 😮
I already have that set to true and want that behavior. The issue is on the “committed” change set. When I push code to github I push to my fork and pull from the main/master repo (all changes go through PRs from fork to main).
Now when I use execute_remotely, whatever code does the git discovery treats whatever repo I pull from as the repo to use. But these changes haven’t necessarily been merged into main. The correct behavior would be to use the forked repo.
AgitatedDove14 when I try this I get:
```
clearml.backend_interface.session.SendError: Action failed <400/110: tasks.enqueue/v1.0 (Invalid task status (Invalid status change): current_status=in_progress, new_status=queued)> (queue=e78d2fdf2d5140b6b5c6678338c532bb, task=95082c9174a04044b25253d724362ec1)
```
```
$ git remote -v
fork    git@github.com:salimmj/somerepo.git (fetch)
fork    git@github.com:salimmj/somerepo.git (push)
origin  git@github.com:mainuser/somerepo.git (fetch)
origin  git@github.com:mainuser/somerepo.git (push)
```
I want to keep the above setup; the remote branch that will track my local one will be on fork, so it needs to pull from there. Currently it recognizes origin, so it doesn’t work because the agent then can’t find the commit.
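(Concretely, what I’d want the task to end up with is something like this; a sketch reusing the update_task hack from this thread, with placeholder project and branch names:)
```
from clearml import Task

task = Task.init(project_name='examples', task_name='fork override')  # placeholder names
task.execute_remotely(queue_name=None, clone=False, exit_process=False)

if Task.running_locally():
    # record the fork instead of the auto-detected origin
    task.update_task(task_data={'script': {
        'repository': 'git@github.com:salimmj/somerepo.git',
        'branch': 'my-feature-branch',  # placeholder: the branch pushed to the fork
    }})
    # then mark the task stopped (or reset it) and enqueue it, as in the snippets above
```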
But these changes haven’t necessarily been merged into main. The correct behavior would be to use the forked repo.
So I would expect the agent to pull from your fork, is that correct? Is that what you want to happen?
It recognizes the main repo, but I want it to push and pull from another one (my own forked repo). AgitatedDove14
Sure LazyTurkey38 here's a nice hack for that:
```
# code here
task.execute_remotely(queue_name=None, clone=False, exit_process=False)

# patch the Task and actually send it for execution
if Task.running_locally():
    task.update_task(task_data={'script': {'branch': 'new_branch', 'repository': 'new_repo'}})
    # now to actually enqueue the Task
    Task.enqueue(task, queue_name='default')
```
You can also clear the git diff by passing "diff": ""
wdyt?
I want to keep the above setup; the remote branch that will track my local one will be on fork, so it needs to pull from there. Currently it recognizes origin, so it doesn’t work because the agent then can’t find the commit.
So you do not want to push the change set?
You can basically add the entire change set (uncommitted changes) from the last pushed commit.
In your clearml.conf, set store_code_diff_from_remote: true
https://github.com/allegroai/clearml/blob/8708967a5ef4d8529a1a5ea417672e3ebbb258d7/docs/clearml.conf#L157
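(For reference, a minimal clearml.conf sketch, assuming the setting sits under sdk.development as in the linked file:)
```
sdk {
    development {
        # take the uncommitted diff against the last pushed (remote) commit
        # instead of the last local commit
        store_code_diff_from_remote: true
    }
}
```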
Will that solve the issue?
(obviously you can manually change the commit/repo after you call execute_remotely, but I'm trying to find a solution that avoids that)