Well the original task is run with my user
AgitatedDove14
The easiest example of the use case I'm describing: say I want to run the full pipeline, but in this experiment I wish to try Batch Norm, which I haven't used in the pre-executed Task. How can I do that without running this Task on its own? (Which is quite problematic for me, since it runs as part of a pipeline and therefore uses a DAG)
I will try that.
In addition, I've seen that the file location of a task is saved. Does that mean that when rerunning said task (for example, cloning it and enqueuing it), trains will search for the file in the stored location? Or will it clone the repo at the given commit id and use the relative path to find the file?
When you say I can still get race/starvation cases, you mean in the enterprise or regular version?
Yeah, I understand that. But since overriding parameters of pre-executed Tasks is possible, I was wondering if I could change the commit id to the current one as well.
What do you mean by execute remotely? (I didn't really understand this one from the docs)
I'm confused. Why would my local code matter when trying to replicate an experiment that already ran?
Also, between which files is the git diff performed? (I've seen the line `diff --git a/.../run.py b/.../run.py`, but I'm not sure what's `a` and what's `b` in this context)
I am aware this is the current behavior, but could it be changed to something more intelligent? 😇
Nevermind, you can find it in the apiserver.conf
To be exact, it's a trains-agent task that spawns another trains-agent task in a new subprocess
I understand how this is problematic. This might require more thinking if you guys wish to support this.
Hmm, is there a way to do this via code? I wish to do it before running the pipeline, so that each task it contains gets updated to the latest branch
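In case it helps, here's a minimal sketch of what I had in mind. It assumes the trains SDK's `export_task()`/`update_task()` round-trip and that you already have the ids of the pipeline's step tasks (`step_ids` below is hypothetical); the helper itself is just dict editing:

```python
def repoint_to_branch(task_data, branch, commit=''):
    """Return a copy of an exported task dict whose script points at `branch`.

    An empty `version_num` should let the agent resolve the branch head at
    run time (assumption).
    """
    data = dict(task_data)
    script = dict(data.get('script') or {})
    script['branch'] = branch
    script['version_num'] = commit
    data['script'] = script
    return data

# Hypothetical usage against the server, before launching the pipeline:
# from trains import Task
# for step_id in step_ids:
#     task = Task.get_task(task_id=step_id)
#     task.update_task(repoint_to_branch(task.export_task(), 'my-feature-branch'))
```
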
But maybe only one step in the DAG is flawed, and I want to continue the rest of the pipeline as usual (regardless of the flawed task's branch).
I am not sure what you mean by automatic stopping flows, could you give an example?
Obviously I am working with my trains-server, as I can see the new pipeline task under the new project 😮
Oh, that seems right, how can I get the project id of the newly created project?
(The one that was created with initial task)
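For reference, this is roughly how I'd expect to pull the project id out: a sketch assuming the exported task dict carries it under a `'project'` key (and that the Task object also exposes it as `task.project`); both of those are assumptions on my side:

```python
def project_id_of(task_data):
    """Extract the project id from an exported task dict (assumed 'project' key)."""
    return task_data.get('project')

# Hypothetical usage:
# from trains import Task
# task = Task.get_task(task_id='...')
# print(project_id_of(task.export_task()))  # or, assumed: task.project
```
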
Is there a way to set this via a config file? like the docker compose yml?
Okay, so in the end I ran it locally and it behaved as expected (no auto-logging for matplotlib), but with the trains agent it didn't work; it auto-logged it anyway. TimelyPenguin76
I do this:
```python
base_task = Task.create(project_name=self.regression_project_name,
                        task_name=BASE_TASKS[block_type][engine], task_type=task_type)
params = base_task.export_task()
# Git repo
params['script']['repository'] = subprocess.check_output(['git', 'config', '--get', 'remote.origin.url'],
                                                          cwd=REPO_NAME).decode().strip()
# Git commit
params['script']['version_num'] = subprocess.check_output(['git', 'rev-parse',...
```
Of course you can edit whichever parameters you like
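For what it's worth, the same dict edits can be factored into a small helper so the subprocess calls stay in one place. This is just a sketch: the git commands mirror the ones in my snippet, and `update_task()` is assumed to apply the edited dict back to the server:

```python
import subprocess

def current_repo_state(cwd):
    """Read the remote URL and current commit id from a local git checkout."""
    def git(*args):
        return subprocess.check_output(['git', *args], cwd=cwd).decode().strip()
    return git('config', '--get', 'remote.origin.url'), git('rev-parse', 'HEAD')

def override_script(task_data, repository, commit):
    """Return a copy of an exported task dict pointing at `repository` @ `commit`."""
    data = dict(task_data)
    script = dict(data.get('script') or {})
    script['repository'] = repository
    script['version_num'] = commit
    data['script'] = script
    return data

# Hypothetical usage:
# repo, commit = current_repo_state(REPO_NAME)
# base_task.update_task(override_script(base_task.export_task(), repo, commit))
```
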
Also, why is there a project id when a project name already exists? I don't even know how to display a project's id haha
That should do the trick, thanks 🙂
Since my servers share a file system, the init process tells me that the configuration file already exists. Can I tell it to place the file in another location? GrumpyPenguin23
After a brutal sudo reboot, the agent is no longer up
SuccessfulKoala55 I found the temp files; they contain what is supposedly the worker id, which seems just fine
Yes. More exactly, I'm opening them with gzip.open, but I don't believe that should matter