You're saying there's a built-in scheduler? SuccessfulKoala55
If so where can I find it?
For your second question: those are generated using custom tooling. It relies on the build system being set up, which is guaranteed by the docker image used. So I don't think this is a case of supporting a specific env setup or build tool, but just allowing a custom script for the env setup / code build step.
WDYT?
I don't mean a serving endpoint, just the equivalent of "cloning an experiment" and running it on a different (larger) dataset.
AgitatedDove14 it was executed with Python 3 and I'm running in venv mode.
$ python --version
Python 3.6.8
$ python repo/toy_workflow.py --logtostderr --logtoclearml --clearml_queue=ada_manual_jobs
2021-08-07 04:04:16,844 - clearml - WARNING - Switching to remote execution, output log page https://...
On the webpage logs I see this:
2021-08-07 04:04:12 ClearML Task: created new task id=f1092bcbe30249639122a49a9b3f9145 ClearML results page:
2021-08-07 04:04:14
ClearML Monitor: GPU monitoring failed getting GPU reading, switching off GPU monitoring
2021-08...
Issue seems fixed now, thanks! Is the fact that clearml-agent needs to be installed from the system Python mentioned anywhere in the docs? If not, I suggest it gets added.
Thank you so much for helping.
OH! I was installing it in an env
EagerOtter28 I'm running into a similar situation as you.
I think you could use --standalone-mode and do the cloning yourself in the docker bash script that you can configure in the agent config.
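For reference, the agent-side hook mentioned here lives in the agent's clearml.conf. A minimal sketch (the extra_docker_shell_script key is from the clearml-agent config reference; the clone command and repo URL are hypothetical, and exact behavior may vary by agent version):

```
agent {
    # commands run inside the docker container, before the task's
    # environment is set up (which is why it is "too early" below)
    extra_docker_shell_script: [
        "git clone https://github.com/user/repo.git",
    ]
}
```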
That won't work.
The docker shell script runs too early in the process.
I want to inject a bash command after the repo has been cloned (and maybe even after the venv has been installed).
So when the repo is cloned and the venv is created and activated, I want to execute this from the repo: tools/setup_dependencies.sh
I know this is not the default behavior, so I'd be happy with having the option to override the repo when I call execute_remotely
This is exactly what I was looking for. I thought once you call execute_remotely the task is sent and it's too late to change anything.
Fixed it by adding this code block. Makes sense.

if clone:
    task = Task.clone(self)
else:
    task = self
# check if the server supports enqueueing aborted/stopped Tasks
if Session.check_min_api_server_version('2.13'):
    self.mark_stopped(force=True)
else:
    self.reset()
AgitatedDove14 wouldn't the above command task.execute_remotely(queue_name=None, clone=False, exit_process=False)
fail, because clone==False and exit_process==False is not supported? Task enqueuing itself must exit the process afterwards.
I thought it worked earlier.
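The constraint under discussion can be re-stated as a small check. This is a hypothetical sketch of the rule, not ClearML's actual implementation; the function name and error message are made up:

```python
def validate_execute_remotely(clone: bool, exit_process: bool) -> None:
    """Re-statement of the documented execute_remotely() constraint.

    When clone=False the current task enqueues *itself*, so the calling
    process must exit afterwards; clone=False together with
    exit_process=False is therefore rejected.
    """
    if not clone and not exit_process:
        raise ValueError(
            "clone==False and exit_process==False is not supported: "
            "Task enqueuing itself must exit the process afterwards"
        )

# Valid combinations pass silently:
validate_execute_remotely(clone=True, exit_process=False)
validate_execute_remotely(clone=False, exit_process=True)
```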
I do expect it to pip install though, which doesn't need root access I think.
My docker image will have all required apt packages, so no need.
I already have that set to true and want that behavior. The issue is with the "committed" change set. When I push code to GitHub I push to my fork and pull from the main/master repo (all changes go through PRs from fork to main).
Now when I use execute_remotely, whatever code does the git discovery considers the repo I pull from as the repo to use. But those changes haven't necessarily been merged into main. The correct behavior would be to use the forked repo.
Is it possible to set that at task enqueueing SuccessfulKoala55 ?
Being able to create and remove queues as well as list their contents.
Great find! So a pip upgrade should hopefully fix it.
Well, this doesn't work: pip install -e
Is there a way to make it use ssh+git instead of git+git? Maybe add a force_ssh_pip_install option to the agent config?
I am already forcing ssh auth
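To illustrate the rewrite being asked for: an scp-style git+git@host:org/repo.git requirement can be mechanically converted to the explicit ssh form that pip accepts. A hypothetical helper, not part of clearml-agent:

```python
def to_git_ssh(requirement: str) -> str:
    """Rewrite an scp-style 'git+git@host:org/repo.git' requirement
    into the explicit 'git+ssh://git@host/org/repo.git' form."""
    prefix = "git+git@"
    if requirement.startswith(prefix):
        # split "github.com:user/repo.git" into host and path
        host, _, path = requirement[len(prefix):].partition(":")
        return f"git+ssh://git@{host}/{path}"
    return requirement  # already in another form; leave untouched

print(to_git_ssh("git+git@github.com:user/private_package.git"))
# git+ssh://git@github.com/user/private_package.git
```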
It doesn't install it automatically. I think I need to specify it somewhere, see the above error. Or am I misunderstanding?
The commit is valid for sure.
The private_package can be installed by doing pip install git+ssh://git@github.com/user/private_package.git, but the agent is trying to do pip install private_package, which won't work.
If venv works inside containers, that's even better. We actually have custom containers that are built on master merges. I wonder if using our own containers, which should have most of the deps, will work better than a simpler container.
It is indeed autopopulated by init
...
more-itertools==8.6.0
-e git+git@github.com:user/private_package.git@57f382f51d124299788544b3e7afa11c4cba2d1f#egg=private_package
msgpack==1.0.2
msgpack-numpy==0.4.7.1
...