even if it's just a local image? You need a Docker repository even if it will only be on your local PC?
you should be able to use as many agents as you want.
On the same or different queues
what about the log around when it tries to actually clone your repo?
normally, you should have an agent running behind a "services" queue as part of your docker-compose. You just need to make sure that you populate the appropriate configuration on the server (aka set the right environment variables for the docker services)
That agent will run as long as your self-hosted server is running
is task.add_requirements("requirements.txt") redundant?
Does ClearML always look for a requirements.txt in the repo root?
you will need to provide more context than that if you don't want the answer to be: have you tried turning it off and back on again?
I also use this: None
Which can give more control
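For what it's worth, a minimal sketch of the pattern (project/task names are made up); note that add_requirements only takes effect when called before Task.init:

from clearml import Task

# must run before Task.init, otherwise it is ignored
Task.add_requirements("scikit-learn", "1.3.2")        # pin a single package
# Task.add_requirements("/path/to/requirements.txt")  # or point it at a full requirements file

task = Task.init(project_name="examples", task_name="requirements demo")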
How are you using the function update_output_model?
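For context, a minimal sketch of a typical call, assuming the weights were already saved to a local file (all names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="model upload demo")
# ... training code that writes weights to model.pkl ...
# register the local file as this task's output model
task.update_output_model(model_path="model.pkl", name="my-model")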
just saw that repo: who are Coder? That's not the VS Code developer team, is it?
Feels like Docker/Kubernetes are a better fit for that purpose ...
we are using mmsegmentation, by the way
What should I put in there? What is the syntax for a git package?
So I tried:
import livsdk.livbatch
import clearml
clearml.Task.add_requirements("livsdk","
")
task = clearml.Task.init(project_name="hieu-test", task_name='base_config')
print("Done")
Which gives me this list of Installed Packages:
# Python 3.10.10 (main, Mar 05 2023, 19:07:49) [GCC]
# Local modules found - skipping:
# livsdk == ../[REDACTED]/livsdk/__init__.py
Augmentor == 0.2.10
Pillow == 9.2.0
PyYAML == 6.0
albumentations == 1.2.1
azure_storage_blob == 12.1...
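For reference, the usual pip syntax for a git-sourced package in a requirements file is a direct reference; the URL and branch below are hypothetical:

livsdk @ git+https://example.com/your-org/livsdk.git@main

That line can go in the requirements.txt that add_requirements points at, so the agent installs livsdk instead of skipping it as a local module.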
To me the whole point of having a pipeline is to have a system that "knows" the previous state and makes "smart" decisions about what should run and what should not. If it's just about if/then/else, then the code already handles all that.
And what I struggle with a bit is finding docs on how it determines the existing state and how it decides what to run, thus the initial question.
how does it work if I create my pipeline from code? Will the task get the git repo state on first run and use the commit hash and uncommitted changes as a "signature"?
thanks for all the pointers! I will have a good play around
most people probably won't even know what that does
following your example, if the seeds are hard-coded in the code, then the git hash will detect whether a change happened and whether the step needs to be run or not
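In case it helps, a minimal PipelineController sketch (all project, task, and queue names are made up); as far as I can tell, cache_executed_step is the flag that ties step reuse to the code state and inputs:

from clearml import PipelineController

pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess task",  # hypothetical existing task to clone
    cache_executed_step=True,          # reuse the previous run if code + inputs are unchanged
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train task",       # hypothetical existing task to clone
    cache_executed_step=True,
)
pipe.start(queue="services")           # placeholder queue name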
We don't have a file server. The clearml.conf has: sdk.development.default_output_uri=" None "
Should I get all the workers None
Then go through them and count how many are in my queue of interest?
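Something along these lines might work, assuming each worker returned by workers.get_all exposes a queues list with a name field (the queue name below is a placeholder):

from clearml.backend_api.session.client import APIClient

client = APIClient()
workers = client.workers.get_all()
# count the workers listening on the queue of interest
count = sum(
    1
    for w in workers
    if any(q.name == "my_queue" for q in (w.queues or []))
)
print(f"{count} worker(s) serving my_queue")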
Ok I think I found the issue. I had to point the file server to Azure storage:
api {
    # Notice: 'host' is the api server (default port 8008), not the web server.
    api_server: [REDACTED]
    web_server: [REDACTED]
    files_server: "[REDACTED]"
    credentials {"access_key": "REDACTED", "secret_key": "REDACTED"}
}
I saw that page ... but nothing about the number of workers in a queue ... or did I miss it?
what is the difference between vscode via clearml-session and vscode via the remote ssh extension?
Just keep in mind that your bottleneck will be the transfer rate. Mounting will not save you anything, as you still need to transfer the whole dataset to your GPU instance sooner or later.
One solution is as Jake suggests. Another could be to pre-download the data to your instance with a cheap CPU-only instance type, then restart the instance with a GPU.
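If the data is registered as a ClearML Dataset, the pre-download on the cheap instance can be as simple as this (project/dataset names are made up):

from clearml import Dataset

# fetch (and locally cache) the dataset before switching over to the GPU instance
local_path = Dataset.get(
    dataset_project="examples",  # placeholder
    dataset_name="my-dataset",   # placeholder
).get_local_copy()
print(local_path)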
got it
Thanks @<1523701070390366208:profile|CostlyOstrich36>
or which workers are in a queue ...
have you tried a different browser?