I understand that from the agent's point of view, I just need to update the conf file to use the new credentials and the new server address.
If the agent is the one running the experiment, it is very likely that your task will be killed.
And when the agent comes back, immediately or later, probably nothing will happen. It won't resume ...
@<1523701205467926528:profile|AgitatedDove14>
What is the env var name for Azure Blob storage? That's the one we use for our artifacts.
Also, is there a function call rather than an env var?
It would be simpler in our case to call a function to set credentials for ClearML rather than fetching the secret and setting the env var prior to running the Python code.
If there is only the option of using env vars, I am thinking of fetching the secrets and setting the env vars from Python, e.g.: os.environ["MY_VARIABLE"] = "hello" ...
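Something like this, for example (a sketch — the env var names `AZURE_STORAGE_ACCOUNT`/`AZURE_STORAGE_KEY` are an assumption to be checked against the docs, and `fetch_secret` is a hypothetical stand-in for your secret manager):

```python
import os

def fetch_secret(name: str) -> str:
    # Hypothetical stand-in for the call to your secret manager
    return "hello"

# Export the credentials before ClearML reads its configuration,
# i.e. before Task.init / any storage access happens.
os.environ["AZURE_STORAGE_ACCOUNT"] = fetch_secret("azure-account")
os.environ["AZURE_STORAGE_KEY"] = fetch_secret("azure-key")
```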
I use CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/path/to/my/venv/bin/python3.12 and it works for me
So in your case, the clearml-agent conf contains multiple credentials, each for a different cloud storage that you potentially use?
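For reference, that is roughly how multiple storage credentials can coexist in clearml.conf (a sketch with placeholder values; check the ClearML configuration reference for the exact keys):

```
sdk {
    aws.s3.credentials: [
        {
            bucket: "my-s3-bucket"        # placeholder
            key: "my_access_key"
            secret: "my_secret_key"
        }
    ]
    azure.storage.containers: [
        {
            account_name: "my_account"    # placeholder
            account_key: "my_account_key"
        }
    ]
}
```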
I use an SSH public key to access our repo ... I never tried to provide credentials to ClearML itself (via clearml.conf), so I cannot help much here ...
we are using mmsegmentation, by the way
what is the command you use to run clearml-agent ?
Are you talking about this: None
It doesn't seem to do anything about the database data ...
that format is correct, as I can run pip install -r requirements.txt
using the exact same file
there is a whole discussion about it here: None
What about migrating existing experiments in the on-prem server?
Please refer to here None
The doc needs to be a bit clearer: one requires a path and not just true/false
1.12.2 because of a bug that makes fastai lag 2x
1.8.1rc2 because it fixes an annoying git clone bug
Python libraries don't always use OS certificates ... typically, we have to set REQUESTS_CA_BUNDLE=/path/to/custom_ca_bundle_crt because requests ignores OS certificates
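The same workaround can also be applied from inside Python, as long as it runs before the first request is made (the bundle path is a placeholder):

```python
import os

# requests ships its own certifi CA store and ignores the OS trust store,
# so point it (and anything built on it) at the custom bundle explicitly.
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/custom_ca_bundle_crt"
```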
you can use a docker image that already has those packages and dependencies, then have clearml-agent running inside it or launching the docker container
I also use this: None
Which can give more control
or simply create a new venv on your local PC, then install your package with pip install from the repo URL and see if your file is deployed properly in that venv
I did.
I am now redeploying to new container to be sure.
@<1523701087100473344:profile|SuccessfulKoala55> Yes, I am aware of that one. It builds a docker container ... I wanted to build without docker. When clearml-agent runs in non-docker mode, it already builds the running env inside its caching folder structure. I was wondering if there was a way to stop that process just before it executes the task .py
all good. Just wanted to know in case I missed it
We are using this: WebApp: 2.2.0-690 • Server: 2.2.0-690 • API: 2.33
Not a solution, but just curious: why would you need that many "debug" images ?
Those are images automatically generated by your training code that ClearML automatically uploads. Maybe disable auto-upload of images during Task.init?
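Something along these lines (a sketch — which framework keys to disable depends on what generates the images, and the project/task names are placeholders; check the Task.init reference):

```python
from clearml import Task

# Disable automatic capture of matplotlib/tensorboard images
# so they are not uploaded as debug samples.
task = Task.init(
    project_name="my_project",     # placeholder
    task_name="my_experiment",     # placeholder
    auto_connect_frameworks={"matplotlib": False, "tensorboard": False},
)
```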
Can you paste here what's inside "Installed packages" to double-check?
very hard to diagnose with this tiny bit of log ...