 
What exactly are you trying to achieve?
Let's assume that you have  Task.init()  in  run.py
And  run.py  is inside  /foo/bar/
If you run:
cd /foo
python bar/run.py
then the Task will have the working folder  /foo
If you run:
cd /foo/bar
python run.py
then your Task will have the working folder  /foo/bar
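As a rough sketch of how to check this (assuming the recorded folder is exposed under the task's  script.working_dir  field; project and task names are made up):
# /foo/bar/run.py
from clearml import Task

task = Task.init(project_name="demo", task_name="working-folder-check")

# The recorded working folder depends on the directory you launched from,
# e.g. `cd /foo && python bar/run.py` vs `cd /foo/bar && python run.py`.
print(task.export_task()["script"]["working_dir"])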
OK, so if the git commit or uncommitted changes differ from the previous run, then the cache is "invalidated" and the step will be run again?
Nice ! That is handy !!
thanks !
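For reference, step caching with  PipelineController  looks roughly like this (a sketch; project and step names are made up):
from clearml import PipelineController

pipe = PipelineController(name="demo-pipeline", project="demo", version="1.0")

# With cache_executed_step=True the step result is reused from a previous run;
# a different git commit or uncommitted diff invalidates the cache and the
# step is executed again.
pipe.add_step(
    name="preprocess",
    base_task_project="demo",
    base_task_name="preprocess-base",
    cache_executed_step=True,
)
pipe.start()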
Sorry I missed your message: no, I don't know what happens when ES reaches its RAM limit. We self-host in Azure and use an ES SaaS; our cloud engineer manages that part.
My only experience was when I tried to spin up a local server from docker compose to test something, and it took my PC down because ES ate all my RAM!!
I also have the same issue. Default arguments are fine, but all arguments supplied on the command line become duplicated!
Just keep in mind that your bottleneck will be the transfer rate. So mounting will not save you anything, as you still need to transfer the whole dataset to your GPU instance sooner or later.
One solution is what Jake suggests. The other is to pre-download the data to your instance with a cheap CPU-only instance type, then restart the instance with a GPU.
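If the data is registered as a ClearML Dataset, the pre-download step could look roughly like this (dataset name and project are placeholders):
from clearml import Dataset

# Run this on the cheap CPU-only instance; the GPU instance later reuses
# the already-downloaded local copy instead of pulling the data again.
dataset = Dataset.get(dataset_name="my-dataset", dataset_project="demo")
local_path = dataset.get_local_copy()
print(f"dataset cached at: {local_path}")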
or simply create a new venv on your local PC, then install your package with pip install from the repo URL and see if your file is deployed properly in that venv
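Something along these lines (repo URL and file name are placeholders):
python3 -m venv /tmp/test-venv
source /tmp/test-venv/bin/activate
pip install "git+https://github.com/your-org/your-repo.git"
# check that the expected file actually landed inside the venv
find /tmp/test-venv -name "module_b*"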
so the issue is that, for some reason, the  pip install  done by the agent doesn't behave the same way as your local  pip install ?
Have you tried to manually install your module_b with pip install inside the machine that is running clearml-agent? From your example, it looks like you are even running inside Docker?
Nevermind: None
By default, the File Server is not secured even if Web Login Authentication has been configured. Using an object storage solution that has built-in security is recommended.
My bad
Ok I think I found the issue. I had to point the file server to azure storage:
api {
    # Notice: 'host' is the api server (default port 8008), not the web server.
    api_server: "..."
    web_server: "..."
    files_server: "..."  # pointed at the Azure storage URL
    credentials {"access_key": "REDACTED", "secret_key": "REDACTED"}
}
I think a proper screenshot of the full log with some information redacted is the way to go. Otherwise we are just guessing in the dark
got it working. I was using  CLEARML_AGENT_SKIP_PIP_VENV_INSTALL   .
now I just use  agent.package_manager.system_site_packages=true
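For reference, in clearml.conf that would be something like this (a sketch of the agent section):
agent {
    package_manager {
        # reuse the system-wide Python packages instead of skipping the venv install entirely
        system_site_packages: true
    }
}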
Is it because Azure is "whitelisted" in our network, and thus needs a different certificate? And how do I provide 2 different certificates? Is bundling them as simple as concatenating the 2 PEM files?
@<1523701087100473344:profile|SuccessfulKoala55>  I managed to make this work by:
concatenating the existing OS CA bundle and the Zscaler certificate, and setting  REQUESTS_CA_BUNDLE  to that bundle file
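Roughly (paths are assumptions; the OS bundle location varies by distro):
cat /etc/ssl/certs/ca-certificates.crt /path/to/zscaler.pem > /path/to/combined-ca-bundle.pem
export REQUESTS_CA_BUNDLE=/path/to/combined-ca-bundle.pem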
If you care about the local destination, then you may want to use this: None
have you tried a different browser?
Task.export_task()  will contain what you are looking for.
In this case,  ['script']['diff']
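A minimal sketch (the task ID is a placeholder):
from clearml import Task

task = Task.get_task(task_id="aabbccddee11223344556677")  # placeholder ID
exported = task.export_task()

# the uncommitted changes recorded for the task
print(exported["script"]["diff"])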
I found that if pip is upgraded to the latest version, 25.0.1, then the package installs fine.
The question becomes: why does the agent downgrade pip?
Ignoring pip: markers 'python_version < "3.10"' don't match your environment
Collecting pip<22.3
  Downloading pip-22.2.2-py3-none-any.whl.metadata (4.2 kB)
Downloading pip-22.2.2-py3-none-any.whl (2.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 3.9 MB/s eta 0:00:00
Installing collected packages: pip
  Attempting uninstall: pip
  ...
is this mongodb type of filtering?
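Regarding the pip downgrade in the log above: the agent pins pip through  agent.package_manager.pip_version  in clearml.conf, so presumably it can be overridden there (the value below is an assumption):
agent {
    package_manager {
        # keep the pip version the environment needs instead of the agent's default pin
        pip_version: ["==25.0.1"]
    }
}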
I really like how you make all this decoupled !! 🎉
I mean, what happens if I import and use a function from another py file, and that function's code changes?
Or are you expecting the code to be frozen and only parameters to change between runs?
you should know where your latest model is located, then just call  task.upload_artifact  on that file?
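A rough sketch (the artifact name and file path are placeholders):
from clearml import Task

task = Task.current_task()  # or Task.get_task(task_id=...) for another task

# upload the latest checkpoint file as an artifact on the task
task.upload_artifact(name="latest_model", artifact_object="/path/to/latest_model.pkl")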
Yes. I am investigating that route now.
kind of ....
Now that I think about it, the best approach would be to:
- Clone a task
Should I put that in the clearml.conf file?
I tried mounting azure storage account on that path and it worked: all files end up in the cloud storage

