Ok, here it is 🙂 https://github.com/allegroai/clearml/issues/298
Do I create an issue for this as well, SuccessfulKoala55?
OK, I ran the script and had the same issue. I think I detected another bug; I'm going to post it outside this thread.
let me run the model_upload example in your repo instead of my script
So with these two configurations, and no output_uri in the task creation in the script, the model is saved (in tl2 and in GCP, when run as an agent) to:
/home/tglema/git_repo/~/clearml/
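For reference, this is roughly what I mean by "no output_uri in the task creation" (project and experiment names are just placeholders); my understanding is that with the argument omitted, the destination should come from sdk.development.default_output_uri in the local clearml.conf:

from clearml import Task

# No output_uri argument here; the upload destination should fall back to
# sdk.development.default_output_uri from clearml.conf (if it is set).
task = Task.init(
    project_name="my_project",    # placeholder
    task_name="model_upload",     # placeholder
)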
Can I see the whole config file? (you can hide any sensitive info there, of course)
How exactly did you add it to the clearml.conf file?
I'm not sure...
In GCP it saves in that same dir, even though I have set the default_output_uri in GCP's conf file.
You mean that in GCP it stores in "/home/tglema/clearml" even though you've set default_output_uri: "gs://..." in the clearml.conf file on the GCP instance?
Even though I have set the default_output_uri in GCP's conf file: if I write default_output_uri: "/home/tglema/clearml" in the tl2 conf, then in GCP it saves in that same dir.
Now I have:
in tl2: default_output_uri: "~/clearml"
in GCP: default_output_uri: "gs://..."
Can I see the exact lines you added to the .conf file?
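Sure, the lines sit under the sdk.development section of each machine's clearml.conf, roughly like this (the GCS bucket path is elided, same as above):

# clearml.conf on tl2
sdk {
    development {
        default_output_uri: "~/clearml"
    }
}

# clearml.conf on the GCP instance
sdk {
    development {
        default_output_uri: "gs://..."
    }
}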
My setup is the following:
-run the script in tl2 (local server)
-clone the task, enqueue it, and run it in GCP (rough sketch below)
Ideally I would like this: if the script is run in tl2 it should save to the local filesystem; if it is run in GCP it should save to GS.
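For context, the clone-and-enqueue step is done roughly like this (a minimal sketch; the task ID is a placeholder, and "redness" is the queue the GCP agent listens on):

from clearml import Task

# Placeholder ID of the experiment originally run on tl2
source = Task.get_task(task_id="<tl2_task_id>")

# Clone it and push the clone to the queue served by the GCP agent
cloned = Task.clone(source_task=source, name="model_upload (GCP)")
Task.enqueue(cloned, queue_name="redness")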
I'm not sure I understand - can you share the different settings you used in your clearml.conf file (just put the relevant parts of the file here, if you can) and explain where each setting should point to?
If I put ~/clearml in the default_output_uri key and start the task, then when it runs as an agent in GCP I get:
clearml.Task - INFO - Completed model upload to file:///$github_proj_directory/~/clearml/$proj_name/$experiment_name
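So it looks like the "~" is not being expanded before the destination is turned into a file:// URI; it ends up as a literal "~/clearml" relative to the repo working directory. Just to illustrate what I would expect (plain Python, not ClearML internals):

import os

uri = "~/clearml"
# Expected destination after expansion:
print(os.path.expanduser(uri))   # -> /home/tglema/clearml
# Observed destination in the upload log: the literal "~" appended under
# the repo working directory, i.e. file:///$github_proj_directory/~/clearml/...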
no, only in the clearml.conf file
Now I removed the output_uri from the conf file of the machine that started the task, and when I run it as an agent in GCP it works.
Is this a bug?
Do you have anything in the script itself related to output_uri?
If I launch the same script in GCP (not running it via a clearml-agent), then everything works fine.
And in the script I removed the output_uri= argument in the task initialization, but it still saves using the output_uri from the server that created the task.
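To be explicit, this is the change I made in the script (project and experiment names are placeholders); the commented-out line is the argument I removed:

from clearml import Task

task = Task.init(
    project_name="my_project",     # placeholder
    task_name="model_upload",      # placeholder
    # output_uri="gs://...",       # removed, so the destination comes only from clearml.conf
)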
Hmm, I'm having the same issue:
in my GCP agent:
(base) root@gst-cv-glema3-final-tf2-cu101:~# clearml-agent daemon --queue redness
Current configuration (clearml_agent v0.17.1, location: /root/clearml.conf):
sdk.storage.cache.default_base_dir = ~/.clearml/cache
sdk.development.default_output_uri =
Since the installed packages didn't change, is there a way to prevent it from creating a new setup in my worker each time?
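If it is the virtual environment rebuild you want to avoid: I believe more recent clearml-agent versions support venv caching via an agent.venvs_cache section in the agent's clearml.conf (this is an assumption, so double-check it is available in your agent version, it may require upgrading from 0.17.1; the values below are the defaults as far as I recall):

# clearml.conf on the agent machine (GCP)
agent {
    venvs_cache {
        # enable re-use of previously built virtual environments
        path: ~/.clearml/venvs-cache
        max_entries: 10
    }
}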