is there a way to prevent creating a new setup in my worker each time?
if I launch the same script in GCP (I don't run it as a clearml-agent), then everything works fine
but it still saves using the output_uri from the server that created the task
and in the script I removed the output_uri= in the task initialization, since it didn't change (the packages installed)
if I put ~/clearml in the default_output_uri key and start the task, when run as an agent in GCP I get
clearml.Task - INFO - Completed model upload to file:///$github_proj_directory/~/clearml/$proj_name/$experiment_name
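For context, the key being discussed lives under sdk.development in clearml.conf. A minimal sketch (the path here is illustrative) — note that an absolute path or a URI avoids the file:///...~/clearml concatenation seen in the log line above, where the tilde path was joined onto the working directory:

```
# clearml.conf (sketch; the path is illustrative)
sdk {
    development {
        # use an absolute path or a full URI, not "~/clearml"
        default_output_uri: "/home/user/clearml"
    }
}
```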
no, only in the clearml.conf file
Now I removed the output_uri from the conf file of the machine that started the task, and when I run it as an agent in GCP it works.
Is this a bug?
even though I have set the default_output_uri in GCP's conf file
this is the one in GCP
so with these two configurations, and no output_uri in the task creation in the script:
I get the model saved in tl2 and in GCP (when run as agent):
do I create an issue for this as well, SuccessfulKoala55?
this is the one in tl2
ok, ran the script and had the same issue. I think I detected another bug; going to post it outside the thread
let me run the model_upload example in your repo instead of my script
if I write:
in tl2 conf :
in GCP it saves in that same dir
now I have:
My setting is the following:
-run script in tl2 (local server)
-clone the task, enqueue it, and run it in GCP
Ideally I would like to:
if the script is run in tl2 it should save to the local filesystem; if the script is run in GCP it should save to GS
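One way to sketch that goal in the script itself: pick the output_uri based on the machine the script is running on. This helper is hypothetical (not part of the ClearML API), and the hostname prefix and gs:// bucket name are assumptions for illustration.

```python
import socket

# Hypothetical helper: choose where ClearML should upload artifacts
# depending on which machine runs the script. The "tl2" prefix and
# the gs:// bucket name are illustrative assumptions.
def pick_output_uri(hostname: str) -> str:
    if hostname.startswith("tl2"):
        # local server: keep artifacts on the local filesystem
        return "file:///home/user/clearml"
    # anything else (e.g. the GCP worker): upload to Google Cloud Storage
    return "gs://my-bucket/clearml"

# e.g. "tl2" locally, "gst-cv-glema3-final-tf2-cu101" on the GCP worker
host = socket.gethostname()
# usage sketch (requires a ClearML server, so not executed here):
# Task.init(project_name="...", task_name="...",
#           output_uri=pick_output_uri(host))
```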
mmm..I'm having the same issue:
in my GCP agent:
(base) root@gst-cv-glema3-final-tf2-cu101:~# clearml-agent daemon --queue redness
Current configuration (clearml_agent v0.17.1, location: /root/clearml.conf):
sdk.storage.cache.default_base_dir = ~/.clearml/cache
Hi! Were you able to reproduce the issue, CostlyOstrich36?
Did you put anything inside
great, let me know if I can help you in any way. Thanks!
btw, I think this should be the output of report_confusion_matrix... what do you think?
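For anyone following along, a minimal sketch of building the kind of matrix report_confusion_matrix consumes (rows = actual class, columns = predicted class). The labels and data are made up for illustration; the manual counting loop is just a stand-in for whatever produces the matrix in the real script.

```python
import numpy as np

# Build a confusion matrix by counting (actual, predicted) pairs.
# Rows are actual classes, columns are predicted classes.
def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# illustrative labels for a 3-class problem
cm = confusion_matrix([0, 1, 1, 2, 2, 2], [0, 1, 2, 2, 2, 1], 3)

# inside a ClearML task, this matrix could then be reported with:
# task.get_logger().report_confusion_matrix(
#     "confusion", "validation", iteration=0, matrix=cm)
```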
clearml == 0.17.5rc5
google_cloud_storage == 1.36.1
joblib == 1.0.1
matplotlib == 3.3.4
numpy == 1.20.0
object_detection == 0.1
opencv_python_headless == 18.104.22.168
pandas == 1.2.3
scikit_learn == 0.24.1
tensorflow == 2.4.0
@ https://app.slack.com/team/U01J3C692M8 were you able to come up with a solution?
ah, I see..so I do it in master or in 0.17.5rc3?