so how do I make a PR? 😅
I don't have write access..
not sure, I'm using GCS not S3. Is download_folder doing something different from downloading all the files inside the folder?
ok, ran the script and had the same issue.. I think I detected another bug too.. going to post it outside the thread
let me run the model_upload example in your repo instead of my script
clearml == 0.17.5rc5
google_cloud_storage == 1.36.1
joblib == 1.0.1
matplotlib == 3.3.4
numpy == 1.20.0
object_detection == 0.1
opencv_python_headless == 4.5.1.48
pandas == 1.2.3
scikit_learn == 0.24.1
tensorflow == 2.4.0
My setting is the following:
- run the script in tl2 (local server)
- clone the task and enqueue it, run it in GCP
Ideally I would like:
- if the script is run in tl2, it should save to the local filesystem
- if the script is run in GCP, it should save to GS
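A minimal sketch of that conditional saving, assuming a hypothetical environment-variable check (`RUNNING_ON_GCP`, the bucket name, and `pick_output_uri` are all placeholders for illustration, not ClearML API):

```python
import os

def pick_output_uri():
    # assumption: the GCP machine exposes this env var; adapt the check to your setup
    if os.environ.get("RUNNING_ON_GCP") == "1":
        return "gs://my-bucket/clearml"  # hypothetical bucket
    return None  # None -> ClearML keeps outputs on the local filesystem

def init_task():
    # deferred import so the sketch stands alone; needs clearml and a reachable server
    from clearml import Task
    return Task.init(
        project_name="examples",
        task_name="conditional output demo",
        # a per-task output_uri overrides default_output_uri from clearml.conf
        output_uri=pick_output_uri(),
    )
```

not tested end-to-end, but the idea is that `output_uri` decides per run instead of the conf file deciding for every task on the machine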
FYI... I am able to run the three tasks by commenting out the task.execute_remotely() lines in each file
@ https://app.slack.com/team/U01J3C692M8 were you able to come up with a solution?
no, only in the clearml.conf file
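for reference, the setting I mean looks roughly like this in clearml.conf (bucket name is illustrative); as far as I understand, an output_uri passed to Task.init would override it:

```
# clearml.conf on the machine that creates the task
sdk {
    development {
        # every Task.init on this machine uploads models/artifacts here by default
        default_output_uri: "gs://my-bucket/clearml"
    }
}
```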
Now I removed the output_uri from the conf file of the machine that started the task, and when I run it as an agent in GCP it works.
Is this a bug?
no, to the current task
```python
from clearml import Task
import argparse

# only create the task, we will actually execute it later
task = Task.init(project_name='examples', task_name='pipeline demo',
                 task_type=Task.TaskTypes.controller, reuse_last_task_id=False)
task.execute_remotely()

args = {'dataset_path': ''}
task.connect(args, section='Args')
```
like this?
I don't see anything in the CONFIGURATION section:
where can I find more info about why it failed?
well, let me try executing one of your samples
I had to mask some parts 😁
it would be completed right after the upload
it's my error: I have tensorflow==2.2 in my venv, and added Task.add_requirements('tensorflow'), which forces tensorflow==2.4:
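for anyone hitting the same thing, a sketch of pinning the version explicitly instead (the deferred import is only so the snippet stands alone; the version string is an assumption matching my venv):

```python
def pin_tensorflow():
    # deferred import so the sketch can be read without clearml installed
    from clearml import Task
    # passing an explicit version stops ClearML from resolving "tensorflow"
    # to the latest release (2.4 in my case); call this *before* Task.init
    Task.add_requirements("tensorflow", "2.2")
```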
Storing stdout and stderr log into [/tmp/.clearml_agent_out.kmqde7st.txt]
Traceback (most recent call last):
File "aicalibration/generate_tfrecord_pipeline.py", line 15, in <module>
task = Task.init(project_name='AI Calibration', task_name='Pipeline step 1 dataset artifact')
File "/home/username/.clearml/venvs-builds/3.7/lib/python3.7/site-packages/clearm...
so with these two configurations, and no output_uri in the task creation in the script:
I get the model saved both in tl2 and in GCP (when run as an agent):
/home/tglema/git_repo/~/clearml/
File "aicalibration/generate_tfrecord_pipeline.py", line 30, in <module>
task.upload_artifact('train_tfrecord', artifact_object=fn_train)
File "/home/usr_341317_ulta_com/.clearml/venvs-builds/3.7/lib/python3.7/site-packages/clearml/task.py", line 1484, in upload_artifact
auto_pickle=auto_pickle, preview=preview, wait_on_upload=wait_on_upload)
File "/home/usr_341317_ulta_com/.clearml/venvs-builds/3.7/lib/python3.7/site-packages/clearml/binding/artifacts.py", line 560, in upload_artifa...
ah, I see.. so do I do it on master or on 0.17.5rc3?
sure, but I don't know whether this breaks something else