Noted AgitatedDove14,
just wondering why the behavior between auto logging and manual upload (using StorageManager) can yield different results. Do you think we’re using a different component here?
If the problem is coming from GCS, the StorageManager should also fail, right?
That’s the question I want to raise too: is there any limit on the file size? The file is actually ~32 MB, just using your MNIST example.
Can we raise the size limit?
This one: https://github.com/allegroai/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py
My only change was adding output_uri to use a GCS path.
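For reference, the change looks roughly like this (the project/task names and bucket are placeholders, not the real ones). Passing output_uri to Task.init tells ClearML where to auto-upload model checkpoints:

```python
# Rough sketch of the change; bucket and names below are placeholders.
# output_uri redirects auto-logged model checkpoints to GCS instead of
# the default ClearML files server.
from clearml import Task

task = Task.init(
    project_name="examples",                       # placeholder
    task_name="tensorflow_mnist",                  # placeholder
    output_uri="gs://my-bucket/clearml-models",    # placeholder GCS path
)
```

(This assumes GCS credentials are already configured for the ClearML SDK, e.g. in clearml.conf.)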
If I want to use the open source one, can we do it?
AgitatedDove14 I already did that and it works; my tested command: manager.upload_file
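For anyone following along, the manual upload I tested looks roughly like this (the local path and bucket are placeholders, not the real ones):

```python
# Sketch of the manual upload path; file and bucket names are placeholders.
# StorageManager.upload_file returns the destination URL on success.
from clearml import StorageManager

remote_url = StorageManager.upload_file(
    local_file="model.ckpt",                      # placeholder local path
    remote_url="gs://my-bucket/ckpt/model.ckpt",  # placeholder GCS path
)
print(remote_url)
```

(Running this needs valid GCS credentials configured for the ClearML SDK.)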
ClearML version: 1.0.2
ClearML Server version: 1.0.0-93
Hello SuccessfulKoala55, I want to follow up on my question: is it possible to use Auth0 or SSO with the open source version?
Thanks AgitatedDove14, I missed that one.
Thanks AgitatedDove14,
I think so. Can we configure the timeout from the ClearML interface?
(I’m assuming the upload could take longer.)
The next question is about uploading the model artifact using cloud storage.
I’m trying to use Google Cloud Storage to store my model checkpoint, but it failed with the following errors:
2021-05-12 18:51:53,335 - clearml.storage - ERROR - Failed uploading: ('Connection aborted.', timeout('The write operation timed out'))
2021-05-12 18:51:53,335 - clearml.Task - INFO - Completed model upload to
2021-05-12 18:51:54,298 - clearml.Task - INFO - Finished uploading
it said the uploading proces...
Noted AgitatedDove14, so it’s likely a bandwidth issue. Let me try the suggestion from GitHub first. Thanks man!
Hi AgitatedDove14, any update on the GCS timeout bug?
No worries AgitatedDove14, thanks for helping me.
Just curious about the timeout: was it configured by ClearML or by GCS? Can we customize it?
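To make the failure mode concrete: if the storage client enforces a fixed per-request write timeout (google-cloud-storage’s default is commonly cited as 60 seconds; treat that number as an assumption here), then an upload of a given size fails whenever sustained bandwidth stays below size divided by timeout. A quick sketch:

```python
# Back-of-envelope sketch: minimum sustained upload speed needed to
# finish a file of size_mb megabytes before a timeout_s-second write
# timeout fires. The 60 s default below is an assumption, not confirmed.
def min_bandwidth_mbps(size_mb: float, timeout_s: float) -> float:
    """Minimum sustained upload speed (MB/s) needed to beat the timeout."""
    return size_mb / timeout_s

# The ~32 MB MNIST checkpoint from this thread, against an assumed 60 s timeout:
print(round(min_bandwidth_mbps(32, 60), 2))  # → 0.53 (MB/s)
```

So on a connection slower than roughly half a megabyte per second, a 32 MB checkpoint would trip that timeout, which matches the bandwidth explanation above.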
Thanks for confirming AgitatedDove14, is there any GitHub issue that I can follow?