So I have switched back to SSL to give the script another try - and it works with SSL now.
I have even tried it with big files - it still works.
SweetBadger76 thanks for lending a hand - I don't know what the issue was, but it works now.
Hi SweetBadger76
So - I have turned off SSL for minio and tried a test script for uploading those two artifacts.
The result is that it works - the files got uploaded to the bucket.
Although it took a long time to finish uploading, even though the files are less than 1 MB:

```
$ python3 test.py
ClearML Task: overwriting (reusing) task id=72e7c0b098e14197a9ffe82d7444337f
ClearML results page:
2022-06-10 14:14:00,894 - clearml.Task - INFO - Waiting to finish uploads
2022-06-10 14:14:11,888 - clearml.Task - INFO - Finished uploading
```
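For context, a minimal sketch of what such a test script might look like (the project name, bucket, endpoint, and file paths are assumptions, not taken from this thread):

```python
from clearml import Task

# Send artifacts to the minio bucket instead of the default files server.
# The s3:// endpoint and bucket below are placeholders - adjust to your setup.
task = Task.init(
    project_name="minio-test",
    task_name="artifact upload test",
    output_uri="s3://mydomain.com:9000/clearml-artifacts",
)

# Upload two small local files as artifacts; the paths are placeholders.
task.upload_artifact(name="artifact-1", artifact_object="file1.bin")
task.upload_artifact(name="artifact-2", artifact_object="file2.bin")

# Make sure background uploads complete before the script exits.
task.close()
```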
Hi David. Sorry, I got stuck with the agent in docker mode training on multiple GPUs. I will get that sorted and then finish the minio work.
hey Sergios
did you manage to get the files onto minio?
Sure, will do and get back to you - it's not that difficult.
Yes, I think it would be great if you could try running it with a simpler configuration.
Currently I have the following config for S3:
```
aws {
    s3 {
        # default, used for any bucket not specified below
        key: ""
        secret: ""
        region: ""

        credentials: [
            {
                # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
                host: "mydomain.com:9000"
                key: "minio"
                secret: "secret data"
                multipart: false
                secure: true
            }
        ]
    }
    boto3 {
        pool_connections: 512
        max_multipart_concurrency: 16
    }
}
```
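If that credentials block is picked up correctly, anything addressed to that host should authenticate automatically. A quick sanity check could be a direct upload via StorageManager; this is a sketch, and the bucket and file names are placeholders:

```python
from clearml import StorageManager

# Upload a local file straight to the minio bucket configured above.
# "test-bucket" and the local path are placeholders.
url = StorageManager.upload_file(
    local_file="some_local_file.txt",
    remote_url="s3://mydomain.com:9000/test-bucket/some_local_file.txt",
)
print(url)  # should print the remote s3:// URL on success
```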
Hi David,
In my case I have a remote minio with SSL enabled - do you want me to run a local one over HTTP to test whether everything works fine in that configuration?
Also, change this line of the conf file to false:

```
development {
    # Development-mode options

    # dev task reuse window
    task_reuse_time_window_in_hours: 72.0

    # Run VCS repository detection asynchronously
    vcs_repo_detect_async: true   # <== change this to false
}
```
OK, let's first make sure that your conf file is correct:
```
aws {
    s3 {
        key: "david"
        secret: "supersecret"
        use_credentials_chain: false

        credentials: [
            {
                # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
                host: "localhost:9000"
                key: "david"
                secret: "supersecret"
                multipart: false
                secure: false
            }
        ]
    }
}
```
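To confirm those credentials actually reach the local minio, one option is to query it directly with boto3 (the library clearml uses under the hood). This is just a sketch; the client arguments mirror the config above:

```python
import boto3

# Plain-HTTP connection to the local minio, matching secure: false above.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="david",
    aws_secret_access_key="supersecret",
)

# Listing buckets is enough to verify the endpoint and credentials work.
print(s3.list_buckets()["Buckets"])
```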
SweetBadger76 thanks for looking into this. Here's a screenshot that shows the files in ClearML that should be available in minio. I can see them in ClearML (this is what I refer to as ClearML metadata), but when I click the link it redirects me to minio and shows that the file is not there. Also, when I explore minio with the console, I don't see those files there. However, notebooks and datasets get uploaded just fine.
Hi GentleSwallow91,
I can't manage to reproduce the issue; it works fine for me with a local, docker-based minio image. The conf file has to be configured precisely, but it seems you did that correctly, because you aren't getting an access-denied error here. It is strange that it waits so long for the upload to finish. There is a wait_on_upload flag for upload_artifact; its default value should be False, but I would try setting it explicitly...
Also, I don't understand what you mean by "I can see files in the ClearML GUI in metadata but not in minio". Do you have previews? If so, you could check where they are stored via the browser's developer tools.
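For reference, a minimal sketch of passing that flag (the project, task, and file names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="minio-test", task_name="wait-on-upload test")

# Block until the artifact is actually stored instead of uploading in the
# background, so any upload error surfaces immediately.
task.upload_artifact(
    name="my-artifact",           # placeholder artifact name
    artifact_object="file1.bin",  # placeholder local file
    wait_on_upload=True,
)
```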
Hello Sergios,
We are working on reproducing your issue and will update you ASAP.
Good weekend to all! Any update on this? Thanks!
I am using:
WebApp: 1.5.0-186
Server: 1.5.0-186
API: 2.18
On client side:
clearml==1.4.1
clearml-agent==1.2.3
hi GentleSwallow91
Concerning the warning message, there is an entry in the FAQ. Here is the link:
https://clear.ml/docs/latest/docs/faq/#resource_monitoring
We are working on reproducing your issue
Hi GentleSwallow91 , What version of clearml are you using?