Is that being used as a dictionary key?
The exact error I am getting is:
line 1095, in output_uri
raise ValueError("Could not get access credentials for '{}' "
@<1538330703932952576:profile|ThickSeaurchin47> can you try the artifacts example:
None
and in this line do:
task = Task.init(project_name='examples', task_name='Artifacts example', output_uri=" ")
You might only see it when the upload is done
No error, just failure to upload it seems
And again thank you for the help with this.
>>> print(json.dumps(config_obj.get("sdk"), indent=2))
{
"storage": {
"cache": {
"default_base_dir": "~/.clearml/cache"
},
"direct_access": [
{
"url": "file://*"
}
]
},
"metrics": {
"file_history_size": 100,
"matplotlib_untitled_history_size": 100,
"images": {
"format": "JPEG",
"quality": 87,
"subsampling": 0
},
"tensorboard_single_series_per_graph": false
},
"network": {
"file_upload_retries": 3,
"metrics": {
"file_upload_threads": 4,
"file_upload_starvation_warning_sec": 120
},
"iteration": {
"max_retries_on_server_error": 5,
"retry_backoff_factor_sec": 10
}
},
"aws": {
"s3": {
"region": "",
"key": "*************",
"secret": "****************",
"use_credentials_chain": false,
"extra_args": {},
"credentials": [
{
"host": "***.***.**.***:9000",
"key": "********",
"secret": "*****************",
"multipart": false,
"secure": false
}
]
},
"boto3": {
"pool_connections": 512,
"max_multipart_concurrency": 16
}
},
"google": {
"storage": {}
},
"azure": {
"storage": {}
},
"log": {
"null_log_propagate": false,
"task_log_buffer_capacity": 66,
"disable_urllib3_info": true
},
"development": {
"task_reuse_time_window_in_hours": 72.0,
"vcs_repo_detect_async": true,
"store_uncommitted_code_diff": true,
"support_stopping": true,
"default_output_uri": "",
"force_analyze_entire_repo": false,
"suppress_update_message": false,
"detect_with_pip_freeze": false,
"log_os_environments": [],
"worker": {
"report_period_sec": 2,
"report_event_flush_threshold": 100,
"ping_period_sec": 30,
"log_stdout": true,
"console_cr_flush_period": 10,
"report_global_mem_used": false
}
},
"apply_environment": false,
"apply_files": false
}
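As a side note, once you have that dict you can sanity-check that an `aws.s3.credentials` entry actually matches your MinIO host. A minimal sketch with hypothetical values (the helper and the host are mine, not a ClearML API):

```python
# Hypothetical slice of the dict printed by json.dumps(config_obj.get("sdk"), ...)
conf = {
    "aws": {"s3": {"credentials": [
        {"host": "10.0.0.1:9000", "key": "k", "secret": "s",
         "multipart": False, "secure": False}
    ]}}
}

def find_credentials(conf, host):
    # Return the credentials entry whose host matches, or None
    for entry in conf["aws"]["s3"].get("credentials", []):
        if entry.get("host") == host:
            return entry
    return None

print(find_credentials(conf, "10.0.0.1:9000") is not None)  # True
print(find_credentials(conf, "other-host:9000") is not None)  # False
```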
Yes I will give it a try and get back to you.
Nevermind, 😆 as you thought, seems I was just needing a few more f5s on my dashboard. Thank you very much for this
clearml python version: 1.9.1
could you upgrade to 1.9.3 and try?
Minio is on the same server and the 9000 and 9001 ports are open for tcp
just to be clear, the machine that runs your clearml code can in fact access the minio on port 9000 ?
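If you want to verify that from code rather than with curl, a plain TCP connect is enough. A minimal sketch (host and port are placeholders for your MinIO endpoint):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    # Plain TCP connect: True if something is listening on host:port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with the actual MinIO host/port
print(can_reach("127.0.0.1", 9000))
```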
I tested with the latest and everything seems to work as expected.
BTW: regarding "bucket-name", make sure it complies with the S3 naming standard; as a test, try changing it to just "bucket" with no hyphens
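For a quick offline check of the naming rules, something like this rough validator works (it covers the length and character rules only, not every edge case such as IP-formatted names):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Rough check against the S3 bucket-naming rules:
    3-63 chars, lowercase letters/digits/hyphens/dots,
    must start and end with a letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

print(is_valid_bucket_name("bucket"))       # True
print(is_valid_bucket_name("Bucket_Name"))  # False: uppercase and underscore
```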
This happens after the script completes, on the client end
I am able to capture clearml experiments on the clearml server running on the same machine as the minio.
This error is thrown by a failed `.get()` call on the StorageHandler object. I looked at the `__dict__.keys()` list of the StorageHandler, and I don't see any way to access the dictionary directly.
First let's verify the conf:
from clearml.config import config_obj
import json
print(json.dumps(config_obj.get("sdk"), indent=2))
what are you getting?
Hi @<1538330703932952576:profile|ThickSeaurchin47>
Specifically I’m getting the error “could not access credentials”
Put your minio credentials here:
None
Bear with the spacing, I OCR'd this. The quotes and spacing are right
credentials: [
    # specifies key/secret credentials to use when handli…
    {
        # This will apply to all buckets in this host (…
        host: "...:9000"
        key: "*********"
        secret: "******************"
        multipart: false
        secure: false
    }
]
}
boto3 {
    pool_connections: 512
    max_multipart_concurrency: 16
I’ll put in the actual copy paste later tonight thank you for the help
Can you test with the credentials also in the global section
None
key: "************"
secret: "********************"
Also what's the clearml python package version
It gave no import error, and I'm still having problems. I went back to my original script; it shows some file-transfer print statements, but I don't see the files appearing in MinIO
odd message though ... it should have said something about boto3
with ?
multipart: false
secure: false
If so, can you post here your aws.s3 section of the clearml.conf? (of course replacing the actual sensitive information with *s)
Hold on, should host be `s3://ipaddr:9000`?
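If I remember the ClearML convention correctly (worth double-checking against the docs), the `host` entry in clearml.conf takes a bare host:port with no scheme, and the `s3://` scheme only appears in `output_uri`. Roughly, with hypothetical values:

```
# clearml.conf (values are placeholders)
aws {
    s3 {
        credentials: [
            {
                host: "10.0.0.1:9000"   # bare host:port, no s3:// scheme here
                key: "..."
                secret: "..."
                multipart: false
                secure: false
            }
        ]
    }
}

# while in code:
# output_uri="s3://10.0.0.1:9000/bucket"
```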
I placed the same key and secret in the global locations under s3{ } and this did not change anything
I upgraded to 1.9.3 and that didn’t change my error.
I created a new bucket with the name testbucket which didn’t change anything (I only updated this name in the output_uri parameter)
I tried curl on the minio:9000 which returns some html with AccessDenied as content
I tried curl on minio:9001 which returns the minio console html
with the filepath leading to /clearml/task.py
I discovered part of the problem. I did not have boto3 installed on this conda env.
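To catch that kind of missing-dependency issue up front next time, a quick importability check before running helps (generic helper, the name is mine):

```python
import importlib.util

def has_package(name: str) -> bool:
    # True if the package is importable in the current environment
    return importlib.util.find_spec(name) is not None

print(has_package("json"))   # stdlib, always True
print(has_package("boto3"))  # False here means ClearML can't talk to S3/MinIO
```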