Answered

I'm trying to use MinIO with ClearML as external storage. I am having problems with the configuration file for the ClearML client.

When I use the output_uri parameter of Task.init what do I put there?

I am currently doing Task.init(… output_uri="s3://ipaddr:9000/bucket-name")

Specifically I’m getting the error “could not access credentials”

I have the default configuration file barring this change

Inside aws -> s3 -> credentials:

I replaced host with ipaddr:port, set key and secret to their appropriate values, and set secure: true.
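As a side note on how the two settings relate (the function name and the IP below are mine, for illustration only): the host entry in aws.s3.credentials should be the bare ipaddr:port from the output_uri, with no s3:// scheme and no bucket, while the bucket only appears in the URI itself. A stdlib sketch of that split:

```python
from urllib.parse import urlparse

def split_output_uri(output_uri: str):
    """Split an s3://host:port/bucket output_uri into the netloc that
    belongs in the clearml.conf credentials 'host' field and the bucket."""
    parsed = urlparse(output_uri)
    if parsed.scheme != "s3":
        raise ValueError("output_uri should start with s3://")
    # conf 'host' is the netloc only: no scheme, no bucket path
    return parsed.netloc, parsed.path.lstrip("/")

# e.g. for a hypothetical MinIO at 192.168.1.10:9000
host, bucket = split_output_uri("s3://192.168.1.10:9000/bucket-name")
# host -> "192.168.1.10:9000", bucket -> "bucket-name"
```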

  
  
Posted one year ago

Answers 31


I upgraded to 1.9.3 and that didn’t change my error.

I created a new bucket with the name testbucket which didn’t change anything (I only updated this name in the output_uri parameter)

I tried curl on minio:9000, which returns some HTML with AccessDenied as content

I tried curl on minio:9001, which returns the MinIO console HTML
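The curl checks above can also be done from Python; a rough stdlib equivalent (the helper name and IP are mine) that only tests whether a TCP connection to the port can be opened at all:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Rough equivalent of the curl reachability test: can we open
    a TCP connection to host:port from this machine?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.1.10", 9000)  # MinIO S3 API
#      port_open("192.168.1.10", 9001)  # MinIO console
```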

  
  
Posted one year ago

Hi @<1538330703932952576:profile|ThickSeaurchin47>

Specifically I’m getting the error “could not access credentials”

Put your minio credentials here:
None

  
  
Posted one year ago

I’ll put in the actual copy paste later tonight thank you for the help

  
  
Posted one year ago

and from minio
image

  
  
Posted one year ago

with the filepath leading to /clearml/task.py

  
  
Posted one year ago

First let's verify the conf:

from clearml.config import config_obj
import json
print(json.dumps(config_obj.get("sdk"), indent=2))

what are you getting?
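When pasting that output here, a small helper (my own sketch, not part of clearml) can mask the sensitive fields first:

```python
def mask_secrets(node, sensitive=("key", "secret")):
    """Recursively replace values of sensitive keys with *s so a
    config dump can be shared safely."""
    if isinstance(node, dict):
        return {
            k: "*" * 8 if k in sensitive and isinstance(v, str) and v
            else mask_secrets(v, sensitive)
            for k, v in node.items()
        }
    if isinstance(node, list):
        return [mask_secrets(item, sensitive) for item in node]
    return node

# e.g. print(json.dumps(mask_secrets(config_obj.get("sdk")), indent=2))
```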

  
  
Posted one year ago

No error, just failure to upload it seems
image

  
  
Posted one year ago

This error is thrown by a failed .get() call on the StorageHandler object. I looked at the .__dict__.keys() attribute list of the StorageHandler, and I don't see any way to access the dictionary directly.

  
  
Posted one year ago

Yes I will give it a try and get back to you.

  
  
Posted one year ago

odd message though ... it should have said something about boto3

  
  
Posted one year ago

This is after script completion, on the client end

  
  
Posted one year ago

The exact error I am getting is:

 line 1095, in output_uri
    raise ValueError("Could not get access credentials for '{}' "
  
  
Posted one year ago

Yes that is where they are

  
  
Posted one year ago

And to show it off
image

  
  
Posted one year ago

And again thank you for the help with this.

  
  
Posted one year ago

clearml python version: 1.91

could you upgrade to 1.9.3 and try?

Minio is on the same server and the 9000 and 9001 ports are open for tcp

just to be clear, the machine that runs your clearml code can in fact access the minio on port 9000 ?

I tested with the latest and everything seems to work as expected.
BTW: regarding "bucket-name", make sure it complies with the S3 standard; as a test, try changing it to just "bucket" without hyphens

  
  
Posted one year ago

Is that being used as a dictionary key?

  
  
Posted one year ago

You might only see it when the upload is done

  
  
Posted one year ago

credentials: [
    # specifies key/secret credentials to use when handli$
    {
        # This will apply to all buckets in this host ($
        host: "...:9000"
        key: "*********"
        secret: "******************"
        multipart: false
        secure: false
    }
]
}
boto3 {
    pool_connections: 512
    max_multipart_concurrency: 16

  
  
Posted one year ago

Bear with the spacing, I OCR'd this. The quotes and spacing are right

  
  
Posted one year ago

Hold on, should host be `s3://ipaddr:9000`?

  
  
Posted one year ago

It gave no import error, and I'm still having problems. I returned to my original script and it shows some file transfer print statements, but I don't see the files appearing in minio

  
  
Posted one year ago

Can you test with the credentials also in the global section
None

key: "************"
secret: "********************"

Also what's the clearml python package version

  
  
Posted one year ago

>>> print(json.dumps(config_obj.get("sdk"), indent=2))
{
  "storage": {
    "cache": {
      "default_base_dir": "~/.clearml/cache"
    },
    "direct_access": [
      {
        "url": "file://*"
      }
    ]
  },
  "metrics": {
    "file_history_size": 100,
    "matplotlib_untitled_history_size": 100,
    "images": {
      "format": "JPEG",
      "quality": 87,
      "subsampling": 0
    },
    "tensorboard_single_series_per_graph": false
  },
  "network": {
    "file_upload_retries": 3,
    "metrics": {
      "file_upload_threads": 4,
      "file_upload_starvation_warning_sec": 120
    },
    "iteration": {
      "max_retries_on_server_error": 5,
      "retry_backoff_factor_sec": 10
    }
  },
  "aws": {
    "s3": {
      "region": "",
      "key": "*************",
      "secret": "****************",
      "use_credentials_chain": false,
      "extra_args": {},
      "credentials": [
        {
          "host": "***.***.**.***:9000",
          "key": "********",
          "secret": "*****************",
          "multipart": false,
          "secure": false
        }
      ]
    },
    "boto3": {
      "pool_connections": 512,
      "max_multipart_concurrency": 16
    }
  },
  "google": {
    "storage": {}
  },
  "azure": {
    "storage": {}
  },
  "log": {
    "null_log_propagate": false,
    "task_log_buffer_capacity": 66,
    "disable_urllib3_info": true
  },
  "development": {
    "task_reuse_time_window_in_hours": 72.0,
    "vcs_repo_detect_async": true,
    "store_uncommitted_code_diff": true,
    "support_stopping": true,
    "default_output_uri": "",
    "force_analyze_entire_repo": false,
    "suppress_update_message": false,
    "detect_with_pip_freeze": false,
    "log_os_environments": [],
    "worker": {
      "report_period_sec": 2,
      "report_event_flush_threshold": 100,
      "ping_period_sec": 30,
      "log_stdout": true,
      "console_cr_flush_period": 10,
      "report_global_mem_used": false
    }
  },
  "apply_environment": false,
  "apply_files": false
}
  
  
Posted one year ago

with ?

                     multipart: false
                     secure: false

If so, can you post here your aws.s3 section of the clearml.conf? (of course replacing the actual sensitive information with *s)

  
  
Posted one year ago

I am able to capture clearml experiments on the clearml server running on the same machine as the minio.

  
  
Posted one year ago

@<1523701205467926528:profile|AgitatedDove14>
clearml python version: 1.91
python version: 3.9.15

the server is running the docker-compose on RHEL
Minio is on the same server and the 9000 and 9001 ports are open for tcp

I changed the default address space from 172.xxx.xxx.xx for docker to another space. This is not the issue as I can replicate this issue without this modified address space.

See configuration file below, I'm running the global section test now

    aws {
        s3 {
            # S3 credentials, used for read/write access by various SDK elements

            # The following settings will be used for any bucket not specified below in the "credentials" section
            # ---------------------------------------------------------------------------------------------------
            region: ""
            # Specify explicit keys
            key: ""
            secret: ""
            # Or enable credentials chain to let Boto3 pick the right credentials. 
            # This includes picking credentials from environment variables, 
            # credential file and IAM role using metadata service. 
            # Refer to the latest Boto3 docs
            use_credentials_chain: false
            # Additional ExtraArgs passed to boto3 when uploading files. Can also be set per-bucket under "credentials".
            extra_args: {}
            # ---------------------------------------------------------------------------------------------------


            credentials: [
                # specifies key/secret credentials to use when handling s3 urls (read or write)
                {
                #     # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
                     host: "***.***.**.***:9000"
                     key: "****************"
                     secret: "********************************"
                     multipart: false
                     secure: false
                }
            ]
        }
        boto3 {
            pool_connections: 512
            max_multipart_concurrency: 16
        }
    }
  
  
Posted one year ago

I placed the same key and secret in the global locations under s3{ } and this did not change anything

  
  
Posted one year ago

I discovered part of the problem. I did not have boto3 installed in this conda env.
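A quick stdlib way to catch that up front (the helper name is mine), since clearml needs boto3 for any s3:// storage:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Check whether a package (e.g. boto3, required by clearml for
    s3:// storage) is importable in the current environment."""
    return importlib.util.find_spec(name) is not None

# e.g. has_module("boto3") -> False in an env where s3:// uploads will fail
```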

  
  
Posted one year ago

Never mind 😆, as you thought, it seems I just needed a few more F5s on my dashboard. Thank you very much for this

  
  
Posted one year ago