Answered
Hi, I just started using ClearML and I love to test it. I am trying to update existing models that were already created before. When I upload my custom model using clearml.storage, I got the error below.

Hi, I just started using ClearML and I love testing it.
I am trying to update existing models that were already created before.

When I upload my custom model using clearml.storage, I get the error below.
clearml.storage - ERROR - Failed creating storage object None Reason: Missing key and secret for S3 storage access ( None )
Below is my code.

from clearml import Task, OutputModel

task = Task.init(project_name="test_model_update",
                 task_name="myTask")
output_model = OutputModel(task=task)

# S3 destination URI, e.g. "s3://<bucket>/<path>" (actual value redacted here)
models_upload_destination = ""
# models_upload_destination = ""
output_model.set_upload_destination(uri=models_upload_destination)
output_model.update_weights(
    upload_uri=models_upload_destination,
    weights_filename='model_best.pth',
    auto_delete_file=False
)

I have already put my AWS S3 credentials into ~/clearml.conf on the clearml-agent server.

    aws {
        s3 {
            # S3 credentials, used for read/write access by various SDK elements

            # The following settings will be used for any bucket not specified below in the "credentials" section
            # ---------------------------------------------------------------------------------------------------
            region: "XXXX"
            # Specify explicit keys
            key: "XXXX"
            secret: "XXXXX"
        }
    }

Is there anything I can do to solve the above issue?

  
  
Posted one year ago

Answers 16


@<1523701087100473344:profile|SuccessfulKoala55> I changed the S3 bucket name ( None ), but I still get the same error as above.

  
  
Posted one year ago

After restarting the docker-compose, another error appeared:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[6], line 1
----> 1 StorageManager.list("")

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/storage/manager.py:452, in StorageManager.list(cls, remote_url, return_full_path, with_metadata)
    430 @classmethod
    431 def list(cls, remote_url, return_full_path=False, with_metadata=False):
    432     # type: (str, bool, bool) -> Optional[List[Union[str, dict]]]
    433     """
    434     Return a list of object names inside the base path or dictionaries containing the corresponding
    435     objects' metadata (in case `with_metadata` is True)
   (...)
    450         None in case of list operation is not supported (http and https protocols for example)
    451     """
--> 452     helper = StorageHelper.get(remote_url)
    453     try:
    454         helper_list_result = helper.list(prefix=remote_url, with_metadata=with_metadata)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/storage/helper.py:256, in StorageHelper.get(cls, url, logger, **kwargs)
    253 url = cls._canonize_url(url)
    255 # Get the credentials we should use for this url
--> 256 base_url = cls._resolve_base_url(url)
    258 instance_key = '%s_%s' % (base_url, threading.current_thread().ident or 0)
    259 # noinspection PyBroadException

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/storage/helper.py:1113, in StorageHelper._resolve_base_url(cls, base_url)
   1111 parsed = urlparse(base_url)
   1112 if parsed.scheme == _Boto3Driver.scheme:
-> 1113     conf = cls._s3_configurations.get_config_by_uri(base_url)
   1114     bucket = conf.bucket
   1115     if not bucket:

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/backend_config/bucket_config.py:218, in S3BucketConfigurations.get_config_by_uri(self, uri)
    215     except StopIteration:
    216         return None
--> 218 match = find_match(uri)
    220 if match:
    221     return match

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/backend_config/bucket_config.py:205, in S3BucketConfigurations.get_config_by_uri.<locals>.find_match(uri)
    204 def find_match(uri):
--> 205     self._update_prefixes(refresh=False)
    206     uri = uri.lower()
    207     res = (
    208         config
    209         for config, prefix in self._prefixes
    210         if prefix is not None and uri.startswith(prefix)
    211     )

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/backend_config/bucket_config.py:92, in BaseBucketConfigurations._update_prefixes(self, refresh)
     87     return
     88 prefixes = (
     89     (config, self._get_prefix_from_bucket_config(config))
     90     for config in self._buckets
     91 )
---> 92 self._prefixes = sorted(prefixes, key=itemgetter(1), reverse=True)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/backend_config/bucket_config.py:89, in <genexpr>(.0)
     86 if self._prefixes and not refresh:
     87     return
     88 prefixes = (
---> 89     (config, self._get_prefix_from_bucket_config(config))
     90     for config in self._buckets
     91 )
     92 self._prefixes = sorted(prefixes, key=itemgetter(1), reverse=True)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/backend_config/bucket_config.py:193, in S3BucketConfigurations._get_prefix_from_bucket_config(self, config)
    191     bucket = prefix.path.segments[0]
    192     prefix.path.segments.pop(0)
--> 193     prefix.set(netloc=bucket)
    195 return str(prefix)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/furl/furl.py:1721, in furl.set(self, args, path, fragment, query, scheme, username, password, host, port, netloc, origin, query_params, fragment_path, fragment_args, fragment_separator)
   1718     self.password = password
   1719 if netloc is not _absent:
   1720     # Raises ValueError on invalid port or malformed IP.
-> 1721     self.netloc = netloc
   1722 if origin is not _absent:
   1723     # Raises ValueError on invalid port or malformed IP.
   1724     self.origin = origin

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/furl/furl.py:1889, in furl.__setattr__(self, attr, value)
   1885 def __setattr__(self, attr, value):
   1886     if (not PathCompositionInterface.__setattr__(self, attr, value) and
   1887        not QueryCompositionInterface.__setattr__(self, attr, value) and
   1888        not FragmentCompositionInterface.__setattr__(self, attr, value)):
-> 1889         object.__setattr__(self, attr, value)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/furl/furl.py:1534, in furl.netloc(self, netloc)
   1529     host = netloc
   1531 # Avoid side effects by assigning self.port before self.host so
   1532 # that if an exception is raised when assigning self.port,
   1533 # self.host isn't updated.
-> 1534 self.port = port  # Raises ValueError on invalid port.
   1535 self.host = host
   1536 self.username = None if username is None else unquote(username)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/furl/furl.py:1889, in furl.__setattr__(self, attr, value)
   1885 def __setattr__(self, attr, value):
   1886     if (not PathCompositionInterface.__setattr__(self, attr, value) and
   1887        not QueryCompositionInterface.__setattr__(self, attr, value) and
   1888        not FragmentCompositionInterface.__setattr__(self, attr, value)):
-> 1889         object.__setattr__(self, attr, value)

File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/furl/furl.py:1476, in furl.port(self, port)
   1474     self._port = int(str(port))
   1475 else:
-> 1476     raise ValueError("Invalid port '%s'." % port)

ValueError: Invalid port ''.
  
  
Posted one year ago

So where are the key/secret in the sdk.aws.s3... section?

  
  
Posted one year ago

This is my client side's clearml.conf file.
I think it is almost identical to the agent's clearml.conf file.

  
  
Posted one year ago

Thank you for your advice @<1523701087100473344:profile|SuccessfulKoala55>
I really appreciate it.

  
  
Posted one year ago

You can use the CLEARML_CONFIG_FILE environment variable - the agent will read it and use the path there to load the file, instead of using the default location (please note that you should pass it just to the agent, since the SDK will also read this env var if it is set system-wide)
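
For example, a minimal sketch of launching the agent with a project-local config file (the path below is hypothetical):

    # hypothetical path - any location the agent can read works
    CLEARML_CONFIG_FILE=/path/to/my_project/clearml.conf clearml-agent daemon --queue default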

  
  
Posted one year ago

ah alright, thanks!

  
  
Posted one year ago

This seems to be some error in the configuration... Can you attach a more complete example of your clearml.conf file?

  
  
Posted one year ago

here is a full clearml.conf file

  
  
Posted one year ago

Oh, I realize the misunderstanding now...

    I already put my aws s3 credential to ~/clearml.conf in clearml-agent server.

This setting should be on the client side (i.e. in your local clearml.conf file, if you're running the code locally, or in the agent's clearml.conf file, if you're using an agent)...

  
  
Posted one year ago

here is the key/secret


sdk {
    aws {
        s3 {
            region: "ap-northeast-2"
            use_credentials_chain: false
            extra_args: {}
            credentials: [
                {
                    bucket: "
"
                    key: "S3_KEY"
                    secret: "S3_SECRET"
                }
            ]
        }
    }
}
  
  
Posted one year ago

And the clearml.conf file you just shared does not contain it

  
  
Posted one year ago

bucket should be just a bucket name, not the complete URI
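
For instance, a minimal sketch with a hypothetical bucket name (keeping the rest of the entry as posted above):

    credentials: [
        {
            bucket: "my-clearml-models"   # bucket name only - no s3:// prefix or path
            key: "S3_KEY"
            secret: "S3_SECRET"
        }
    ]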

  
  
Posted one year ago

In Windows, the clearml.conf would be located in the user folder. Is there any way to configure it to be moved inside the project's folder, or maybe configure it using the CLI?

  
  
Posted one year ago

Thank you for your advice!
I will switch to an S3 bucket that does not have a dot in its name and try again.

  
  
Posted one year ago

Hi @<1566959357147484160:profile|LazyCat94> , can you perhaps try with a bucket that does not have a dot ( . ) in its name? This is not recommended according to AWS guidelines ( None , even though it is allowed), and it's possible our parsing did not take that into account.
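
As a quick sanity check after changing the bucket, listing the destination with StorageManager should return object names instead of raising the port error (a minimal sketch; the bucket name below is hypothetical):

    from clearml import StorageManager

    # hypothetical dot-free bucket
    print(StorageManager.list("s3://my-clearml-models/models/"))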

  
  
Posted one year ago
Tags
aws