Answered
Hi, We Recently Upgraded Clearml To 1.1.1-135 . 1.1.1 . 2.14. The Task Init Is

Hi, we recently upgraded ClearML to 1.1.1-135 / 1.1.1 / 2.14.
The task init is:
task = Task.init(project_name='myproject', task_name='mytask', output_uri='s3://ecs.ai/clearml-models/artifact')
The trained model is saved in the right bucket and folder, but when I try to retrieve it by clicking on it via the artifacts panel -> Output Model -> model name, ClearML wrongly recognises http://ecs.ai as the bucket and points me to AWS S3 instead. This didn't use to be the case. Any advice here?
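A quick way to see why the UI falls back to AWS here: without an explicit port, nothing in the URI marks ecs.ai as a server rather than a bucket name. A minimal illustration with Python's urllib (this mirrors the symptom, not ClearML's actual parsing code):

```python
from urllib.parse import urlparse

# output_uri as originally passed: no port on the host
parts = urlparse("s3://ecs.ai/clearml-models/artifact")
print(parts.netloc)  # 'ecs.ai' -- ambiguous: server name or AWS bucket name?
print(parts.path)    # '/clearml-models/artifact'

# With an explicit port, the host is unambiguously a server endpoint
parts = urlparse("s3://ecs.ai:80/clearml-models/artifact")
print(parts.hostname, parts.port)  # ecs.ai 80
```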

  
  
Posted 2 years ago

Answers 16


Hi,
I'm running on a Dell ECS storage appliance, which offers S3 compatibility.
Yes, http://ecs.ai is the DNS name of the server.
clearml-models is the bucket.
Let me try with ip:port.

  
  
Posted 2 years ago

Hi SubstantialElk6, I have a few questions:
Are you running MinIO or some other service to store the data? Is http://ecs.ai the URL of the server, and clearml-models the bucket? Can you check if accessing by ip:port instead of URL works?

  
  
Posted 2 years ago

Hi TimelyPenguin76 ,

If you notice in the last screenshot, it states the bucket name to be http://ecs.ai. It then tries to open http://s3.amazonaws.com/ecs.ai/clearml-models/artifact/uploading_file?X-Amz-Algorithm= ....

  
  
Posted 2 years ago

No, I can't see the files. But I can see them if I don't use ':port' in the URL when uploading. I can't access the machine today; I'll try to check the S3 logs when I'm back.

  
  
Posted 2 years ago

Hi SubstantialElk6 , was this behaviour different in previous versions? If so, in which version?

  
  
Posted 2 years ago

Hi SubstantialElk6 ,

Can you add a screenshot of it? What do you have as the MODEL URL?

  
  
Posted 2 years ago

Hi, when I tried ip:port, it references the right host and bucket... BUT... the file is not found on the ECS S3, even though I can see from the logs that it states Completed model upload to s3://ecs.ai:80/clearml-models/artifacts/ ...

  
  
Posted 2 years ago

Hi. Yup, the model was not physically uploaded with the ip:port into the bucket, although ClearML does indicate that it's there, except that I can't download it. I also verified this with another S3 client; the model was not there either.

  
  
Posted 2 years ago

SubstantialElk6 - can you please check the S3 configuration in your clearml.conf? Make sure it has the following:
host: "ecs.ai:80"
key: "your key"
secret: "your secret"
multipart: false
secure: false
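For context, these keys normally sit in a per-bucket entry under sdk.aws.s3.credentials in clearml.conf; a sketch with placeholder values (adjust bucket name and credentials to your setup):

```
sdk {
  aws {
    s3 {
      credentials: [
        {
          # Non-AWS endpoint: include the port, disable TLS/multipart as needed
          host: "ecs.ai:80"
          bucket: "clearml-models"
          key: "your key"
          secret: "your secret"
          multipart: false
          secure: false
        }
      ]
    }
  }
}
```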

  
  
Posted 2 years ago

SubstantialElk6 - Not sure I understood. When you set output_uri='s3://ecs.ai:80/clearml-models/artifact', is the model not uploaded to the correct bucket?

  
  
Posted 2 years ago

I didn't track which version this behaviour changed in. But the last time I tried, it was able to download the content after I provided the credentials.

  
  
Posted 2 years ago

I'm asking since, as far as I know, there was no change in the WebApp S3 driver from previous versions...

  
  
Posted 2 years ago

This is strange then. Is it possible for ClearML logs to register a successful save into S3 storage when actually there isn't one? For example, I've seen in past experience a certain S3 client that saved onto a local folder called 's3:/' instead of putting the data on the S3 storage itself.

  
  
Posted 2 years ago

Well, a bug is always possible, but we haven't seen that kind of behaviour reported. As for storing in another place, that's only if the prefix somehow starts with a slash, and we know it doesn't, since you can see that in the registered URL.

  
  
Posted 2 years ago

The port (:80 in your case) is required, since otherwise both the SDK and the UI assume this is an AWS S3 bucket.
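As a rough sketch of that rule (a hypothetical helper, not ClearML's actual implementation): a host with no explicit port is taken to be an AWS bucket name, while host:port is treated as a custom S3-compatible endpoint.

```python
from urllib.parse import urlparse

def assumed_aws_bucket(uri: str) -> bool:
    """Hypothetical sketch of the rule described above: no explicit
    port on the host -> the host is assumed to be an AWS S3 bucket."""
    return urlparse(uri).port is None

print(assumed_aws_bucket("s3://ecs.ai/clearml-models/artifact"))     # read as AWS bucket 'ecs.ai'
print(assumed_aws_bucket("s3://ecs.ai:80/clearml-models/artifact"))  # read as custom endpoint ecs.ai:80
```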

  
  
Posted 2 years ago

Can you not see the files using the ECS files browser?

  
  
Posted 2 years ago