Answered
Hi everyone, I'm getting an error during model upload to S3. The error shows up in the console like below and I don't see any uploaded objects in S3:
2022-10-10 14:39:35,481 - clearml.storage - ERROR - Failed uploading: cannot schedule new futures after interpreter shutdown
2022-10-10 14:39:35,481 - clearml.storage - ERROR - Failed uploading: cannot schedule new futures after interpreter shutdown
2022-10-10 14:39:35,482 - clearml.storage - ERROR - Failed uploading: cannot schedule new futures after interpreter shutdown
2022-10-10 14:39:35,482 - clearml.storage - ERROR - Exception encountered while uploading Upload failed

I have set up the task as

task = Task.init(
    project_name="ClearML Demo",
    task_name="FashionMNIST local",
    output_uri=" ",
)

and entered the credentials into my local clearml.conf. Also verified that upload to that bucket is possible using normal boto3.
Any help is appreciated - or a corresponding example would work just as well 🙏
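For reference, the S3 credentials in my local clearml.conf are set up roughly like this (bucket name and keys below are placeholders, not the real values):

sdk {
    aws {
        s3 {
            credentials: [
                {
                    bucket: "my-bucket"
                    key: "my-access-key"
                    secret: "my-secret-key"
                }
            ]
        }
    }
}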

  
  
Posted 2 years ago

Answers 7


Ok so actually if I run task.flush(wait_for_uploads=True) at the end of the script it works ✔
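For anyone hitting the same thing, a rough sketch of what the end of the script looks like now (the S3 path is a placeholder, not my real bucket):

from clearml import Task

task = Task.init(
    project_name="ClearML Demo",
    task_name="FashionMNIST local",
    output_uri="s3://my-bucket/models",  # placeholder bucket/path
)

# ... training and model saving ...

# wait for all pending model/artifact uploads to finish before the script exits
task.flush(wait_for_uploads=True)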

  
  
Posted 2 years ago

Out of curiosity, if task.flush worked, when did you get the error, at the end of the process?

  
  
Posted 2 years ago

So without the flush I got the error apparently at the very end of the script -

Yes... it's a Python thing: during interpreter shutdown, background threads can get killed in an arbitrary order, so when something still needs a background thread that has already died you get this error, which basically means the remaining work has to be done in the calling thread.
This also explains why calling flush solved the issue.
Nice!
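For anyone curious, a minimal sketch of the Python behaviour described above (plain Python, not ClearML code; on Python 3.9+ this fails at interpreter exit with the same "cannot schedule new futures after interpreter shutdown" message):

import atexit
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor()

def late_work():
    # By the time atexit handlers run, the executor's worker machinery has
    # already been shut down, so this raises:
    # RuntimeError: cannot schedule new futures after interpreter shutdown
    executor.submit(print, "uploading...")

atexit.register(late_work)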

  
  
Posted 2 years ago

πŸ‘ πŸ‘

  
  
Posted 2 years ago

So without the flush I got the error apparently at the very end of the script, after all commands of my actual Python code had run.

  
  
Posted 2 years ago

Yes, makes sense, it sounded like that from the start. Luckily, the task.flush(...) way seems to work for now 🙂

  
  
Posted 2 years ago

Hi ScantChimpanzee51
btw: this seems like an S3 internal error
https://github.com/boto/s3transfer/issues/197

  
  
Posted 2 years ago