Hello everyone. I have no idea why the clearml-serving inference server tries to get the model from that URL (pic 1), while in the ClearML UI I have the correct URL (pic 2). Could you help me with this?
Posted 2 years ago
Answers 4


ComfortableShark77 it seems clearml-serving is trying to upload data to a different server (not download the model).
I'm assuming this has to do with CLEARML_FILES_HOST and missing credentials. It has nothing to do with downloading the model (that, as you posted, will come from the S3 bucket).
Does that make sense?
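
With the stock docker-compose deployment, the upload target comes from CLEARML_FILES_HOST in the env file passed via --env-file. A minimal sketch of the relevant entries (all hostnames and keys below are placeholders, not your actual values):

```shell
# Relevant entries from the env file passed to docker-compose (placeholder values)
CLEARML_WEB_HOST="https://app.clear.ml"
CLEARML_API_HOST="https://api.clear.ml"
CLEARML_FILES_HOST="https://files.clear.ml"   # uploads (artifacts/data) go here
CLEARML_API_ACCESS_KEY="<access_key>"
CLEARML_API_SECRET_KEY="<secret_key>"
```

If CLEARML_FILES_HOST points at the wrong server, or the credentials are missing, uploads will target (or fail against) that address even though model downloads from the S3 bucket still work.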

  
  
Posted 2 years ago

For clearml-serving: docker-compose --env-file example.env -f docker-compose-triton-gpu.yml up

  
  
Posted 2 years ago

clearml-serving --id my_service_id model add --engine triton --endpoint "test_ocr_model" --preprocess "preprocess.py" --name "test-model" --project "clear-ml-test-serving-model" --input-size 1 3 384 384 --input-name "INPUT__0" --input-type float32 --output-size 1 -1 --output-name "OUTPUT__0" --output-type int32

  
  
Posted 2 years ago

Hi ComfortableShark77!

Which commands did you use exactly to deploy the model?

  
  
Posted 2 years ago