Unanswered
[ClearML Serving] Hi everyone! I am trying to automatically generate an online endpoint for inference when manually adding tag


Hi @<1523701205467926528:profile|AgitatedDove14>,

Thanks a lot for your quick reply! 🙏 In fact, I am more interested in reusing the same endpoint with the latest model version than in creating a new endpoint on tagging.

Your statement makes sense: it seems we still have to create an endpoint with model add before setting up automatic model deployment with model auto-update. This appears to work, since the "LINEAGE" section under my latest trained model gets updated with information about my running Serving Service less than a minute after I add the "released" tag to it.
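For reference, here is a sketch of the two CLI steps described above, adapted from the clearml-serving sklearn example; the service ID, model name, and project are placeholders, and the exact flags should be checked against your installed clearml-serving version:

```shell
# 1. Register a specific model version on a fixed endpoint:
clearml-serving --id <service_id> model add \
    --engine sklearn \
    --endpoint "test_model_sklearn" \
    --name "train sklearn model" \
    --project "serving examples"

# 2. Then enable automatic deployment of newly published/tagged
#    model versions on a versioned endpoint:
clearml-serving --id <service_id> model auto-update \
    --engine sklearn \
    --endpoint "test_model_sklearn_auto" \
    --name "train sklearn model" \
    --project "serving examples" \
    --max-versions 2
```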

However, I am confused about two points:

  • I notice that, in my Serving Service (located in the DevOps project), the "endpoints" section doesn't seem to get updated when I tag a new model with "released". In fact, the model_id is still that of the original model ( d53b2257... ), not that of the latest trained model.
  • The tutorials (for sklearn and PyTorch) explicitly say to perform inference against the versioned endpoint test_model_pytorch_auto/1, i.e. curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn_auto/1" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}', which doesn't work for me, instead of the unversioned endpoint test_model_pytorch, i.e. curl -X POST "http://127.0.0.1:8080/serve/test_model_pytorch" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}', which works, but I am not sure it is using the latest trained model...
    Can you confirm that the latest model is effectively used for inference even though the "endpoints" section still seems to be configured with the model_id of the original trained model ( d53b2257... )? (See screenshots below ⤵ )
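To compare the two endpoints side by side, the curl calls above can be reproduced with a small Python sketch; the base URL and endpoint names are taken from the tutorial commands, so adjust them to your own deployment:

```python
import json
import urllib.request

# Base URL and endpoint names as used in the tutorial curl commands.
BASE_URL = "http://127.0.0.1:8080/serve"


def build_request(endpoint: str, payload: dict):
    """Return (url, headers, body) equivalent to the curl commands above."""
    url = f"{BASE_URL}/{endpoint}"
    headers = {"accept": "application/json",
               "Content-Type": "application/json"}
    return url, headers, json.dumps(payload)


if __name__ == "__main__":
    # POST the same payload to both the unversioned and the versioned endpoint
    # and print whichever responses come back.
    for endpoint in ("test_model_pytorch", "test_model_pytorch_auto/1"):
        url, headers, body = build_request(endpoint, {"x0": 1, "x1": 2})
        req = urllib.request.Request(
            url, data=body.encode(), headers=headers, method="POST")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(endpoint, "->", resp.read().decode())
        except Exception as exc:
            print(endpoint, "failed:", exc)
```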

Thank you again for your support.

Best regards!
[screenshots]

  
  
Posted 5 months ago
0 Answers