PlainSealion45 (Moderator)
1 Question, 8 Answers
Active since 17 November 2023; last activity 5 months ago
Badges: 8 × Eureka!

0 Votes, 14 Answers, 309 Views
[ClearML Serving] Hi everyone! I am trying to automatically generate an online endpoint for inference when manually adding the tag "released" to a model. For this ...
5 months ago
0 [Clearml Serving] Hi Everyone! I Am Trying To Automatically Generate An Online Endpoint For Inference When Manually Adding Tag

Well, after testing, I observed two things:

  • When using automatic model deployment and training several models to which the tag "released" was added, the model_id in the "endpoints" section of the Serving Service persistently shows the ID of the initial model that was used to create the endpoint, and NOT that of the latest trained model (see first picture below ⤵). This may be the way it is implemented in ClearML, but it is a bit non-intuitive, since when using automa...
5 months ago

Hi @<1523701205467926528:profile|AgitatedDove14>,

Thanks a lot for your quick reply! 🙏 In fact, I am more interested in reusing the same endpoint with the latest model version than in actually creating a new endpoint on tagging.

Your statement makes sense: it seems we have to create an endpoint with model add anyway, prior to setting up automatic model deployment with model auto-update. This seems to work, since the "LINEAGE" section under my latest trained model gets updated with infor...
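The sequence discussed above (create a static endpoint first, then enable automatic deployment) can be sketched with the clearml-serving CLI. This is a minimal sketch, not the exact commands from the thread; the service ID placeholder, endpoint names, model name, and project are assumptions taken from the ClearML Serving PyTorch example and should be replaced with your own:

```shell
# 1. Create a static endpoint once with `model add`
#    (<service_id>, endpoint/model/project names are placeholders)
clearml-serving --id <service_id> model add \
    --engine triton \
    --endpoint "test_model_pytorch" \
    --name "train pytorch model" \
    --project "serving examples"

# 2. Enable automatic model deployment with `model auto-update`:
#    newly published models matching the query are then served
#    under the "model_monitoring_eps" section of the serving service
clearml-serving --id <service_id> model auto-update \
    --engine triton \
    --endpoint "test_model_pytorch_auto" \
    --name "train pytorch model" \
    --project "serving examples" \
    --max-versions 2
```

Note that the auto-updated endpoint gets its own name (here with an "_auto" suffix for illustration), which is why the original static endpoint keeps pointing at the first model.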

5 months ago

Thank you so much for your reply, Martin!
It's clear to me now.
Let's see if this works! I will try waiting those 5 minutes at the beginning of next week and let you know whether I obtain an updated endpoint with the model ID of the latest trained model!
Have a nice weekend!
Best regards.

5 months ago

Okay, so that's surely the reason why the model is not found. I will investigate that, thank you again for your insight! 🙏

5 months ago

Hi @<1523701205467926528:profile|AgitatedDove14>,

Just to verify which model is actually called by the endpoint when using model auto-update for automatic model deployment, I performed the following steps with the ClearML Serving PyTorch example:

  1. I modified the code of train_pytorch_mnist.py in the train function, setting target = torch.zeros(data.shape[0]).long(), in order for the model to bel...
5 months ago
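The train-function modification in step 1 can be sketched as follows. This is a minimal sketch, not the actual train_pytorch_mnist.py code; the train_step function and its signature are hypothetical, and only the target-override line is taken from the thread:

```python
import torch

def train_step(model, data, target, loss_fn):
    """One training step with the label override described above.

    Overwriting `target` with all-zero labels makes the model learn
    the constant class 0, so it is easy to tell from the served
    predictions whether the modified model is the one being called.
    """
    # Original labels are discarded; every sample is labelled class 0.
    target = torch.zeros(data.shape[0]).long()
    output = model(data)
    return loss_fn(output, target)
```

Training on these degenerate labels yields a model whose behaviour is trivially distinguishable from the original, which is the point of the verification.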

Hi @<1523701205467926528:profile|AgitatedDove14> !

Thank you again for coming back to me with your support!

  • ๐Ÿ‘ Thank you, I have noticed that (when using model auto-update ) different model versions (with their own model_id ) appear under "model_monitoring_eps" section of the serving service.
  • ๐Ÿ‘ It's now clear to me that I was always accessing the static endpoint (that I created with model add ) with my curl command.
    I retested automatic model deployment with you...
5 months ago
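The mix-up described above (querying the static endpoint instead of the auto-updated one) comes down to the URL used in the curl command. A sketch, assuming endpoint names and a JSON payload in the style of the ClearML Serving PyTorch example; host, port, names, and payload are assumptions to adapt to your setup:

```shell
# Static endpoint created with `model add`
curl -X POST "http://127.0.0.1:8080/serve/test_model_pytorch" \
     -H "Content-Type: application/json" \
     -d '{"url": "<image_url>"}'

# Auto-updated endpoint created with `model auto-update`:
# note the different endpoint name and the model-version suffix
curl -X POST "http://127.0.0.1:8080/serve/test_model_pytorch_auto/1" \
     -H "Content-Type: application/json" \
     -d '{"url": "<image_url>"}'
```

Sending the request to the first URL will always hit the initially registered model, regardless of how many newer models have been auto-deployed.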

Hi @<1523701205467926528:profile|AgitatedDove14>!
Thank you for having a look at this log file 🙏.
Indeed, the Triton backend was not able to load my model. I will investigate this issue, which is surely related to my own GPU backend (GeForce RTX 4070 Ti); I suppose the ClearML PyTorch example works for other users. I am not sure whether this is related to the fact that the model is not correctly converted to Tor...

5 months ago

Hi @<1523701205467926528:profile|AgitatedDove14>,

Of course! The output of the curl -X POST command is at least reassuring: it shows that the automatic endpoint works. As you say, the RPC error when sending a request seems to be returned from the GPU backend.

Nothing gets printed in the docker compose log when sending the curl -X POST, but beforehand the following log is displayed for the clearml-serving-triton container, with among others `WARNING: [Torch-TensorRT] - Unable to read CUDA capab...

5 months ago
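To dig further into backend errors like the Torch-TensorRT warning above, the Triton container's logs can be followed directly while reproducing the request. A sketch, assuming the clearml-serving-triton container name from the ClearML Serving docker-compose setup mentioned in the thread:

```shell
# Follow the Triton sidecar's logs while re-sending the curl request
docker logs -f clearml-serving-triton

# Or, via docker compose, from the directory holding the compose file
docker compose logs -f clearml-serving-triton
```

GPU-capability warnings at container startup often indicate a mismatch between the CUDA runtime in the image and the host driver, which is worth ruling out before debugging the model itself.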