ApprehensiveSeaturtle9
Moderator
4 Questions, 4 Answers
  Active since 27 March 2024
  Last activity 4 months ago

Reputation: 0
Badges: 1 (4 × Eureka!)

0 Votes · 0 Answers · 461 Views
Hi, on clearml-serving, where should I add CLEARML_TRITON_HELPER_ARGS? It does not seem to work in the example.env; should I rebuild the Triton Docker image?
7 months ago
0 Votes · 2 Answers · 544 Views
Hi! Is it useful to create both model monitoring and an endpoint? Is an endpoint not enough to use clearml-serving?
7 months ago
0 Votes · 3 Answers · 419 Views
Hello, how do you manage to unload a model from the clearml-serving API? I am trying to unload a model through gRPC via clearml-serving because the models are lo...
5 months ago
0 Votes · 5 Answers · 531 Views
Hello, about clearml-serving: I uploaded a model and a pre-processing script, and created an endpoint. I now want to remove these artifacts. Based on None , for the endp...
7 months ago
0 · Hello, how do you manage to unload a model from the clearml-serving API? I am trying to unload a model through gRPC via

Thank you for your answer. I added hundreds of models to the serving session, and when I send a POST request it loads the requested model to perform inference. I would like to be able to send a request to unload a model (because I cannot load all the models onto the GPU, only 7-8), or, as @<1690896098534625280:profile|NarrowWoodpecker99> suggests, add a timeout? Or unload models when GPU memory reaches a limit? Do you have a suggestion on how I could achieve that? Thanks!
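The "only 7-8 models fit on the GPU" constraint can be sketched client-side as an LRU eviction policy, independent of ClearML. This is a minimal illustration with hypothetical `load_fn`/`unload_fn` callbacks; in practice the unload callback could call Triton's gRPC `InferenceServerClient.unload_model`, which requires Triton to run with `--model-control-mode=explicit`:

```python
from collections import OrderedDict


class LRUModelPool:
    """Keep at most `capacity` models loaded; evict the least recently used."""

    def __init__(self, capacity, load_fn, unload_fn):
        self.capacity = capacity
        self.load_fn = load_fn      # e.g. lambda name: client.load_model(name)
        self.unload_fn = unload_fn  # e.g. lambda name: client.unload_model(name)
        self._loaded = OrderedDict()

    def acquire(self, name):
        """Ensure `name` is loaded, evicting the LRU model if the pool is full."""
        if name in self._loaded:
            self._loaded.move_to_end(name)  # mark as most recently used
            return
        if len(self._loaded) >= self.capacity:
            victim, _ = self._loaded.popitem(last=False)  # least recently used
            self.unload_fn(victim)
        self.load_fn(name)
        self._loaded[name] = True


# Usage with recording stubs instead of real Triton calls:
events = []
pool = LRUModelPool(
    capacity=2,
    load_fn=lambda n: events.append(("load", n)),
    unload_fn=lambda n: events.append(("unload", n)),
)
pool.acquire("a")
pool.acquire("b")
pool.acquire("a")  # already loaded; just refreshes recency
pool.acquire("c")  # pool full: "b" (least recently used) is unloaded first
```

A timeout-based variant (as suggested above) would work the same way, evicting on age instead of on capacity.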

4 months ago
0 · Hi! Is it useful to create both model monitoring and an endpoint? Is an endpoint not enough to use clearml-serving?

Thank you for your answer. For the moment I am calling request_processor.add_endpoint(...) to create an endpoint to be used with the Triton engine. Do I not need to call add_model_monitoring? What is the advantage of adding model monitoring instead of an endpoint?
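For reference, an equivalent Triton endpoint can also be registered through the clearml-serving CLI rather than by calling add_endpoint in code. This is a sketch from memory of the CLI's general shape; the service id, model id, endpoint name, and tensor names/shapes are placeholders, and flags may differ between versions:

```shell
# Register a Triton endpoint on an existing serving service (ids are placeholders)
clearml-serving --id <service_id> model add \
    --engine triton \
    --endpoint "my_model" \
    --model-id <model_id> \
    --preprocess "preprocess.py" \
    --input-name "INPUT__0" --input-type float32 --input-size 1 3 224 224 \
    --output-name "OUTPUT__0" --output-type float32 --output-size -1 1000
```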

7 months ago
0 · Hello, about clearml-serving: I uploaded a model, a pre-processing and created an endpoint. I now want to remove these artifacts. Based on

It does, but not the OutputModel and the preprocess artifact. I managed to do it by adding:

# remove the uploaded preprocess script attached to the serving task
if _task.artifacts.get(model_endpoint.preprocess_artifact):
    _task.delete_artifacts([model_endpoint.preprocess_artifact])
# remove the registered OutputModel itself
Model.remove(model_endpoint.model_id)

Maybe this should be added to the func_model_remove method?

7 months ago
0 · Hello, about clearml-serving: I uploaded a model, a pre-processing and created an endpoint. I now want to remove these artifacts. Based on

Hi, thank you for your answer. This command calls the func_model_remove method, which removes the endpoint, model_monitoring, and canary_endpoint, but it does not remove the OutputModel or the py_code_mymodel.py (preprocessing) artifact from the serving service.

7 months ago