Hi everyone, I wanted to inquire if it's possible to have some type of model unloading. I know there was a discussion here about it, but after reviewing it, I didn't find an answer. So, I am curious: is it possible to explicitly unload a model (by calling


Hi @<1523701205467926528:profile|AgitatedDove14> , thanks for answering, but that's not what I meant. Suppose I have three models that can't all be loaded into GPU memory simultaneously (there isn't enough GPU RAM for all of them at once). What I have in mind is this: is there an automatic way to unload a model (for example, if a model hasn't been run in the last 10 minutes, or something similar)? Or, if there is no such automatic method, can we manually unload a model from GPU memory to free up space for other models? (I know there is an endpoint for doing so in Triton, but I don't know whether it's possible to access that endpoint via ClearML.)
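For reference, the Triton endpoint mentioned above is the model-repository control API, which is only active when the server runs with `--model-control-mode=explicit`; whether ClearML Serving exposes it is exactly the open question here. A minimal sketch of calling it directly, assuming you can reach Triton's HTTP port (8000 by default) and using a hypothetical model name:

```python
from urllib.request import Request, urlopen


def repository_action_url(base_url: str, model_name: str, action: str) -> str:
    """Build a Triton model-repository control URL; action is 'load' or 'unload'."""
    return f"{base_url}/v2/repository/models/{model_name}/{action}"


def unload_model(base_url: str, model_name: str) -> bool:
    """Ask Triton to unload a model; returns True on HTTP 200.

    Requires Triton to have been started with --model-control-mode=explicit.
    """
    req = Request(
        repository_action_url(base_url, model_name, "unload"),
        data=b"",       # empty POST body
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.status == 200
```

For example, `unload_model("http://localhost:8000", "model_a")` would free model_a's GPU memory while leaving it in the repository, so a later `load` (or the equivalent request) can bring it back.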

I don't want it to be completely removed from my endpoints. Please suppose we have endpoint A; then the A model will be unloaded from memory. If we receive a request for A again, it will be loaded back into memory if there is enough space. If there isn't enough room, we can then assess which model to unload (suppose it is model B and we will unload it) to make room for model A.
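The policy described above (evict the least-recently-used model when a new one doesn't fit) is just LRU bookkeeping. This is a hypothetical illustration of the desired behavior, not existing ClearML functionality; `load_fn` and `unload_fn` stand in for whatever actually moves models on and off the GPU:

```python
from collections import OrderedDict


class LRUModelCache:
    """Keep at most `capacity` models resident; evict the least recently used."""

    def __init__(self, capacity, load_fn, unload_fn):
        self.capacity = capacity
        self.load_fn = load_fn      # brings a model into GPU memory, returns a handle
        self.unload_fn = unload_fn  # releases an evicted model's GPU memory
        self.resident = OrderedDict()  # model name -> handle, least recent first

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)  # mark as most recently used
            return self.resident[name]
        if len(self.resident) >= self.capacity:
            # Not enough room: evict the least-recently-used model.
            victim, handle = self.resident.popitem(last=False)
            self.unload_fn(victim, handle)
        self.resident[name] = self.load_fn(name)
        return self.resident[name]
```

With capacity 2, requesting A, B, A, C would evict B (the least recently used) to make room for C, which matches the A/B scenario above.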

For now, this is the behavior I observe: Suppose I have two models, A and B.

  • When ClearML is started, GPU memory usage is almost zero.
  • On the first request to endpoint A, Model A is loaded into GPU memory and remains there. At this point, Model B is not loaded.
  • If we then send a request to Model B, it is loaded into memory too. However, there is no way for me to unload Model A. Consequently, if there is a third model, say Model C, it can't be loaded because we have run out of memory.
  
  
Posted 10 months ago
113 Views
0 Answers