Problem with ClearML Serving Instance Cleanup

Hi team, I’m running into an issue with ClearML Serving. We deploy it with the Helm chart and serve our ML models via clearml-serving model add. The deployment itself works fine, and the models are served as expected.

However, I’ve observed the following problems when managing model removal:

  • Model Cleanup Issue: When I remove a model using clearml-serving model remove, the model data is not removed from ClearML Serving. Specifically, the temporary directory (/tmp) into which the inference container copies the model from /root/.clearml/cache is not cleaned up. These leftovers accumulate until the node's disk space is completely exhausted.
    Potential Impact:
  • Nodes run out of space due to uncleaned /tmp directories.
    Steps Taken:
  • Models added via clearml-serving model add and removed with clearml-serving model remove (roughly as sketched below).
    Looking for:
  • Solutions or workarounds to automatically clean up the /tmp folder after model removal.

Has anyone faced similar issues or found effective solutions to these challenges? Any tips or best practices would be greatly appreciated!
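
For reference, the add/remove flow looks roughly like this (the service ID, endpoint, model name, and project below are placeholders, and the exact flags may differ slightly from our production setup):

```bash
# Add a model endpoint to an existing clearml-serving service
# (service ID, endpoint, name, and project are placeholders)
clearml-serving --id <serving-service-id> model add \
    --engine triton \
    --endpoint "my_model" \
    --name "my model" \
    --project "My Project"

# Later, remove the same endpoint
clearml-serving --id <serving-service-id> model remove --endpoint "my_model"
```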
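
One blunt stopgap would be a periodic prune of stale entries under /tmp inside the inference container, something like the sketch below. I'm not sure this is safe, though, since I don't know the naming convention of the copied model directories or whether the serving engine still holds references to them, so I'd rather hear how others handle it:

```bash
# Hypothetical stopgap: prune top-level /tmp entries not modified in the last 24 hours.
# WARNING: assumes nothing else in the container relies on long-lived data under /tmp.
find /tmp -mindepth 1 -maxdepth 1 -mmin +1440 -exec rm -rf {} +
```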
  
  