Hi Everyone, I Have A Few Questions To Understand The Clearml-Serving A Little Better (And How Much Resources To Allocate For The Serving Pods)

Hi everyone,
I have a few questions to understand clearml-serving a little better (and how much resources to allocate for the serving pods): Are the models I defined to be served (e.g. via the CLI) downloaded to the serving pod, so that they are physically lying there as files I can see in the filesystem? Or does the pod fetch the model only when it is needed, i.e. when an API call for this model comes in?

  
  
Posted 10 months ago

Answers 4


Thanks again, I was able to locate the files. And it was indeed (as is so often the case with k8s) a general routing issue. After fixing this, everything works fine 🙂

  
  
Posted 10 months ago

Hi @<1649221394904387584:profile|RattySparrow90>

Are the models I defined to be served e.g. via the CLI downloaded to the serving pod?

Yes, this is done automatically and online (i.e. whenever you update them using the CLI/API), based on the models/endpoints you set
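For context, registering a model endpoint is done with the `clearml-serving` CLI; the serving pods pick the change up automatically. A minimal sketch (the service name, endpoint name, and model name/project here are placeholders, and `--engine sklearn` is just one example engine):

```shell
# create the serving service once; prints the service ID
clearml-serving create --name "serving example"

# register a model endpoint on that service (placeholder names)
clearml-serving --id <service_id> model add \
  --engine sklearn \
  --endpoint "test_model_sklearn" \
  --name "train sklearn model" \
  --project "serving examples"
```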

So that they are physically lying there as a file I can see in the filesystem?

They are, and cached there

Or is it more the case that the pod gets the model when needed/when an API call for this model is incoming?

It downloads and loads it when the endpoint is created/updated, but there is always some "warmup" that the first requests will trigger.

  
  
Posted 10 months ago

@<1523701205467926528:profile|AgitatedDove14> Thanks for the explanations. Where exactly are the model files stored on the pod? I was not able to find them.
The reason I ask is that the clearml-serving pod is up and running, and from its logs and the fileserver logs it seems that the model and the preprocessing code were loaded.
Currently I always get a 404 HTTP error when I try to access the model via the defaultBaseServeUrl + model endpoint, and I would like to track down whether it is a model-loading problem or whether the routing to the pod is not working correctly.

  
  
Posted 10 months ago

Where exactly are the model files stored on the pod?

clearml cache folder, usually under ~/.clearml
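To verify this on a running pod, you can exec into it and look inside the cache folder. A rough sketch (the pod name is hypothetical, and the exact cache subfolder layout may vary by clearml version):

```shell
# hypothetical pod name; substitute your actual serving pod
kubectl exec -it clearml-serving-inference-0 -- /bin/bash

# inside the pod: list anything the storage manager has downloaded
ls -lh ~/.clearml/cache/

# or search the whole cache for model artifacts by extension
find ~/.clearml -type f \( -name "*.pkl" -o -name "*.onnx" -o -name "*.bin" \) 2>/dev/null
```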

Currently I encounter the problem that I always get a 404 HTTP error when I try to access the model via the...

How are you deploying it? I would start by debugging and running everything in docker-compose (single machine): make sure everything works there first, and then deploy to the cluster
(because at the cluster level it could be a general routing issue, way before the request even gets to the actual pod)
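A sketch of that single-machine check, based on the clearml-serving repository's docker-compose setup (the endpoint name and JSON payload are placeholders matching the sklearn example above; `example.env` must hold your API credentials and serving service ID):

```shell
git clone https://github.com/allegroai/clearml-serving.git
cd clearml-serving/docker

# bring up the full serving stack locally
docker-compose --env-file example.env -f docker-compose.yml up -d

# hit the endpoint directly, bypassing any cluster ingress/routing;
# a 404 here would point at the model/endpoint, not k8s routing
curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" \
  -H "Content-Type: application/json" \
  -d '{"x0": 1, "x1": 2}'
```

If the local curl succeeds but the same request fails through the cluster's defaultBaseServeUrl, the problem is almost certainly the ingress/routing layer rather than model loading.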

  
  
Posted 10 months ago