Thanks again, I was able to locate the files. And it was indeed (as most of the time with k8s) a general routing issue. After fixing this everything works fine 🙂
Where exactly are the model files stored on the pod?
clearml cache folder, usually under ~/.clearml
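A quick way to verify the models are physically on the pod is to walk that cache folder. A minimal sketch, assuming the default `~/.clearml` cache root (a deployment can override this in `clearml.conf`, so treat the path as an assumption):

```python
# Hedged sketch: list files under the ClearML cache folder to confirm
# the model files were actually downloaded to the pod's filesystem.
# "~/.clearml" is the *default* cache root and may be configured differently.
from pathlib import Path


def list_cached_files(cache_root: str = "~/.clearml"):
    """Return all regular files under the cache root, or [] if it doesn't exist."""
    root = Path(cache_root).expanduser()
    if not root.is_dir():
        return []
    return sorted(str(p) for p in root.rglob("*") if p.is_file())


if __name__ == "__main__":
    for f in list_cached_files():
        print(f)
```

Run it inside the serving container (e.g. via `kubectl exec`) to see what was actually fetched.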
Currently I encounter the problem that I always get a 404 HTTP error when I try to access the model via the...
How are you deploying it? I would start by debugging and running everything in docker-compose (single machine), make sure you have everything working there, and then deploy to the cluster
(because at the cluster level it could be a general routing issue, way before the request ever reaches the actual pod)
@<1523701205467926528:profile|AgitatedDove14> Thanks for the explanations. Where exactly are the model files stored on the pod? I was not able to find them.
The reason I ask is that the clearml serving pod is up and running, and from its logs and the logs of the fileserver it seems that the model and the preprocessing code were loaded.
Currently I encounter the problem that I always get a 404 HTTP error when I try to access the model via the defaultBaseServeUrl + model endpoint, and I would like to track down whether it is a model-loading problem or whether the routing to the pod is not working correctly
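One way to separate the two cases is to check whether the 404 is actually produced by a server at all. A small probe sketch, using only the standard library (the URL below is whatever your defaultBaseServeUrl + endpoint resolves to; nothing here is ClearML-specific):

```python
# Hedged sketch: distinguish an application-level 404 (a server answered,
# so routing works but the endpoint/path does not) from a network-level
# failure (nothing answered, pointing at ingress/service routing).
import urllib.error
import urllib.request


def probe(url: str, timeout: float = 5.0):
    """Return ('http', status_code) if any server replied, ('network', reason) otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return ("http", resp.status)
    except urllib.error.HTTPError as exc:
        # The request reached a server: routing is fine, the endpoint is not.
        return ("http", exc.code)
    except (urllib.error.URLError, OSError) as exc:
        # No server replied at all: likely a routing/ingress problem.
        return ("network", str(exc))
```

If this returns `('http', 404)` the request is reaching *some* server, so the next question is whether it is the serving pod or the ingress default backend answering.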
Hi @<1649221394904387584:profile|RattySparrow90>
: Are the models I defined to be served e.g. via the CLI downloaded to the serving pod
Yes, this is done automatically and online (i.e. when you update it via the CLI/API), based on the models/endpoints you set
So that they are physically lying there as a file I can see in the filesystem?
They are, and cached there
Or is it more the case that the pod gets the model when needed/when an API call for this model is incoming?
It downloads and loads the model when the endpoint is created/updated, but there is always some "warmup" that the first requests trigger.