By the way, is it possible to decrease the log level of the API server and the file server? In the ClearML serving deployment a uvicorn log level environment variable can be set. Is there something similar available for the ClearML API and file server? I searched a bit in the code and did not really find a place where the log level is defined.
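For context, this is the kind of pattern I mean on the serving side – just a minimal sketch, assuming the server is started with uvicorn; the `UVICORN_LOG_LEVEL` variable name and the `my_app:app` module are illustrative placeholders, not necessarily what the ClearML images actually use:

```python
import os
import uvicorn

# Read the desired log level from an environment variable (illustrative name),
# falling back to "info" when it is not set.
log_level = os.environ.get("UVICORN_LOG_LEVEL", "info").lower()

# Start the ASGI app with that level; uvicorn accepts
# "critical", "error", "warning", "info", "debug" or "trace".
uvicorn.run("my_app:app", host="0.0.0.0", port=8080, log_level=log_level)
```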
@<1523701087100473344:profile|SuccessfulKoala55> Only one
@<1673863775326834688:profile|SucculentMole19> FYI
Perfect, thanks for the info!
@<1523701205467926528:profile|AgitatedDove14> Thanks for the explanations. Where exactly are the model files stored on the pod? I was not able to find them.
The reason I ask is that the clearml-serving pod is up and running, and from its logs and the logs of the fileserver it seems that the model and the preprocessing code were loaded.
Currently I encounter the problem that I always get an HTTP 404 error when I try to access the model via the defaultBaseServeUrl + model endpoint, and I would li...
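This is roughly how I try to reach the model – a minimal sketch with placeholder base URL, endpoint name and payload; the `/serve/<endpoint>` path is what I understand the clearml-serving inference service exposes:

```python
import requests

# Placeholder values: base_url is the defaultBaseServeUrl of my deployment,
# endpoint_name is the model endpoint registered with clearml-serving.
base_url = "http://clearml-serving.my-cluster.local:8080"
endpoint_name = "my_model"

# As far as I understand, the inference service exposes POST /serve/<endpoint>;
# this is the call that currently returns the HTTP 404.
response = requests.post(
    f"{base_url}/serve/{endpoint_name}",
    json={"x0": 1.0, "x1": 2.0},  # dummy payload for illustration
)
print(response.status_code, response.text)
```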
@<1523701205467926528:profile|AgitatedDove14> I experience the exact same behaviour with clearml-serving (version 1.3.0): the status of the serving task goes to Aborted, the status message is also "Forced stop (non-responsive)", and it likewise happens after a while without incoming traffic.
Thanks again, I was able to locate the files. And it was indeed (as is so often the case with k8s) a general routing issue. After fixing that, everything works fine 🙂
@<1729671499981262848:profile|CooperativeKitten94> Thanks again for your help, the number of log messages is now significantly reduced 👍 . A short follow-up question: can I somehow control the log level of the agent/worker in the k8s cluster? Is there an environment variable I can set or add there? I dug a bit into the agent's code, and it looks to me as if the log level is hard-coded to 'INFO'. Is this correct?
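What I was hoping for is something like the following pattern – only a sketch of how an environment-variable-driven log level usually looks; the `CLEARML_AGENT_LOG_LEVEL` name is my own assumption for illustration, not something I found in the agent code:

```python
import logging
import os

# Hypothetical: read the level from an environment variable instead of a
# hard-coded "INFO"; the variable name is only an assumption for illustration.
level_name = os.environ.get("CLEARML_AGENT_LOG_LEVEL", "INFO").upper()

logging.basicConfig(
    level=getattr(logging, level_name, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logging.getLogger(__name__).info("agent logger configured at %s", level_name)
```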
@<1729671499981262848:profile|CooperativeKitten94> Perfect, thanks. Will try this out
@<1523701070390366208:profile|CostlyOstrich36> Thanks for the clarification 👍