
Hi @<1523701205467926528:profile|AgitatedDove14> you are right about the docker setup. But with the k8s setup I get the error `Poll failed for model directory 'advanced_basic_classifier.pytorch': unexpected 'platform' and 'backend' pair, got:, pytorch`
when I do not specify the platform, which sounds like I should specify the platform.
Btw if I do not name the model following the `model.<backend_name>` convention, then I get this error
`Poll failed for model directory 'advanced_basic_classifi...`
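In case it helps: a sketch of how the platform can be passed explicitly when adding the model via the CLI (the endpoint name, model ID, preprocess script and input/output shapes below are placeholders for my setup; the `--aux-config platform=...` part is what the error asks for):
```
clearml-serving --id <service_id> model add \
  --engine triton \
  --endpoint "advanced_basic_classifier" \
  --model-id <model_id> \
  --preprocess "preprocess.py" \
  --input-size 1 28 28 --input-name "INPUT__0" --input-type float32 \
  --output-size -1 10 --output-name "OUTPUT__0" --output-type float32 \
  --aux-config platform=\"pytorch_libtorch\"
```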
FYI: I just posted an issue on github None
Hi @<1523701070390366208:profile|CostlyOstrich36> , I just solved the issue! :) After calling clearml-serving create --name "model serving"
the printed task ID has to be entered in the values.yaml of the clearml-serving Helm chart under clearml.servingTaskId. After installing the Helm chart, the draft of the service task is started automatically, so there is no need to enqueue it manually.
Would it be possible to add this info to the docs? Maybe a small hint on this page [None](https...
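For anyone finding this later, the whole flow as a sketch (the Helm repo URL and chart name assume the standard clearml-helm-charts setup; adjust to yours):
```
# create the serving controller task and note the printed task ID
clearml-serving create --name "model serving"

# register the chart repo and install with the task ID from above
helm repo add clearml https://clearml.github.io/clearml-helm-charts
helm upgrade --install clearml-serving clearml/clearml-serving \
  --set clearml.servingTaskId=<task_id_printed_above>
```
The draft of the service task is then started automatically by the chart.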
Hi @<1523701070390366208:profile|CostlyOstrich36> , of course! Here it is (with blurred URLs, paths, and account names)
Hi @<1523701827080556544:profile|JuicyFox94> I figured out what the problem is! For some recent experimentation I set an access_key and secret_key as environment variables in my OS. When I deleted them, everything worked fine, so the environment variables were overriding the keys given in clearml.conf. Is that the desired default behaviour?
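For reference, a quick way to check for this (the variable names are the standard ClearML ones, which take precedence over clearml.conf):
```
# if these are set, they override the credentials in clearml.conf
echo $CLEARML_API_ACCESS_KEY
echo $CLEARML_API_SECRET_KEY

# remove them for the current shell to fall back to clearml.conf
unset CLEARML_API_ACCESS_KEY CLEARML_API_SECRET_KEY
```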
And just one tip for everybody having similar problems: switch to using the SDK instead of the CLI for better debugging. This helped me to find the cause of m...
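E.g. the clearml-data flow can be reproduced with a few SDK calls, where exceptions surface with a full traceback instead of a terse CLI error (dataset name, project and path below are placeholders):
```
from clearml import Dataset

# SDK equivalent of the clearml-data CLI flow
ds = Dataset.create(dataset_name="debug-dataset", dataset_project="debug")
ds.add_files(path="./data")
ds.upload()      # network calls happen here, so connection errors surface directly
ds.finalize()
```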
The clearml-data call results in these two lines in the ingress logs. Is that sufficient or would you like to have a larger section of the log?
2024/03/26 16:07:10 [warn] 2879#2879: *1151249 upstream sent duplicate header line: "server: clearml", previous value: "Server: Werkzeug/3.0.1 Python/3.9.18", ignored while reading response header from upstream, client: ***.***.***.22, server: api.clearml.****.com, request: "GET /auth.login HTTP/1.1", upstream: "", host: "api.clearm...
Hi @<1523701087100473344:profile|SuccessfulKoala55> , thanks for your message! 🙂 I am aware that the console is also logged on the server, but I find it suboptimal to search the console log for the relevant information and would like to place the information in a more structured way.
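What I have in mind is something along these lines, using the standard Logger/artifact calls instead of console prints (the keys and values here are just placeholders):
```
from clearml import Task

task = Task.current_task()
# structured key/value info instead of console lines
task.connect({"preprocess_version": "1.2", "threshold": 0.5}, name="serving info")
# or attach it as an artifact
task.upload_artifact("endpoint_config", artifact_object={"platform": "pytorch_libtorch"})
```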
Hi @<1523701205467926528:profile|AgitatedDove14> thanks for your answer! 🙂 I think my case is a bit different. I do not want to load a custom model, but a custom object used for preprocessing. So I think the load method would not fit, as the local_file_name
parameter I get in the load function points to the model file. And as far as I can see, there is no mechanism in place to load objects other than the model file inside the Preprocess class, right?
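To make it concrete, what I am after would look roughly like this inside the Preprocess class: fetching the auxiliary object myself, since load() only receives the model file path (the remote URL and pickle format are placeholders):
```
import pickle
from clearml import StorageManager

class Preprocess(object):
    def __init__(self):
        # workaround: pull the preprocessing object manually,
        # because load() only hands over the model file
        local_path = StorageManager.get_local_copy(
            remote_url="s3://my-bucket/preprocessing/scaler.pkl"  # placeholder
        )
        with open(local_path, "rb") as f:
            self._scaler = pickle.load(f)

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        return self._scaler.transform(body["features"])
```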
Hi @<1523701205467926528:profile|AgitatedDove14> , thanks for your answer!
I reached over 1M API calls in about one week of using clearml-serving on one machine, with only a few hundred calls to the deployed model for testing purposes. So I wanted to dig a little deeper into that. Thanks for the channel suggestion, I will repost my question there. :)
What do you mean by "How are you creating the model?"? I executed a PyTorch model training and saved a traced version of the model, so it was stored with the executed task. This was also no problem with the docker container setup.
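For completeness, the tracing step was the standard torch.jit one, roughly (the model and input shape below are placeholders):
```
import torch

# placeholder model; in my case this was the trained classifier
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10))
model.eval()

example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced.save("test_model_pytorch.pt")  # with an active Task, clearml logs this as an output model
```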
Hi @<1523701205467926528:profile|AgitatedDove14> , now there are some interesting things happening: like I wrote before, I got the error message, but one minute later the model was added successfully nonetheless. The log says
E0603 09:43:01.652550 41 model_repository_manager.cc:996] Poll failed for model directory 'test_model_pytorch': Invalid model name: Could not determine backend for model 'test_model_pytorch' with no backend in model configuration. Expected model name of the form 'mo...
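Putting the two error messages together, my understanding of what Triton expects in the model directory is roughly this (a sketch from the errors, not verified against the Triton docs):
```
# either: suffix the name so Triton can infer the backend
test_model.pytorch/
└── 1/
    └── model.pt

# or: keep the plain name and set the backend explicitly
test_model_pytorch/
├── config.pbtxt        # e.g. platform: "pytorch_libtorch"
└── 1/
    └── model.pt
```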