My pre- and postprocessing code should be correct, because it already worked when I used the Docker container clearml-serving setup. But in case you want to have a look, here it is:
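(The snippet itself was cut off in this archive. For context, a clearml-serving preprocessing module follows the general skeleton below, modeled on the examples in the clearml-serving repository; this is a hypothetical sketch, not the original code:)

```python
# Skeleton of a clearml-serving preprocessing module, modeled on the examples
# in the clearml-serving repository. The method bodies are placeholders,
# not the code from the original post.
from typing import Any


class Preprocess(object):
    """clearml-serving looks up a class with this exact name."""

    def __init__(self):
        # called once, when the endpoint is loaded
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # turn the raw request body into the input the model expects
        return body

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # turn the model output into a JSON-serializable response
        return {"output": data}
```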
Hi @<1523701435869433856:profile|SmugDolphin23> , thanks for your question. For now I just deleted the requirements.txt and let ClearML track the requirements automatically, and it works. For the long term I would still like to use a requirements.txt, so I will come back to this topic a little later.
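(For reference, a sketch of the requirements.txt route for later: the SDK can be pointed at an explicit requirements file before the task starts via `Task.add_requirements`; the path and names below are placeholders.)

```python
# Sketch: force ClearML to use an explicit requirements file instead of the
# auto-detected packages. Must be called before Task.init().
from clearml import Task

Task.add_requirements("requirements.txt")  # placeholder path to the file
task = Task.init(project_name="my project", task_name="my experiment")
```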
Hi @<1523701070390366208:profile|CostlyOstrich36> , I just solved the issue! :) After calling `clearml-serving create --name "model serving"`, the printed task id has to be filled into the values.yaml of the clearml-serving Helm chart under `clearml.servingTaskId`. After installing the Helm chart, the draft of the service task is started automatically, so there is no need to enqueue it manually.
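For anyone else setting this up, the change boils down to one value (the id below is a placeholder for whatever `clearml-serving create` prints):

```yaml
# values.yaml of the clearml-serving Helm chart
clearml:
  servingTaskId: "<task id printed by clearml-serving create>"
```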
Would it be possible to add this info to the docs? Maybe a small hint on this page [None](https...
The clearml-data call results in these two lines in the ingress logs. Is that sufficient or would you like to have a larger section of the log?
2024/03/26 16:07:10 [warn] 2879#2879: *1151249 upstream sent duplicate header line: "server: clearml", previous value: "Server: Werkzeug/3.0.1 Python/3.9.18", ignored while reading response header from upstream, client: ***.***.***.22, server: api.clearml.****.com, request: "GET /auth.login HTTP/1.1", upstream: "", host: "api.clearm...
Hi @<1523701827080556544:profile|JuicyFox94> I figured out what the problem is! For some recent experimentation I set an access_key and secret_key as environment variables in my OS. When I deleted them, everything worked fine, so the environment variables were overriding the keys given in clearml.conf. Is that the desired default behaviour?
And just one tip for everybody having similar problems: switch to using the SDK instead of the CLI for better debugging. This helped me to find the cause of m...
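For example, roughly the same upload as the `clearml-data` call, redone via the SDK (a sketch; the dataset name, project and path are placeholders):

```python
# Reproducing a `clearml-data` upload with the SDK: a failure here raises a
# full Python traceback instead of the CLI's short error message.
# Note: the CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY environment
# variables take precedence over the keys in clearml.conf.
from clearml import Dataset

dataset = Dataset.create(dataset_name="debug-upload", dataset_project="debug")
dataset.add_files(path="./data")  # the same files the CLI call would add
dataset.upload()
dataset.finalize()
```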
Hi @<1523701070390366208:profile|CostlyOstrich36> , of course! Here it is (with blurred URLs, paths and account names)
Full log:
```
Current configuration (clearml_agent v1.5.1, location: C:/Users/USER~1/AppData/Local/Temp/.clearml_agent.g6ysfs_g.cfg):
agent.worker_id = HPZBook:0
agent.worker_name = HPZBook
agent.force_git_ssh_protocol = false
agent.python_binary =
agent.package_manager.type = pip
agent.package_manager.pip_version.0 = <20.2 ; python_version < '3.10'
agent.package_manager.pip_version.1 = <22.3 ; python_version >= '3.10'
agent.package_manager.system_site_packages = false
...
```
Ok, so I killed all Docker containers (the proposal by ChatGPT did not work for me, but your commands did). The result is that we have one warning less. The warning `clearml-serving-triton | Warning: more than one valid Controller Tasks found, using Task ID=4709b0b383a04bb1a033e99fd325dcbf` seems to be solved. All remaining errors come up in the clearml-serving-triton service, and this is the log I get:
CLEARML_SERVING_TASK_ID=9309c20af9244d919b0f063642198c57
CLEARML_TRITON_POLL...
Hi @<1523701205467926528:profile|AgitatedDove14> , now there are some interesting things happening: like I wrote before, I got the error message, but one minute later the model was added successfully nonetheless. The log says:
E0603 09:43:01.652550 41 model_repository_manager.cc:996] Poll failed for model directory 'test_model_pytorch': Invalid model name: Could not determine backend for model 'test_model_pytorch' with no backend in model configuration. Expected model name of the form 'mo...
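In case it helps others who hit the same Triton message: the backend/platform can apparently be stated explicitly when registering the model. A sketch only, not a verified invocation; the endpoint, platform value and remaining flags are illustrative:

```
clearml-serving --id <service-id> model add \
    --engine triton \
    --endpoint "test_model_pytorch" \
    --aux-config platform=\"pytorch_libtorch\" \
    ...  # plus the usual --input-*/--output-* flags for the model
```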
Thank you, I did not think about that. It helped a lot! I found out that the problem causing the unicode error was that only the 'python' command was set up on my Windows machine, but not the 'python3' command. This was the exact error, for documentation (a config-side workaround sketch follows the traceback):
DEBUG:clearml_agent.commands.worker:Searching for python3
Traceback (most recent call last):
File "C:\Users\User\venvs\clearml\lib\site-packages\clearml_agent\helper\process.py", line 204, in normalize_exception
yield
File "C:\...
You're very welcome, thank you again for the great support. :)) I followed the instructions of the clearml-serving README on GitHub None . There is one section called 'Optional: advanced setup - S3/GS/Azure access'. Maybe the syntax could be added there? I also saw the additional link for configuring storage access, but that page focuses on setting up clearml.conf, and I was not sure whether and how I could transfer it to the docker .env file.
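For anyone else looking for the .env syntax: the entries are plain environment variables, e.g. for S3 (a sketch; the names follow the README's S3 example, the values are placeholders):

```
# .env used by the clearml-serving docker-compose (S3 example)
AWS_ACCESS_KEY_ID=<access key>
AWS_SECRET_ACCESS_KEY=<secret key>
AWS_DEFAULT_REGION=<region>
```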
A...