A request is a request from clients who use our endpoint
I want to set up a queue for requests: incoming requests will first go to this queue, we can assign which request goes to which worker, and we can also report the status of each request back to the clients: in queue, being processed, completed, etc.
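A minimal sketch of the pattern described above, using only the Python standard library. The function names (`submit`, `get_status`, `worker_loop`) and the status labels are hypothetical and not part of clearml-serving or the ClearML SDK; this only illustrates the queue-plus-status idea under those assumptions.

```python
# Sketch of a request queue with per-request status tracking.
# Purely illustrative; not part of clearml-serving.
import queue
import threading
import uuid

request_queue = queue.Queue()
statuses = {}            # request_id -> "in queue" | "being processed" | "completed"
statuses_lock = threading.Lock()

def submit(payload):
    """Client-facing entry point: enqueue a request and return its id."""
    request_id = str(uuid.uuid4())
    with statuses_lock:
        statuses[request_id] = "in queue"
    request_queue.put((request_id, payload))
    return request_id

def get_status(request_id):
    """What the client would poll to see where its request stands."""
    with statuses_lock:
        return statuses.get(request_id, "unknown")

def worker_loop(handle):
    """A worker picks requests off the queue and updates their status."""
    while True:
        request_id, payload = request_queue.get()
        with statuses_lock:
            statuses[request_id] = "being processed"
        handle(payload)  # the actual model call would go here
        with statuses_lock:
            statuses[request_id] = "completed"
        request_queue.task_done()

# Example: one worker thread running a dummy handler
threading.Thread(target=worker_loop, args=(lambda p: None,), daemon=True).start()
rid = submit({"input": [1, 2, 3]})
print(get_status(rid))   # "in queue" or "being processed", later "completed"
```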
This seems more related to the ClearML server and not to clearml-serving
So a request is a task to be executed?
What is the use-case to control it? What are you trying to achieve?
Hm, then how can I control this service?
I see the architecture diagram for clearml-serving has a Kafka part, and when I run an example following the README I can also see a Kafka container running on my machine, but I couldn't find instructions for accessing that service, while you have instructions for using other services such as Prometheus and Grafana
That's because it's used internally, and not intended for external access
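For debugging only, if the Kafka broker that the clearml-serving docker-compose starts happens to be reachable from the host, a generic Kafka client could be used to peek at it. The bootstrap address and the topic name below are assumptions, not documented clearml-serving endpoints, and this is not a supported interface.

```python
# Debugging sketch only: list topics and read a few messages from the Kafka
# broker started by the clearml-serving compose stack. The bootstrap address
# (localhost:9092) and topic name are assumptions, not a supported interface.
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,      # stop iterating if no new messages arrive
)
print(consumer.topics())           # see which topics exist internally

# Subscribe to one of the topics listed above (name is hypothetical)
consumer.subscribe(["serving_stats"])
for message in consumer:
    print(message.topic, message.value[:200])
```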
In any case, this is not integrated in the way you described
Sorry, the question is a bit vague. I just want to know if ClearML has already integrated Kafka, or if I have to implement it myself.
Hi @<1580367711848894464:profile|ApprehensiveRaven81>, what would you like to integrate exactly? What kind of messages would these be, and where do you expect to see them?