I see the architecture map for clearml-serving has a Kafka part, and when I run an example following the README I can also see a Kafka container running on my machine, but I couldn't find instructions for accessing that service, while you do have instructions for the other services, such as Prometheus and Grafana.
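For example, this is roughly what I was hoping to be able to do, just to peek at what clearml-serving publishes to Kafka (a sketch using kafka-python; the broker address and the topic name are guesses on my part, since by default the broker may only be reachable on the internal Docker network):
```python
# Sketch only: assumes the Kafka broker from the clearml-serving
# docker-compose is reachable on localhost:9092 and that the topic
# name below is correct -- both are assumptions, not from the docs.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clearml_inference_stats",      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,     # stop iterating after 10s of silence
)

for message in consumer:
    print(message.topic, message.value)
```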
By "request" I mean a request from clients who use our endpoint.
@<1523701070390366208:profile|CostlyOstrich36> ClearML offers a free-tier server, right? My questions are:
- Can I deploy to this server? I.e., use hardware from this server instead of my own machine.
- If so, when I deploy on the ClearML server, how can I get a public URL to run inference? (See the sketch below for what I have in mind.)
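Something like this is what I imagine doing once I have a URL (the host, port, and endpoint path here are just my guesses, not taken from the docs):
```python
# Sketch only: placeholder URL -- I just want to understand what a
# public inference call against the deployed model would look like.
import requests

SERVING_URL = "http://<my-serving-host>:8080/serve/my_model"  # hypothetical

response = requests.post(SERVING_URL, json={"x": [1.0, 2.0, 3.0]})
response.raise_for_status()
print(response.json())
```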
Oh, got it, thank you guys. I am new to this model deployment thing, so your comments helped a lot.
@<1523701087100473344:profile|SuccessfulKoala55> I use the free-tier hosting.
I am running the example code, so I guess it's running on my local machine?
I want to set up a queue for requests: incoming requests will first go to this queue, we can assign which request goes to which worker, and we can also report the status of each request back to the clients: in queue, being processed, completed, etc.
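To make it concrete, this is roughly the behaviour I mean. It's not ClearML code, just a minimal FastAPI sketch of the request/status flow I'd like to have:
```python
# Minimal sketch of the flow I have in mind -- not ClearML code.
# Requests land in a queue, a worker picks them up, and clients can
# poll the status: "in queue", "being processed", "completed".
import threading
import uuid
from queue import Queue

from fastapi import FastAPI

app = FastAPI()
jobs: dict[str, dict] = {}
job_queue: Queue = Queue()


def worker() -> None:
    while True:
        job_id = job_queue.get()
        jobs[job_id]["status"] = "being processed"
        # ... run inference on jobs[job_id]["payload"] here ...
        jobs[job_id]["status"] = "completed"
        job_queue.task_done()


threading.Thread(target=worker, daemon=True).start()


@app.post("/submit")
def submit(payload: dict) -> dict:
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "in queue", "payload": payload}
    job_queue.put(job_id)
    return {"job_id": job_id}


@app.get("/status/{job_id}")
def status(job_id: str) -> dict:
    return {"status": jobs[job_id]["status"]}
```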
Hm, then how can I control this service?
Sorry, the question is a bit vague. I just want to know whether ClearML has already integrated Kafka, or whether I have to implement that myself.
Another problem is that I just want to use clearml-serving to serve an already trained model. The training process was not tracked by ClearML, meaning the model is not registered in the Models tab. Is there any way to use clearml-serving to serve a model that is not tracked by ClearML? @<1523701070390366208:profile|CostlyOstrich36>
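For example, would something along these lines be the right way to register the existing model file first, so clearml-serving can see it? This is just my guess based on the clearml SDK; the project name, task name, and weights path are placeholders:
```python
# Sketch only: registering an already-trained model file with ClearML
# so it shows up in the Models tab. Names and path are placeholders.
from clearml import OutputModel, Task

task = Task.init(project_name="serving-models", task_name="register-pretrained")
model = OutputModel(task=task, framework="PyTorch")
model.update_weights(weights_filename="./weights/segmentation_model.pt")
print("registered model id:", model.id)
```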
Using the API interface, users should be able to upload an image for the model to run inference on and get the resulting image back.
I'm trying to build an image segmentation tool, so I expect the front end to allow users to upload images, get their segmented images back, and optionally annotate the images if the results are not good enough.
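Concretely, the client-side call I imagine looks something like this (the endpoint URL and the base64 payload format are assumptions on my part, not taken from the clearml-serving docs):
```python
# Sketch only: how I imagine the client uploading an image and getting
# the segmentation back. URL and payload format are assumptions.
import base64

import requests

SERVING_URL = "http://<my-serving-host>:8080/serve/segmentation"  # hypothetical

with open("input.png", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

response = requests.post(SERVING_URL, json=payload)
response.raise_for_status()

# Assuming the serving-side postprocessing returns the result as base64 too.
with open("segmented.png", "wb") as f:
    f.write(base64.b64decode(response.json()["image"]))
```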
Is there any chance you know of an example of a model that's deployed with clearml-serving and has a custom front end?
@<1523701070390366208:profile|CostlyOstrich36> How should I implement my own front end? I mean, if I were using FastAPI, I could imagine writing HTML files and linking them to specific URL endpoints, but with ClearML I don't know where I should put the code for my front end.
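For example, is a small standalone app like this sketch the intended approach, i.e. a separate FastAPI service that just serves the HTML and forwards requests to the clearml-serving endpoint? (The serving URL here is a placeholder, and in a real app the HTML would live in its own template/static files.)
```python
# Sketch of what I mean by "my own frontend": a standalone FastAPI app
# that serves an upload form and forwards the image to the
# clearml-serving endpoint. The serving URL below is a placeholder.
import base64

import requests
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import HTMLResponse

app = FastAPI()
SERVING_URL = "http://<my-serving-host>:8080/serve/segmentation"  # hypothetical


@app.get("/", response_class=HTMLResponse)
def index() -> str:
    # Inline HTML just for the sketch; normally this would be a template file.
    return """
    <form action="/segment" method="post" enctype="multipart/form-data">
      <input type="file" name="image" />
      <button type="submit">Segment</button>
    </form>
    """


@app.post("/segment")
async def segment(image: UploadFile = File(...)) -> dict:
    payload = {"image": base64.b64encode(await image.read()).decode("ascii")}
    response = requests.post(SERVING_URL, json=payload)
    response.raise_for_status()
    return response.json()
```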