Hi, I noted that clearml-serving does not support spaCy models out of the box and that clearml-serving only supports the following:
Agreed! I was trying to avoid this, because I wanted each tenant to access the serving endpoint directly, to maximize performance. But I guess I will only lose a few ms by separating the auth layer from the execution layer.
Besides that, what are your impressions of these serving engines? Are they much better than just building my own API + ONNX, or even my own API + plain PyTorch inference?
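For reference, this is roughly what I have in mind for "my own API + ONNX": a minimal sketch where the web framework (FastAPI here), the model path, and the request/response schema are just placeholders for whatever the exported model actually expects.

```python
# Minimal "own API + ONNX" baseline (illustrative sketch only;
# "model.onnx" and the request schema are placeholders).
from typing import List

import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
session = ort.InferenceSession("model.onnx")  # hypothetical exported model


class PredictRequest(BaseModel):
    inputs: List[List[float]]


@app.post("/predict")
def predict(req: PredictRequest):
    x = np.asarray(req.inputs, dtype=np.float32)
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: x})
    # return the first output tensor as plain JSON
    return {"outputs": outputs[0].tolist()}
```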
For example, if I decide to use clearml-serving --engine custom, what would be the drawbacks?
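As far as I understand, going the custom-engine route means writing a preprocess.py with a Preprocess class along these lines (a rough sketch based on the custom examples in the clearml-serving repo; exact method signatures may differ between versions, and the spaCy model path is a placeholder):

```python
# preprocess.py for a custom engine endpoint (rough sketch; signatures are
# based on the clearml-serving custom examples and may differ by version).
from typing import Any


class Preprocess(object):
    def __init__(self):
        # instantiated once per endpoint
        self.model = None

    def load(self, local_file_name: str) -> Any:
        # with a custom engine we load the model ourselves;
        # here a spaCy pipeline (the path/name is a placeholder)
        import spacy
        self.model = spacy.load(local_file_name)

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # pull the raw text out of the request body
        return body.get("text", "")

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # run inference with the spaCy pipeline
        doc = self.model(data)
        return [{"text": ent.text, "label": ent.label_} for ent in doc.ents]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # shape of the JSON returned to the client
        return {"entities": data}
```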