Looking at clearml-serving - two questions:
1. What's the status of the project?
2. How does one specify how a model is loaded and served? For example, if I have a spaCy NER model, I need to provide some custom code, right?
Found the custom backend aspect of Triton: https://github.com/triton-inference-server/python_backend
Is that the right way?
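For context on what serving spaCy through that backend might look like: Triton's python_backend expects a `model.py` defining a `TritonPythonModel` class with `initialize`/`execute`/`finalize` methods. Below is a minimal sketch under assumptions not taken from this thread: the input/output tensor names `TEXT` and `ENTITIES` are hypothetical and would have to match the model's `config.pbtxt`, and `en_core_web_sm` is just an example pipeline. `triton_python_backend_utils` only exists inside the Triton server process, so the import is guarded here purely for illustration.

```python
import numpy as np

# triton_python_backend_utils is injected by the Triton server at runtime;
# it is not pip-installable, so guard the import for standalone reading.
try:
    import triton_python_backend_utils as pb_utils
except ImportError:
    pb_utils = None


class TritonPythonModel:
    """Sketch of a python_backend model wrapping a spaCy NER pipeline."""

    def initialize(self, args):
        # Load the spaCy pipeline once, when Triton loads the model.
        import spacy
        self.nlp = spacy.load("en_core_web_sm")  # hypothetical model name

    def execute(self, requests):
        responses = []
        for request in requests:
            # "TEXT" must match the input name declared in config.pbtxt.
            in_tensor = pb_utils.get_input_tensor_by_name(request, "TEXT")
            texts = [t.decode("utf-8") for t in in_tensor.as_numpy().flatten()]
            # Run NER and serialize entities as "label:text" pairs per input.
            ents = [
                "|".join(f"{e.label_}:{e.text}" for e in self.nlp(t).ents)
                for t in texts
            ]
            out = pb_utils.Tensor("ENTITIES", np.array(ents, dtype=object))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses

    def finalize(self):
        # Release the pipeline when Triton unloads the model.
        self.nlp = None
```

This file would sit in the Triton model repository as `models/<model_name>/1/model.py`, next to a `config.pbtxt` declaring the `TEXT`/`ENTITIES` tensors with `TYPE_STRING`.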