And One More Question: How Can I Get the Loaded Model in the Preprocess Class in ClearML Serving?
We will try to use Triton, but it's a bit hard with a transformer model.
Yes ...
(All the extra packages we add in serving.)
So it should work. You can also run your preprocess class manually from your own machine (for debugging): if you pass it a local file (basically the model file downloaded from the UI), it should work.
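A minimal sketch of what "run your preprocess class manually" can look like. The `load` / `preprocess` / `process` / `postprocess` method names follow the usual clearml-serving `Preprocess` convention but are an assumption here, and the pickled dict is just a stand-in for the real model file you would download from the UI:

```python
import os
import pickle
import tempfile


class Preprocess:
    """Sketch of a clearml-serving style preprocess class.

    Assumption: the serving engine calls load() with the local path of
    the downloaded model, then preprocess -> process -> postprocess per
    request. Verify the exact hook signatures against your clearml-serving
    version.
    """

    def __init__(self):
        self.model = None  # populated by load(), usable in the hooks below

    def load(self, local_file_name):
        # The serving engine hands the downloaded model file path here.
        with open(local_file_name, "rb") as f:
            self.model = pickle.load(f)

    def preprocess(self, body, state=None, collect_custom_statistics_fn=None):
        # Extract the raw input from the request body.
        return body["x"]

    def process(self, data, state=None, collect_custom_statistics_fn=None):
        # Stand-in for actual model inference.
        return self.model["scale"] * data

    def postprocess(self, data, state=None, collect_custom_statistics_fn=None):
        # Wrap the result in a JSON-serializable response.
        return {"y": data}


# Manual debugging from your own machine: write a fake "model" file,
# then drive the hooks by hand -- no serving instance needed.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump({"scale": 2}, f)
    model_path = f.name

p = Preprocess()
p.load(model_path)  # same role as the model file downloaded from the UI
out = p.postprocess(p.process(p.preprocess({"x": 21})))
print(out)  # {'y': 42}
os.unlink(model_path)
```

Once this round-trips locally, pointing `load()` at the real downloaded model file exercises the same code path the serving instance will run.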
We could do it that way, but it's maybe not the best solution.
Yes... it is not. Separating the pre/post processing onto a CPU instance and letting Triton do the GPU serving is a lot more efficient than using vanilla PyTorch.