I have managed to deploy a model via clearml-serving, and it is now running in a Docker container engine that doesn't have a GPU. What are the entrypoints to the model in order to get predictions?
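On the entrypoint question, a minimal sketch of requesting a prediction over clearml-serving's REST interface, assuming the inference service is reachable at 127.0.0.1 on its default port 8080 and the model was registered under the hypothetical endpoint name "my_model" (the payload keys depend entirely on that endpoint's preprocessing):

import requests

# Hypothetical serving address and endpoint name -- replace with your own.
# clearml-serving's inference service listens on port 8080 by default and
# routes requests to /serve/<endpoint-name>.
url = "http://127.0.0.1:8080/serve/my_model"

# The payload layout must match what the endpoint's preprocessing expects.
payload = {"x0": 1.0, "x1": 2.0}

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json())

The same call can be made with curl; the response body is the JSON produced by the endpoint's postprocessing step.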
Can I run it on an agent that doesn't have a GPU?
Sure, this is fully supported.
When I run clearml-serving it throws me an error: "please provide specific config.pbtxt definition".
Yes, this is a small file that tells the Triton server how to load the model.
Here is an example:
https://github.com/triton-inference-server/server/blob/main/docs/examples/model_repository/inception_graphdef/config.pbtxt
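For reference, a minimal config.pbtxt sketch modeled on that linked Triton example; every value here (model name, platform, tensor names, data types, shapes) is a placeholder that must match your actual model:

name: "my_model"                    # placeholder; must match the model's directory name
platform: "tensorflow_graphdef"     # placeholder; e.g. onnxruntime_onnx, pytorch_libtorch
max_batch_size: 8
input [
  {
    name: "input"                   # placeholder input tensor name
    data_type: TYPE_FP32
    dims: [ 299, 299, 3 ]           # placeholder input shape
  }
]
output [
  {
    name: "output"                  # placeholder output tensor name
    data_type: TYPE_FP32
    dims: [ 1001 ]                  # placeholder output shape
  }
]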