I have managed to deploy a model with clearml-serving, and it is now running in a Docker container engine (one that doesn't have a GPU). What are the entrypoints to the model in order to get predictions?
Assuming this is a follow-up to:
https://clearml.slack.com/archives/CTK20V944/p1626184974199700?thread_ts=1625407069.458400&cid=CTK20V944
This depends on how you set it up with `clearml-serving --endpoint my_model_entry`:

curl <serving-engine-ip>:8000/v2/models/my_model_entry/versions/1
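To actually get predictions (rather than model metadata), the serving engine behind clearml-serving speaks the KFServing/Triton v2 inference protocol, where inference requests are POSTed to the model's `/infer` sub-path. The sketch below builds such a request payload; the input tensor name (`INPUT__0`), datatype, and shape are assumptions for illustration and must match your model's actual signature, which you can check via the metadata URL above.

```python
import json

# Hypothetical v2-protocol inference payload. The tensor name, datatype,
# and shape below are placeholders -- replace them with the values your
# model reports at:
#   curl <serving-engine-ip>:8000/v2/models/my_model_entry
payload = {
    "inputs": [
        {
            "name": "INPUT__0",          # assumed input tensor name
            "datatype": "FP32",          # assumed element type
            "shape": [1, 4],             # assumed batch of one 4-feature row
            "data": [[0.1, 0.2, 0.3, 0.4]],
        }
    ]
}

# Predictions would then be requested with a POST, e.g. using `requests`:
#   resp = requests.post(
#       "http://<serving-engine-ip>:8000/v2/models/my_model_entry/versions/1/infer",
#       data=json.dumps(payload),
#   )
body = json.dumps(payload)
```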