Assuming this is a followup on:
https://clearml.slack.com/archives/CTK20V944/p1626184974199700?thread_ts=1625407069.458400&cid=CTK20V944
This depends on how you set it with clearml-serving. For example, if you launched it with `clearml-serving --endpoint my_model_entry`, you can query it with `curl <serving-engine-ip>:8000/v2/models/my_model_entry/versions/1`
Can I run it on an agent that doesn't have a GPU?
How do I run it without the file (since I have no GPU), or how do I configure the file so that ClearML will not throw the error?
When I run clearml-serving it throws me an error: "please provide specific config.pbtxt definition"
Can I run it on an agent that doesn't have a GPU?
Sure, this is fully supported.
When I run clearml-serving it throws me an error: "please provide specific config.pbtxt definition"
Yes, this is a small file that tells the Triton server how to load the model.
Here is an example:
https://github.com/triton-inference-server/server/blob/main/docs/examples/model_repository/inception_graphdef/config.pbtxt
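For reference, a minimal config.pbtxt for a TensorFlow GraphDef model might look like the sketch below (the model name, tensor names, and dimensions are illustrative, adapted from the Triton example linked above; adjust them to your model). Since you have no GPU, note the `instance_group` entry, which pins execution to CPU:

```
name: "my_model_entry"
platform: "tensorflow_graphdef"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 299, 299, 3 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1001 ]
  }
]
instance_group [
  {
    count: 1
    kind: KIND_CPU
  }
]
```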