Hi EcstaticPelican93
Sure, the model deployment itself (i.e., the serving engine) can run on any private network, basically like any other agent.
Make sense?
Is the ClearML server a worker that I can serve models on?
The serving is done by one of the clearml-agents.
Basically you spin up an agent, and that agent spins up the model-serving engine container (fully managed).
(1) Install and run clearml-agent, (2) run the clearml-session CLI to configure and spin up the serving engine.
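A minimal sketch of step (1), assuming a pip-based install and docker mode; the queue name here is just an example:
```
# (1) Install the agent on the machine that will host the serving engine
pip install clearml-agent

# Configure credentials / server endpoints (interactive)
clearml-agent init

# Start the agent in docker mode so it can spin up the serving container
clearml-agent daemon --queue default --docker
```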
I run it in docker, and right now I'm only exposing it with -p 8080:8000.
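For reference, a sketch of that kind of docker invocation; the image name is a placeholder, not the actual serving image:
```
# Map host port 8080 to the container's port 8000 (the HTTP inference port)
docker run --rm -p 8080:8000 <serving-engine-image>
```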
OK, I have managed to deploy the model with clearml-serving; it is now running in the docker container engine (which doesn't have a GPU in it). What are the entrypoints to the model in order to get predictions?
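For illustration only, a sketch of what the prediction call could look like, assuming container port 8000 is the serving engine's HTTP inference port (as the -p 8080:8000 mapping above suggests) and that the engine speaks the KServe/Triton v2 REST protocol; the model name and input tensor are hypothetical:
```
# Hypothetical inference request against the mapped host port.
# Model name "my_model" and the input layout are placeholders.
curl -X POST http://localhost:8080/v2/models/my_model/infer \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": [
          {
            "name": "INPUT__0",
            "shape": [1, 3],
            "datatype": "FP32",
            "data": [[1.0, 2.0, 3.0]]
          }
        ]
      }'
```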