Thanks CostlyOstrich36
Hmm, I was speaking from a production point of view; I thought there would be some hooks for deploying, where the integration with k8s was also taken care of automatically.
AFAIK, I have to create a Deployment for this container and add an Ingress on top of it. In the architecture diagram on GitHub, this seems to be something that is already baked in, which is what caused the confusion. Curious to know your thoughts on this.
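To make the second part concrete, the hand-written setup I was describing would look roughly like this. These manifests are only a sketch: the names, host, port, and image tag are placeholders I picked for illustration, not something the Helm chart generates.

```yaml
# Hypothetical manifests -- names, host, image tag, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clearml-serving
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clearml-serving
  template:
    metadata:
      labels:
        app: clearml-serving
    spec:
      containers:
        - name: clearml-serving
          image: allegroai/clearml-serving-inference:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: clearml-serving
spec:
  selector:
    app: clearml-serving
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clearml-serving
spec:
  rules:
    - host: serving.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: clearml-serving
                port:
                  number: 80
```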
@<1523701087100473344:profile|SuccessfulKoala55> I saw in the examples one case of the engine being passed as `custom`. My requirement is the need to support other frameworks, like spaCy. So I was thinking maybe I could create a pipeline that does the model load and inference and pass that pipeline. I am still figuring out the ecosystem; would something like that make sense?
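In case it helps, what I had in mind looks roughly like the sketch below. It follows the shape of the clearml-serving custom-engine examples, but the method signatures, the `"text"` request field, and the response format are all my assumptions, not the confirmed API.

```python
class Preprocess:
    """Hypothetical custom-engine wrapper around a spaCy pipeline.

    The method names mirror the clearml-serving custom examples, but the
    exact signatures here are assumptions made for this sketch.
    """

    def __init__(self):
        self._nlp = None

    def load(self, local_file_name):
        # Load the spaCy pipeline from the downloaded model directory.
        # spaCy is imported lazily so the rest of the class has no hard dependency.
        import spacy
        self._nlp = spacy.load(local_file_name)

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # Pull the raw text out of the request body ("text" is an assumed field).
        return body.get("text", "")

    def process(self, data, state, collect_custom_statistics_fn=None):
        # Run the spaCy pipeline and return its named entities.
        doc = self._nlp(data)
        return [(ent.text, ent.label_) for ent in doc.ents]

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # Wrap the result in a JSON-serializable response.
        return {"entities": data}
```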
Hmm, thanks @<1523701087100473344:profile|SuccessfulKoala55>. What would be the way you would recommend for adding support for other models/frameworks like spaCy?
Would you recommend adding other models by sending a PR in line with the lightgbm example here
None
or using the `custom` option and moving the logic for loading the model into preprocess
or `proce...
No, it didn't kill the process.
@<1523701087100473344:profile|SuccessfulKoala55>, from the init containers I could see that it is waiting for MongoDB to start.
@<1523701087100473344:profile|SuccessfulKoala55>, I'm trying to use an external MongoDB. In the values.yaml I see these two fields:
mongodbConnectionStringAuth: ""
mongodbConnectionStringBackend: ""
Can you please help with what should go in these?
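My current guess, assuming these are standard MongoDB connection strings pointing at the auth and backend databases the ClearML server uses, is something like the sketch below. The host, port, and credentials are placeholders I made up; I'm not sure whether the database name belongs in the string or is set elsewhere.

```yaml
# Hypothetical values -- host, port, and credentials are placeholders.
mongodbConnectionStringAuth: "mongodb://clearml:secret@my-mongo.example.com:27017/auth"
mongodbConnectionStringBackend: "mongodb://clearml:secret@my-mongo.example.com:27017/backend"
```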
@<1523701087100473344:profile|SuccessfulKoala55>, I'm using this:
allegroai
It's been stuck in initialization for a long time.
Thanks @<1523701205467926528:profile|AgitatedDove14>. For now I have forked clearml-serving locally and added an engine for spaCy. It is working fine. Yeah, I think some documentation and a good example would make it more visible. An example for something like spaCy would be useful for the community.
Yeah, GPU utilization was 100%. I cleaned it up using nvidia-smi and killing the process. But I was expecting the cleanup to happen automatically, since the process had failed.
Sure, thanks SuccessfulKoala55. Not sure if it's a one-off event. I will try to reproduce it.
Got this working after using a preprocess step, similar to the sklearn example, to convert the input explicitly to a list.
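The fix boils down to something like the sketch below. The class shape follows the clearml-serving examples, but the `"features"` key and the exact signature are assumptions for illustration.

```python
class Preprocess:
    """Illustrative preprocess step: coerce the incoming payload to a plain list.

    The "features" field and the method signature are assumptions made for
    this sketch, following the shape of the sklearn serving example.
    """

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # Explicitly convert array-like input (e.g. a tuple or numpy array)
        # into a plain list so the engine receives JSON-serializable data.
        return list(body.get("features", []))
```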
Thanks @<1567321739677929472:profile|StoutGorilla30> and @<1523701070390366208:profile|CostlyOstrich36>. My question was from the perspective of the agent. I am guessing `agent.binary` is what I would have to set.
None