Will try it out. Pretty impressed 🙂
AgitatedDove14 - yeah wanted to see what’s happening before disabling as I wasn’t sure if this is what’s expected.
AgitatedDove14 - looks like the serving is doing the savemodel stuff?
https://github.com/allegroai/clearml-serving/blob/main/clearml_serving/serving_service.py#L554
This actually ties well with the next version of pipelines we are working on
Is there a way to see a roadmap on such things AgitatedDove14 ?
Yeah but where’s the cache from? does it setup a pip cache anywhere?
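For what it's worth, pip itself can report where its local cache lives, which is a quick way to check whether anything is being cached on the machine at all. A minimal sketch (the `pip cache` subcommand exists since pip 20.1; where the agent points that cache is a separate question):

```python
import subprocess
import sys

# Ask pip where its local cache directory is (pip >= 20.1).
result = subprocess.run(
    [sys.executable, "-m", "pip", "cache", "dir"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. ~/.cache/pip on Linux
```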
I don’t want to though. Will run it as part of a pipeline
A lot of us are averse to using the git repo directly
Is there a good reference to get started with k8s glue?
I guess the question is - I want to use the services queue for running services, and I want to do it on k8s
Like I said, it works, but it goes into the error loop
Found the custom backend aspect of Triton - https://github.com/triton-inference-server/python_backend
Is that the right way?
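For context, a Triton Python backend is a `model.py` exposing a `TritonPythonModel` class with `initialize`/`execute`/`finalize` methods. The sketch below only shows that structure; a real backend imports `triton_python_backend_utils` (only available inside the Triton runtime) and works with `pb_utils.InferenceRequest`/`InferenceResponse` objects, whereas here plain Python lists stand in for tensors and the scale factor is a made-up parameter:

```python
# Minimal sketch of the Triton Python backend structure (model.py).
# A real backend imports triton_python_backend_utils and handles
# pb_utils.InferenceRequest objects; plain lists stand in here.

class TritonPythonModel:
    def initialize(self, args):
        # Called once at model load; args carries the model config as JSON.
        # Hypothetical parameter: a constant scale factor for illustration.
        self.scale = 2.0

    def execute(self, requests):
        # Called per batch of inference requests; must return one
        # response per request, in the same order.
        return [[x * self.scale for x in batch] for batch in requests]

    def finalize(self):
        # Called once at model unload; release resources here.
        pass
```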
I am seeing that it still picks up nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
Yeah got it. Was mainly wondering if k8s glue was meant for this as well or not
AgitatedDove14 - thoughts on this? I remember that it was Draft before, but maybe because it was in a notebook vs now I am running a script?
AgitatedDove14 sounds almost what might be needed, will give it a shot. Thanks, as always 🙂
AgitatedDove14 - on a similar note, using this, is it possible to add to the requirements of a task with task_overrides?
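Assuming task_overrides takes dotted-path keys into the task definition (e.g. 'script.branch'), the way such keys resolve can be pictured with a small hypothetical helper - this illustrates the key format only, not ClearML's actual implementation:

```python
def apply_overrides(task_dict, overrides):
    """Hypothetical helper: apply dotted-path overrides (the
    task_overrides key style, e.g. 'script.branch') onto a nested
    task definition dict."""
    for path, value in overrides.items():
        node = task_dict
        *parents, leaf = path.split(".")
        for key in parents:
            # Walk (or create) intermediate sections along the path.
            node = node.setdefault(key, {})
        node[leaf] = value
    return task_dict


# Illustration only: override the git branch and a pip requirements field.
task = {"script": {"branch": "main", "requirements": {}}}
apply_overrides(task, {
    "script.branch": "dev",
    "script.requirements.pip": "pandas==1.3.0\nscikit-learn",
})
```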
Yes I have multiple lines
Ah thanks for the info.
Yeah, mostly. With the k8s glue going, I want to finally look at clearml-session and how people are using it.
create_task_from_function - I was looking at options to implement this just today, as part of the same remote debugging that I was talking of in this thread