I am running from a notebook and the cell has returned
I don’t want to though. Will run it as part of a pipeline
A lot of us are averse to using the git repo directly
Is there a good reference to get started with k8s glue?
I guess the question is - I want to use the services queue for running services, and I want to do it on k8s
Like I said, it works, but goes into the error loop
Is there a good way to get the project of a task?
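One way to do this, sketched with the public `Task` API (assuming a configured clearml install; the import is deferred so the helper can be defined without a server connection):

```python
def task_project_name(task_id: str) -> str:
    # Deferred import: clearml needs clearml.conf / server access at call time.
    from clearml import Task

    task = Task.get_task(task_id=task_id)
    # Task.get_project_name() resolves the task's project ID to its display name.
    return task.get_project_name()
```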
Found the custom backend aspect of Triton - https://github.com/triton-inference-server/python_backend
Is that the right way?
I am seeing that it still picks up nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
Yeah got it. Was mainly wondering if k8s glue was meant for this as well or not
AgitatedDove14 - thoughts on this? I remember that it was Draft before, but maybe that was because it was in a notebook, whereas now I am running a script?
AgitatedDove14 sounds almost what might be needed, will give it a shot. Thanks, as always 🙂
AgitatedDove14 - on a similar note, using this is it possible to add to requirements of task with task_overrides?
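A rough sketch of what this might look like with `PipelineController.add_step`, which takes `task_overrides` as a dict of task-field paths. Whether the `"script.requirements.pip"` path is honored for adding pip requirements is an assumption here, not something confirmed by the docs; the project and task names are placeholders:

```python
extra_requirements = ["pandas==1.3.0", "scikit-learn"]

task_overrides = {
    # Assumption: the task stores requirements under script.requirements
    # as {"pip": "<requirements text>"}, one requirement per line.
    "script.requirements.pip": "\n".join(extra_requirements),
}

def add_train_step(pipe):
    # `pipe` is an existing clearml.automation.PipelineController (assumed).
    pipe.add_step(
        name="train",
        base_task_project="examples",
        base_task_name="train-base",
        task_overrides=task_overrides,
    )
```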
Yes I have multiple lines
dataset1 -> process -> dataset2
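That chain can be sketched with the clearml `Dataset` API (dataset and project names are placeholders, and the transform is illustrative; the pure `process` step is kept separate from the clearml I/O so it can run on its own):

```python
from pathlib import Path

def process(src_dir: str, dst_dir: str) -> None:
    # Illustrative transform: upper-case every .txt file from src into dst.
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for f in Path(src_dir).glob("*.txt"):
        (out / f.name).write_text(f.read_text().upper())

def run() -> None:
    from clearml import Dataset  # deferred: needs a configured clearml server

    src = Dataset.get(dataset_project="examples", dataset_name="dataset1")
    local = src.get_local_copy()  # read-only cached copy of dataset1
    process(local, "processed")
    dst = Dataset.create(
        dataset_project="examples",
        dataset_name="dataset2",
        parent_datasets=[src.id],  # record the dataset1 -> dataset2 lineage
    )
    dst.add_files("processed")
    dst.upload()
    dst.finalize()
```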
Ah thanks for the info.
Yeah mostly. With k8s glue going, want to finally look at clearml-session and how people are using it.
Does a pipeline step behave differently?
That makes sense - one part I am confused on is - the Triton engine container hosts all the models, right? Do we launch multiple groups of these in different projects?
create_task_from_function - I was looking at options to implement this just today, as part of the same remote debugging that I was talking of in this thread
When did this PipelineDecorator come in? Looks interesting 🙂
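For reference, a minimal sketch of the `PipelineDecorator` usage (the step and pipeline names are illustrative; the clearml calls live behind `main()` since they need an installed clearml and a reachable server):

```python
def preprocess(n: int) -> int:
    # A plain function; PipelineDecorator.component turns it into a pipeline step.
    return n * 2

def main():
    from clearml.automation.controller import PipelineDecorator

    # Applying the decorator programmatically, equivalent to @PipelineDecorator.component(...)
    step = PipelineDecorator.component(return_values=["doubled"], cache=True)(preprocess)

    @PipelineDecorator.pipeline(name="demo-pipeline", project="examples", version="0.1")
    def run_pipeline(n: int = 21):
        return step(n)

    PipelineDecorator.run_locally()  # run steps in the local process, handy for debugging
    run_pipeline()

if __name__ == "__main__":
    main()
```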
