Maybe two things here:
- If Task.init() is called in an already running task, don't reset auto_connect_frameworks? (if I'm understanding the behaviour right)
- An option to disable these in clearml.conf
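For reference, a sketch of how auto_connect_frameworks can be passed today (assuming I have the SDK signature right - it takes either a bool or a per-framework dict):

```python
def init_without_auto_logging(project_name, task_name):
    """Sketch: start a ClearML task with framework auto-logging disabled.
    Assumes the clearml package is installed; the import is deferred so
    the snippet reads standalone."""
    from clearml import Task
    return Task.init(
        project_name=project_name,
        task_name=task_name,
        # False disables all framework hooks; a dict toggles them
        # per framework, e.g. {"matplotlib": False, "tensorflow": True}
        auto_connect_frameworks=False,
    )
```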
What do you see as the most typical way of using this?
Ah ok, there's only optimizer.stop() in the example
The job itself doesn't have any other params
Yeah. Curious - are a lot of ClearML use cases not geared for notebooks?
I am running from a notebook and the cell has returned
I don’t want to though. Will run it as part of a pipeline
A lot of us are averse to using a git repo directly
Is there a good reference to get started with k8s glue?
I guess the question is - I want to use services queue for running services, and I want to do it on k8s
Like I said, it works, but goes into the error loop
AgitatedDove14 - any thoughts?
Is there a good way to get the project of a task?
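Something like this is what I had in mind - a sketch, assuming Task.get_task() and get_project_name() are the right SDK calls:

```python
def project_name_of(task_id):
    """Sketch: look up the project a given ClearML task belongs to.
    Assumes the clearml package is installed (deferred import)."""
    from clearml import Task
    task = Task.get_task(task_id=task_id)
    return task.get_project_name()
```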
Found the custom backend aspect of Triton - https://github.com/triton-inference-server/python_backend
Is that the right way?
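For context, a minimal config.pbtxt for a python_backend model might look like this - a sketch only, model name, tensor names, and shapes are made up; the backend: "python" line is the relevant part:

```
name: "my_python_model"
backend: "python"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 4 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 4 ]
  }
]
```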
I am seeing that it still picks up nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
I guess this is an advantage of docker mode. Will try that out as well sometime.
Yeah, got it. Was mainly wondering whether k8s glue was meant for this as well or not
AgitatedDove14 - thoughts on this? I remember it was Draft before, but maybe that's because it was in a notebook, vs now I am running a script?
AgitatedDove14 sounds almost what might be needed, will give it a shot. Thanks, as always 🙂
AgitatedDove14 - on a similar note, using this, is it possible to add to the requirements of a task with task_overrides?
Yes I have multiple lines
