SuccessfulKoala55 what’s the right way to do this using the SDK? I use Task.init, but the option seems to be there only in Task.create (I’ve always been unsure when to use init vs create as well). Pointers?
This is for building my model package for inference
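For context, here’s roughly what I’m weighing (just a sketch - I’m assuming “the option” is the base docker image, the project/task/script names are made up, and that set_base_docker takes the image as its first argument):
```python
from clearml import Task

# Option A: Task.create builds a task definition without taking over the
# current process, and exposes docker-related arguments directly.
packaging_task = Task.create(
    project_name="inference",        # placeholder project
    task_name="model-package",       # placeholder task
    script="package_model.py",       # placeholder entry point
    docker="nvidia/cuda:11.4.1-cudnn8-runtime-ubuntu20.04",
)

# Option B: Task.init instruments the currently running script; the base
# image can then be set on the returned task object.
task = Task.init(project_name="inference", task_name="model-package")
task.set_base_docker("nvidia/cuda:11.4.1-cudnn8-runtime-ubuntu20.04")
```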
One more question - is this something I can pass as task_overrides in add_step when creating a pipeline?
It’s a bit difficult to figure out the exact key. Will try this to see if it works
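Roughly what I’m going to try (a sketch - step/project names are placeholders, and the override key is my best guess; it might be "container.image" or the older "execution.docker_cmd" depending on the version):
```python
from clearml import PipelineController

pipe = PipelineController(name="inference-pipeline", project="inference", version="0.0.1")

pipe.add_step(
    name="stage_data",                 # placeholder step name
    base_task_project="inference",     # placeholder project
    base_task_name="data-processing",  # placeholder template task
    # Guessing at the key for the base image on the cloned step task:
    task_overrides={"container.image": "nvidia/cuda:11.4.1-cudnn8-runtime-ubuntu20.04"},
)
```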
But it seems to turn the current task into the data processing task. I don't want it to take over the current task.
It did pick it from the task?
Yeah got it, thanks!
Generally like the kedro project and pipeline setup that I have seen so far, but haven’t started using it in anger yet. Been looking at clearml as well, so wanted to check how well these two work together
AgitatedDove14 - does having this template work for updating the base image:
```yaml
spec:
  containers:
    - image: nvidia/cuda:11.4.1-cudnn8-runtime-ubuntu20.04
```
Was able to use ScriptRequirements and get what I need. Thanks!
Thanks for the fast responses as usual AgitatedDove14 🙂
Would adding support for some sort of post-task script help? Or is something like that already there?
yeah meant this, within clearml.conf:
logging {}
sdk {}
I use a custom helm chart and terraform helm provider for these things
I only see published tasks getting preference, not a way to filter to only published ones
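What I was hoping for is something along these lines (a sketch - I’m assuming the task_filter dict accepts a status list; project/name are placeholders):
```python
from clearml import Task

# Fetch only published tasks, instead of relying on published-gets-preference.
published_tasks = Task.get_tasks(
    project_name="inference",               # placeholder project
    task_name="model-package",              # placeholder name pattern
    task_filter={"status": ["published"]},  # assumption: a status filter is accepted here
)
```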
Thanks CostlyOstrich36
On a related note - is it possible to get things like ${stage_data.artifacts.dataset.url} from within a task, rather than passing params in add_step?
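Something like this is what I have in mind (a sketch - the upstream step’s task name and the artifact name are placeholders, and I’m assuming the step task can be looked up by project/name from inside the running task):
```python
from clearml import Task

# Inside the step's own script: look up the upstream "stage_data" task and
# read its "dataset" artifact URL directly, instead of receiving it as a
# parameter via add_step.
current = Task.current_task()
stage_data_task = Task.get_task(
    project_name=current.get_project_name(),  # assuming both steps share a project
    task_name="stage_data",                   # placeholder: upstream step's task name
)
dataset_url = stage_data_task.artifacts["dataset"].url
print(dataset_url)
```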
Having a pipeline controller and running it actually seems to work, as long as I have them as separate notebooks
Got the engine running.
`curl <serving-engine-ip>:8000/v2/models/keras_mnist/versions/1`
What’s the serving-engine-ip supposed to be?