Does a pipeline step behave differently?
Thanks for the confirmation.
Yes using clearml-data.
Can I pass an S3 path to ds.add_files(), essentially so that I can register a dataset directly without having to pull the files to local storage and then upload them again? Does that make sense?
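Something like this is what I'm after (rough sketch - I'm assuming the Dataset API has a way to register remote links, e.g. add_external_files, rather than only local paths, and the project/bucket names are made up):
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="raw-data")  # hypothetical names
# assumption: add_external_files registers the S3 objects as links instead of copying them locally first
ds.add_external_files(source_url="s3://my-bucket/raw-data/")
ds.upload()     # nothing local to push in this case
ds.finalize()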
Will try it out. Pretty impressed 🙂
Ah…that’s great!
Initially it was complaining about it, but once I used connect_configuration it started working.
AgitatedDove14 is it possible, from within a step, to get the pipeline task that is running that step? Is task.parent something that could help?
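Roughly what I'm trying (sketch - assuming task.parent on the step's task holds the controller's task id):
from clearml import Task

step_task = Task.current_task()   # the step currently executing
parent_id = step_task.parent      # assumption: id of the task that launched this step
if parent_id:
    pipeline_task = Task.get_task(task_id=parent_id)
    print(pipeline_task.id, pipeline_task.name)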
No, all of them completed!
For different workloads, I need to have different cluster scaler rules and account for different GPU needs.
Would this be a good use case to have?
Running multiple k8s_daemon instances, right? k8s_daemon("1xGPU")
and k8s_daemon('cpu')
right?
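i.e. something along these lines (sketch based on the k8s glue example; the K8sIntegration constructor options are a guess, and each daemon would run in its own process):
from clearml_agent.glue.k8s import K8sIntegration

# process 1 - serve the GPU queue
k8s_gpu = K8sIntegration()    # assumption: defaults; real setups pass pod templates / overrides
k8s_gpu.k8s_daemon("1xGPU")   # blocks, pulling jobs from the "1xGPU" queue

# process 2 - serve the CPU queue
k8s_cpu = K8sIntegration()
k8s_cpu.k8s_daemon("cpu")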
This is for building my model package for inference
Ok, the code suggests so. Looking for more powerful pipeline scheduling, e.g. triggering on dataset publish, actions on model publish, etc.
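What I had in mind is something like ClearML's TriggerScheduler (rough sketch - I haven't checked the exact argument names, and the task ids/queues below are placeholders):
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler()  # assumption: defaults are fine for a sketch
# fire a pre-built task whenever a dataset in the project is published (assumed kwargs)
trigger.add_dataset_trigger(
    schedule_task_id="<task-id-to-clone>",
    schedule_queue="default",
    trigger_project="datasets",
    trigger_on_publish=True,
)
# same idea for model publish events (assumed kwargs)
trigger.add_model_trigger(
    schedule_task_id="<task-id-to-clone>",
    schedule_queue="default",
    trigger_on_publish=True,
)
trigger.start()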
PipelineController with 1 task. That 1 task passed but the pipeline says running
Having a pipeline controller and running it actually seems to work, as long as I have them as separate notebooks.
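For reference, the controller side is basically just this (minimal sketch, assuming an existing base task to clone; names are made up):
from clearml.automation import PipelineController

pipe = PipelineController(
    name="single-step-pipeline",   # hypothetical names
    project="examples",
    version="0.0.1",
)
pipe.add_step(
    name="stage_one",
    base_task_project="examples",
    base_task_name="my base task",  # assumption: a pre-existing task to clone
)
# expectation: the controller should flip to completed once the single step finishes
pipe.start()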
It’s essentially this now:
from clearml import Task
print(Task.current_task())
A channel here would be good too 🙂
Was able to use ScriptRequirements
and get what I need. thanks!
Yeah. Curious - are a lot of ClearML use cases not geared for notebooks?
AgitatedDove14 - I mean this - it says name=None but the text says the default is General.
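For what it's worth, passing the name explicitly side-steps the None-vs-General question (small sketch with made-up names):
from clearml import Task

task = Task.init(project_name="examples", task_name="config-demo")  # hypothetical names
# pass name explicitly instead of relying on the default section name
config = task.connect_configuration({"batch_size": 32}, name="General")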
AgitatedDove14 - looks like the serving is doing the savemodel stuff?
https://github.com/allegroai/clearml-serving/blob/main/clearml_serving/serving_service.py#L554
A lot of us are averse to using a git repo directly.
You mean the job with the exact same arguments?
Yes
But it seems to turn the current task into the data-processing task. I don't want it to take over the task.