Which would also mean that the system knows which datasets are used in which pipelines, and so on.
Like input artifacts per Task?
More interested in some way of doing this system-wide.
Time-based, dataset creation, model publish (tag).
Anything you think is missing?
Good news: a dedicated class for exactly that will be out in a few days 🙂
Basically a task scheduler and a task trigger scheduler, running as a service, cloning/launching tasks either based on time (cron-like) or based on a trigger.
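To make the idea concrete, usage could look something like this. This is a hypothetical sketch only: since the class isn't released yet, the `TriggerScheduler` name, the `add_dataset_trigger` method, and all parameter names here are assumptions, not a confirmed API.

```python
# Hypothetical sketch of the upcoming trigger service
# (class, method, and parameter names are assumptions, not a released API).
from clearml.automation import TriggerScheduler

# A service that polls the backend every few minutes for trigger events
trigger = TriggerScheduler(pooling_frequency_minutes=3)

# When a new dataset version appears in the watched project,
# clone the given pipeline task and enqueue it for execution
trigger.add_dataset_trigger(
    schedule_task_id='<pipeline_task_id>',  # pipeline task to clone/launch
    schedule_queue='default',               # queue the clone is enqueued on
    trigger_project='datasets',             # project whose datasets are watched
)

# The scheduler itself runs as a long-lived service, e.g. on a services queue
trigger.start_remotely(queue='services')
```

The same pattern would cover the time-based case with a cron-like scheduler instead of a trigger.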
wdyt?
Sorry if it was confusing. I was asking if people have set up pipelines automatically triggered on updates to datasets.
Has anyone done this exact use case: updates to datasets triggering pipelines?
Hi TrickySheep9, seems like this is following a different thread, am I missing something?