Which would also mean that the system knows which datasets are used in which pipelines etc
Like input
artifacts per Task ?
Sorry if it was confusing. Was asking if people have set up pipelines automatically triggered on updates to datasets
Good news a dedicated class for exactly that will be out in a few days 🙂
Basically a task scheduler and a task trigger scheduler, running as a service, cloning/launching tasks either based on time (cron-like) or based on a trigger.
wdyt?
time-based, dataset creation, model publish (tag),
Anything you think is missing ?
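Just to make sure we're talking about the same thing, here is a minimal sketch of the trigger pattern I mean: a service loop watching dataset versions and firing a callback (e.g. launching a pipeline) when a new version appears. Class and method names here are illustrative placeholders, not the actual upcoming API.

```python
class TriggerScheduler:
    """Hypothetical sketch: fire callbacks when a watched dataset gets a new version."""

    def __init__(self):
        self._last_seen = {}   # dataset name -> last version we launched for
        self._triggers = []    # (dataset name, callback) pairs

    def add_dataset_trigger(self, dataset, callback):
        # register a callback to run whenever `dataset` publishes a new version
        self._triggers.append((dataset, callback))

    def poll(self, current_versions):
        """One polling pass: compare current versions to last seen, fire on change."""
        launched = []
        for dataset, callback in self._triggers:
            version = current_versions.get(dataset)
            if version is not None and self._last_seen.get(dataset) != version:
                self._last_seen[dataset] = version
                callback(dataset, version)   # e.g. clone + enqueue the pipeline task
                launched.append(dataset)
        return launched


scheduler = TriggerScheduler()
scheduler.add_dataset_trigger(
    "raw-data", lambda d, v: print(f"launch pipeline for {d} v{v}")
)
scheduler.poll({"raw-data": "1.0"})   # new version -> fires
scheduler.poll({"raw-data": "1.0"})   # unchanged -> no fire
scheduler.poll({"raw-data": "1.1"})   # updated -> fires again
```

In a real service this loop would run continuously and the version lookup would hit the backend, so the system would also know which datasets feed which pipelines.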
More interested in some way of doing this system-wide
Has anyone done this exact use case - updates to datasets triggering pipelines?
Hi TrickySheep9, seems like this is following a different thread, am I missing something?