With the human activity being a step where manual validation, annotation, or feedback might be required
Hi TrickySheep9, can you be a bit more specific?
I think RoughTiger69 was discussing this exact scenario
https://clearml.slack.com/archives/CTK20V944/p1629885416175500?thread_ts=1629881415.172600&cid=CTK20V944
wdyt?
As in: run a training experiment, then a test/validation experiment to choose the best model, etc., and also have a human validate sample results via annotations, all as part of a pipeline
AgitatedDove14 sounds almost like what might be needed, will give it a shot. Thanks, as always 🙂
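
A minimal sketch of the kind of pipeline discussed above, assuming ClearML's PipelineController and three pre-existing tasks ("train model", "evaluate model", "human validation") in a hypothetical "examples" project; the project, task names, and queue are placeholders, and the human-validation step is just a regular cloned task here, whose underlying code would wait for an annotator's sign-off before completing:

```python
from clearml.automation import PipelineController

pipe = PipelineController(
    name="train-validate-annotate",  # hypothetical pipeline name
    project="examples",              # hypothetical project name
    version="0.1",
)

# Step 1: training experiment (clones the existing task and enqueues it)
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="train model",
)

# Step 2: test/validation experiment to pick the best model,
# runs only after the training step finishes
pipe.add_step(
    name="evaluate",
    parents=["train"],
    base_task_project="examples",
    base_task_name="evaluate model",
)

# Step 3: human-in-the-loop validation of sample results
# (manual annotations / feedback); the task behind this step
# would block until a human approves the sampled outputs
pipe.add_step(
    name="human_validation",
    parents=["evaluate"],
    base_task_project="examples",
    base_task_name="human validation",
)

# Run the pipeline logic itself on the services queue
pipe.start(queue="services")
```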