Hi @<1686547380465307648:profile|StrongSeaturtle89> , usually you'd either run all locally or all remotely. What's your specific use case?
I will have a try. Thank you so much.
Hi @<1686547380465307648:profile|StrongSeaturtle89> , apologies for the delay. I think the best approach would be either having the local dataset available through some network share, or separating your use-case into ETL, then training (the second can be triggered by new data becoming available)
I want to design a pipeline: 1. process the local dataset. 2. upload the local dataset to the ClearML server (self-hosted). 3. start training using this dataset. 4. save the model to the ClearML server.
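The four steps above could be sketched roughly like this with the ClearML SDK, assuming a self-hosted server already configured via `clearml.conf`. Project and file names (`my_project`, `raw_data/`, `model.pt`) are placeholders, not anything from this thread:

```python
# Hypothetical sketch of the four pipeline steps using the ClearML SDK.
# Imports are done inside the functions so the sketch stays importable
# even where clearml is not installed.

def process_and_upload(local_path: str, project: str, name: str) -> str:
    """Steps 1+2: process the local dataset, then upload it to the server."""
    from clearml import Dataset
    ds = Dataset.create(dataset_project=project, dataset_name=name)
    ds.add_files(local_path)   # register the processed files
    ds.upload()                # push files to the ClearML fileserver
    ds.finalize()              # freeze this dataset version
    return ds.id


def train(dataset_id: str, project: str) -> None:
    """Steps 3+4: fetch the dataset, train, and save the model to the server."""
    from clearml import Dataset, Task
    task = Task.init(project_name=project, task_name="train")
    data_dir = Dataset.get(dataset_id=dataset_id).get_local_copy()
    # ... run your training loop over data_dir, writing model.pt ...
    task.upload_artifact("model", artifact_object="model.pt")  # step 4
```

Step 3 could also be triggered remotely (e.g. by a ClearML agent watching for new dataset versions) rather than called directly, which matches the ETL-then-training split suggested above.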