Hi.
Looking into ClearML support for datasets, I'd like to understand how to work with large datasets and cases where not all the data is downloaded at once. (E.g. 1. each training epoch is performed on a (preferably random) sample of the data that is downloaded
Hi PanickyMoth78, while ClearML Datasets are meant for cases where the entire metadata fits in memory (or on disk), the use case you're describing is exactly where HyperDatasets come into play. They let you use backend-supported iterators to iterate over your metadata (optionally in random order), with automatic fetching and caching of the raw data as required. The same iterators can of course also be used in cases where a data split is required.
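To make the pattern concrete, here is a minimal sketch of the idea behind such an iterator: metadata entries stay in memory, each epoch draws a fresh random sample, and the raw payload for an entry is fetched only on first use and cached afterwards. This is an illustration of the pattern only, not the HyperDataset API; `LazySampleIterator` and `fetch_fn` are hypothetical names.

```python
import random

class LazySampleIterator:
    """Illustrative sketch (not the ClearML HyperDataset API): iterate over
    metadata entries in random order, fetching and caching raw data on demand."""

    def __init__(self, metadata, fetch_fn, sample_size=None, seed=None):
        self.metadata = list(metadata)    # lightweight entries that fit in memory
        self.fetch_fn = fetch_fn          # downloads the raw payload for one entry
        self.sample_size = sample_size or len(self.metadata)
        self.cache = {}                   # entry -> raw data, filled lazily
        self.rng = random.Random(seed)

    def epoch(self):
        # draw a fresh random sample of the metadata for each training epoch
        for entry in self.rng.sample(self.metadata, self.sample_size):
            if entry not in self.cache:   # fetch raw data only when first needed
                self.cache[entry] = self.fetch_fn(entry)
            yield entry, self.cache[entry]

fetched = []

def fake_fetch(name):
    fetched.append(name)                  # stand-in for a network download
    return f"raw:{name}"

it = LazySampleIterator(["a", "b", "c", "d"], fake_fetch, sample_size=2, seed=0)
epoch1 = list(it.epoch())
epoch2 = list(it.epoch())
```

Because the cache is keyed by entry, an item sampled in consecutive epochs is downloaded only once; a real backend iterator would additionally bound the cache and evict old items.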