I have a question regarding Dataset versioning
Let's say I create Dataset A, which has 1000 files, and then I create Dataset B with Dataset A as its parent. All I did was delete 900 files that were corrupted.
When I pull a local copy of Dataset B, do I just pull 100 files from the server? Or do I pull 1000 and then ClearML, behind the scenes, deletes 900 to leave me with 100?
I want to know for the case where the parent dataset is huge relative to the child dataset. Will I inadvertently pull the parent dataset from the remote and then downsize it to the child, or do I just pull the child from the server?
Hi @<1547028031053238272:profile|MassiveGoldfish6> , the expected behavior would be pulling only the 100 files 🙂
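For reference, a minimal sketch of the workflow being discussed, assuming the standard `clearml.Dataset` API (the project/dataset names and the `corrupted/*` path are illustrative, not from the original exchange):

```python
from clearml import Dataset

# Dataset A: the original 1000-file dataset, already finalized on the server.
parent = Dataset.get(dataset_project="examples", dataset_name="dataset_a")

# Dataset B: a child of A that only removes the 900 corrupted files.
child = Dataset.create(
    dataset_name="dataset_b",
    dataset_project="examples",
    parent_datasets=[parent.id],
)
# Logical removal against the parent's file list; nothing is re-uploaded.
child.remove_files(dataset_path="corrupted/*")
child.upload()    # only the diff vs. the parent is uploaded (nothing new here)
child.finalize()

# Pulling Dataset B should fetch only what is needed for the remaining 100 files.
local_path = Dataset.get(
    dataset_project="examples", dataset_name="dataset_b"
).get_local_copy()
print(local_path)
```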
Does this apply if I'm using external S3 storage? Because the stored data appears as a large zip file in S3.