Hi all! Currently I am trying to create a tool that can perform certain operations on dataset ids, this is a skeleton of what I have in mind (based on the examples):
```python
from argparse import ArgumentParser
from clearml import Dataset

# adding a command line interface, so it is easy to use
parser = ArgumentParser()
parser.add_argument('--dataset', default='aayyzz', type=str, help='Dataset ID to train on')
parser.add_argument('--height', default=1.6, type=float, help='Minimum height')
args = parser.parse_args()

# getting a local copy of the dataset
parent_dataset = Dataset.get(dataset_id=args.dataset)
dataset_folder = parent_dataset.get_mutable_local_copy('./dataset_copy')  # target folder is required; the path is just an example

# here I filter files in dataset_folder based on the height
...

# create a new dataset and upload files (?)
child_dataset = Dataset.create(..., parent_datasets=[parent_dataset])
child_dataset.add_files(dataset_folder)
child_dataset.upload()
```

I just wanted to know if this is the best approach, or whether there are other methods on Dataset that can help. Some questions regarding the approach:

- Will it generate two copies of the dataset even if the operation only removes some files from dataset_folder? If some files are changed, will there be any difference?
- If I have the files on AWS, GCP, etc., does get_mutable_local_copy() download the files every time, or does it work like artifacts, where there is caching? Assume I run two different operations from the same parent dataset, i.e. filter by height and filter by age.
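To make the filtering step in the skeleton concrete, here is one possible sketch of it. Everything here is hypothetical: it assumes each data file (e.g. `a.jpg`) has a matching `a.json` sidecar with a `height` field; adapt the layout and extension to your actual schema.

```python
import json
from pathlib import Path

def filter_by_height(dataset_folder: str, min_height: float) -> None:
    """Delete samples whose sidecar metadata reports a height below min_height.

    Assumes a hypothetical layout where every data file has a matching
    '<name>.json' sidecar containing a 'height' field.
    """
    for meta_path in Path(dataset_folder).glob("*.json"):
        meta = json.loads(meta_path.read_text())
        if meta.get("height", 0.0) < min_height:
            meta_path.unlink()                         # drop the metadata file
            data_path = meta_path.with_suffix(".jpg")  # hypothetical data extension
            if data_path.exists():
                data_path.unlink()                     # drop the data file itself
```

After this runs, `dataset_folder` contains only the surviving samples, which is what `child_dataset.add_files(dataset_folder)` would then pick up.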

  
  
Posted 3 years ago
Votes Newest

Answers 2


Hi GrievingTurkey78
First, I would look at the CLI clearml-data as a baseline for implementing such a tool:
Docs:
https://github.com/allegroai/clearml/blob/master/docs/datasets.md
Implementation:
https://github.com/allegroai/clearml/blob/master/clearml/cli/data/main.py
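For orientation, a hedged sketch of the clearml-data CLI flow such a tool could mirror (the project/dataset names, file paths, and `<parent_dataset_id>` are placeholders; check `clearml-data --help` for the exact flags on your version):

```shell
# create a child dataset version based on the parent
clearml-data create --project my_project --name filtered_by_height \
    --parents <parent_dataset_id>
# remove the files that failed the filter
clearml-data remove --files data/too_short_1.jpg data/too_short_2.jpg
# finalize and upload the new version
clearml-data close
```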
Regarding your questions:
(1) No, a new dataset version will only store the diff from its parent (if files are removed, it stores metadata saying the files were removed)
(2) Yes, any get operation will download, unzip, and merge the files into local storage, for easier access. The 'mutable' copy will create a real copy of the files, whereas the "regular" get will create softlinks to the locally cached copy of the unzipped files
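A toy illustration of that difference in plain Python (not the actual ClearML internals): the "regular" get behaves like a softlink into the shared cache, while the mutable copy duplicates the bytes so edits cannot touch the cache.

```python
import os
import shutil
import tempfile
from pathlib import Path

# a stand-in for the shared local cache of unzipped dataset files
cache = Path(tempfile.mkdtemp(prefix="cache_"))
(cache / "sample.txt").write_text("original")

# mutable copy: a real duplicate, safe to modify
mutable = Path(tempfile.mkdtemp(prefix="mutable_"))
shutil.copy(cache / "sample.txt", mutable / "sample.txt")
(mutable / "sample.txt").write_text("edited")

# "regular" copy: a softlink back into the cache; editing it would touch the cache
regular = Path(tempfile.mkdtemp(prefix="regular_"))
os.symlink(cache / "sample.txt", regular / "sample.txt")

print((cache / "sample.txt").read_text())    # cache untouched by the mutable edit: "original"
print((regular / "sample.txt").read_text())  # the softlink sees the cached bytes: "original"
```

This is also why running two different operations (filter by height, filter by age) from the same parent is cheap on the read side: both can reuse the same cached download, and only the mutable copies diverge.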

  
  

Thanks AgitatedDove14

  
  