Two questions about datasets

Two questions about datasets:
Question 1: Are parallel writes to a dataset with the same version possible? Is the way to go to have a task that creates a Dataset object, which is in turn passed as an artifact to the subsequent ingestion tasks? After the parallel ingestion, is it possible to finalize the dataset creation in a follow-up task? Is that the way to go?
Question 2: If a dataset has been created, files have been added, and the dataset has been finalized, what's the recommended way to append to the dataset in a future version? Should I call Dataset.get(...).get_local_copy(), then create a new dataset, add both the files of the local copy and the new files to it, and finalize it? Or should I add the new files to the directory the dataset's files were copied to and call sync? I guess in that case I would have to call get_mutable_local_copy(). In the second case, I guess, only references are passed for the old files, whereas in the first scenario all files would be added as new files, which might blow up the storage. Or should I add child datasets as proposed in (urbansound_sample) and (MNIST sample)?
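For concreteness, the two approaches look roughly like this (a sketch against the clearml Dataset API as I understand it; project/dataset names and paths are placeholders):

from clearml import Dataset

# Approach 1: copy the old files locally, then re-add everything to a new dataset
old_ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
new_ds = Dataset.create(dataset_project="my_project", dataset_name="my_dataset")
new_ds.add_files(old_ds.get_local_copy())  # old files re-added as plain files
new_ds.add_files("path/to/new_files")
new_ds.upload()
new_ds.finalize()

# Approach 2: take a mutable copy, drop the new files into it, then sync
work_dir = old_ds.get_mutable_local_copy("working_copy")
# ... copy the new files into work_dir ...
new_ds = Dataset.create(dataset_project="my_project", dataset_name="my_dataset",
                        parent_datasets=[old_ds.id])
new_ds.sync_folder(work_dir)  # with a parent set, only the diff should be uploaded
new_ds.upload()
new_ds.finalize()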

Posted 2 months ago

6 Answers


Hi @<1661542579272945664:profile|SaltySpider22> I'm not sure I understand the answer to my parallel question

Posted 2 months ago

Yes, or (because I deployed ClearML using Helm in Kubernetes) from the same machine, but with multiple pods (tasks).

Oh, now I see. Long story short: no 😞 The correct way of doing that is for every node/pod to create its own dataset; then, when you are done, you create a new version with the X datasets you created as parents. The newly created version is just "meta": it basically tells the system how to combine the previously generated datasets (i.e. no data is actually re-uploaded).
The version tree should look something like this (a code sketch follows the tree):

    [x]
     |
 +---+---+
 |   |   |
[a] [b] [c]
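
A minimal sketch of the pattern, assuming the standard clearml Dataset API (names and IDs are illustrative):

from clearml import Dataset

# On each node/pod: create, fill, and finalize an independent dataset
shard = Dataset.create(dataset_project="my_project", dataset_name="ingest_shard")
shard.add_files("local_shard_dir")
shard.upload()
shard.finalize()
# report shard.id back to the final task, e.g. via a task parameter

# On the final task: combine the shards into one "meta" version
combined = Dataset.create(
    dataset_project="my_project",
    dataset_name="combined",
    parent_datasets=[shard_id_a, shard_id_b, shard_id_c],  # IDs collected above
)
combined.upload()    # nothing is re-uploaded; this version only references its parents
combined.finalize()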
Posted 2 months ago

Hi @<1661542579272945664:profile|SaltySpider22>

Question 1: Are parallel writes to a dataset with the same version possible?

When you say parallel, what do you mean? From multiple machines?

What's the recommended way to append to the dataset in a future version?

Once a dataset is finalized, the only way to add files is to create another version that inherits from the previous one (i.e. the finalized version becomes the parent of the new version).
If you are worried about piling up versions: just like in git, you can squash 🙂 (see the sketch below)
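
In code, that might look like this (a sketch; the parent dataset ID is a placeholder):

from clearml import Dataset

# New version that inherits from the finalized dataset
child = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    parent_datasets=["<finalized_dataset_id>"],
)
child.add_files("path/to/new_files")  # only the new files are uploaded
child.upload()
child.finalize()

# Optionally collapse a long version chain into a single flat dataset (the "squash")
flat = Dataset.squash(dataset_name="my_dataset_flat", dataset_ids=[child.id])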

passing Dataset objects between tasks as artifacts does not seem to be possible,

The correct way would be to pass the Dataset ID; the other task would then simply get it with Dataset.get.
No need to worry about re-downloading, everything is automatically cached.
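For example (a sketch; the parameter name is just an assumed convention):

from clearml import Task, Dataset

# Producer task: publish the dataset ID (the Dataset object itself is not picklable)
Task.current_task().set_parameter("General/dataset_id", my_dataset.id)  # my_dataset created earlier

# Consumer task: resolve the ID and fetch a cached local copy
dataset_id = Task.current_task().get_parameter("General/dataset_id")
local_path = Dataset.get(dataset_id=dataset_id).get_local_copy()  # repeated calls hit the cache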
Makes sense?

Posted 2 months ago

@<1523701205467926528:profile|AgitatedDove14>

When you say parallel, what do you mean? From multiple machines?

Posted 2 months ago

Hey @<1523701205467926528:profile|AgitatedDove14>,
sorry, I am quite new to Slack... I forgot to submit my edits to the answer...

When you say parallel, what do you mean? From multiple machines?

Yes, or (because I deployed ClearML using Helm in Kubernetes) from the same machine, but with multiple pods (tasks).

Once a dataset is finalized, the only way to add files is to create another version that inherits from the previous one (i.e. the finalized version becomes the parent of the new version).
If you are worried about piling up versions: just like in git, you can squash

Okay, great. Thank you so much!

The correct way would be to pass the Dataset ID; the other task would then simply get it with Dataset.get.
No need to worry about re-downloading, everything is automatically cached.

Sounds good, thanks for the clarification.

Posted 2 months ago

Regarding question 1: passing Dataset objects between tasks as artifacts does not seem to be possible; I get the following error message:

TypeError: cannot pickle '_thread.lock' object

So I guess it's not possible to upload files from different tasks in parallel to the same dataset before finalizing it.

Posted 2 months ago