Answered
Hi All, Two Questions:

Hi all, two questions:
1. I created a Dataset and am using it to train models. What's the recommended way to connect it to my training tasks for future reproducibility? Should I just upload it as an artifact, or is there a way to connect its ID with the training task?
2. What's the best way of logging my models' performance metrics so I can compare between several training runs with different parameters? I don't use any special frameworks; I just need to keep track of final accuracy and a few more metrics per run.

Thanks!

  
  
Posted 2 years ago

Answers 4


OK, so:
You recommend just saving the dataset ID as part of the task configuration? I think I was a bit unclear. My question is how I should report them from the code. They are not caught automatically because they are custom parameters I calculate, not part of any framework, so I wonder whether I should report them as artifacts, or maybe scalars. My issue with scalars is that I only have one of each type, and the API seems to be oriented toward a series of results of the same type.

  
  
Posted 2 years ago

great, thanks!

  
  
Posted 2 years ago

Hi, regarding your questions:
1. If you create and finalize the dataset, it should upload the file contents to the fileserver (or any other storage you configure). The dataset is an object similar to a task, so it has a unique ID you can reference from your training task.
2. You can add metric columns to the experiments table by clicking the little cog wheel at the top right of the table. You can also select multiple experiments and compare them (bottom left of the bar that appears after selecting more than one experiment).
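
For what it's worth, here is a minimal sketch of the first part, assuming the clearml Python SDK (the project/task names and the dataset ID placeholder are illustrative):

    from clearml import Task, Dataset

    # Initialize the training task as usual
    task = Task.init(project_name="my_project", task_name="training_run")

    # Fetch the finalized dataset by its ID and record that ID in the task
    # configuration so the run can be reproduced later
    dataset = Dataset.get(dataset_id="<your_dataset_id>")
    task.connect({"dataset_id": dataset.id}, name="dataset")

    # Download (and cache) the dataset contents locally for training
    dataset_path = dataset.get_local_copy()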

  
  
Posted 2 years ago

That's an option. It really depends on your usage: if you want those 'custom parameters' to be accessible by other tasks, save them as artifacts. If you only want visibility, save them as scalars. There is a nice usage example here: https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py
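
A minimal sketch of both options, assuming the clearml SDK (metric names and values are illustrative):

    from clearml import Task

    task = Task.init(project_name="my_project", task_name="training_run")
    logger = task.get_logger()

    # Custom metrics computed outside any framework (illustrative values)
    metrics = {"accuracy": 0.93, "f1": 0.88}

    # Option 1: report each value once as a scalar (iteration 0) so it shows up
    # in the experiments table and can be added as a metric column
    for name, value in metrics.items():
        logger.report_scalar(title=name, series=name, value=value, iteration=0)

    # Option 2: upload the whole dict as an artifact so other tasks can fetch it
    task.upload_artifact(name="final_metrics", artifact_object=metrics)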

  
  
Posted 2 years ago
1K Views
4 Answers