Answered

Hi everyone!

I have a question regarding a specific use case for tasks. To run hyperparam optimization, I have a function that evaluates a model on a bunch of videos and outputs a metric. I would like to log the results somewhere, so that I can easily retrieve the hyperparams -> metric mapping and do some analysis. I'd like to avoid creating a task for each function execution, since I can easily get to ~500 runs per model. I'd also want to re-use the results, so if I run the hyperparam search again for another 500 runs, I end up with 1000 total experiments.
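The accumulate-and-reuse pattern described above (500 runs now, another 500 later, 1000 total) can also be handled outside of tasks entirely, by keeping one results file per model and appending a row per run. A minimal, framework-agnostic sketch (the file name `results.csv` and the hyperparameter names are illustrative, and it assumes each run logs the same hyperparameter keys):

```python
import csv
import os

def log_run(path, hyperparams, metric):
    """Append one hyperparams -> metric row, writing a header on first use."""
    fieldnames = [*sorted(hyperparams), "metric"]
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if new_file:
            writer.writeheader()
        writer.writerow({**hyperparams, "metric": metric})

def load_runs(path):
    """Read back all accumulated runs as a list of dicts for analysis."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Re-running the search just appends more rows to the same file, so two batches of 500 naturally accumulate to 1000 without creating any per-run objects.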

I was thinking about having a single task per model that gets updated every time I re-run the search. In it I register a DataFrame and progressively add rows for each experiment, but this doesn't seem to work properly when re-using the task (I opened an issue on that: None ). I'm also not sure this would work when parallelizing the execution and having multiple workers write to the same DataFrame.
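The multiple-writers concern above can be sidestepped by never sharing the DataFrame: each worker writes its own shard file, and the shards are merged once the search finishes. A minimal stdlib sketch (the `shard-*.csv` naming is illustrative):

```python
import csv
import glob
import os

def write_shard(out_dir, worker_id, rows, fieldnames):
    """Each worker writes only its own shard, so no two processes share a writer."""
    path = os.path.join(out_dir, f"shard-{worker_id}.csv")
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

def merge_shards(out_dir):
    """Combine all shards into one list of rows after the search completes."""
    merged = []
    for path in sorted(glob.glob(os.path.join(out_dir, "shard-*.csv"))):
        with open(path, newline="") as f:
            merged.extend(csv.DictReader(f))
    return merged
```

The merged rows can then be turned into a single DataFrame and registered once, rather than updated concurrently.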

Any better ideas? Thanks for any input!

Posted one year ago

Answers 3


So the issue is that I would like to keep the list of hyperparams and metrics; if I clean them up, I lose them. But I agree that I might be overthinking it.

Posted one year ago

Hi @<1570220858075516928:profile|SlipperySheep79> , I think you might be overcomplicating it. To keep things clean, even if you have 1000 experiments, you can run the optimization in a designated project, and after it finishes running and you have the results, you can simply clean everything up.

What do you think?

Posted one year ago

Well, after you finish and select your best experiment, move it somewhere else and then cull the rest.

Posted one year ago