Answered
How do I get Trains to log hyperparameters when using PyTorch Lightning v1.x?

In relation to PyTorch Lightning v1.x, usage in combination with Trains has become much smoother (just pure TensorBoard).
However, when checking the "Configuration" tab of an experiment, it's empty.
How do I get Trains to log the hyperparameters?

I've tried to do it manually like:
Task.current_task().set_user_properties(model.hparams)

However, this returns False. I assume not everything in hparams (which is a dict) is supported by Trains, but it would be nice if at least whatever can be understood were registered.
Or am I using the wrong function here?

Posted 3 years ago

Answers 5


Hi DefeatedCrab47,

You can set the HP with a dict, like:

Task.current_task().set_user_properties(
    {
        "property_name": {"description": "This is a user property", "value": "property value"},
        "another_property_name": {"description": "This is another user property", "value": "another value"},
        "yet_another_property_name": "some value",
    }
)

or a list of dicts, like:

Task.current_task().set_user_properties(
    [
        {"name": "property_name", "description": "This is a user property", "value": "property value"},
        {"name": "another_property_name", "description": "This is another user property", "value": "another value"},
    ]
)
Can one of those do the trick for you?

Posted 3 years ago

As there are quite a few hparams, which also change depending on the experiment, I was hoping there was some automatic way of doing it?

For example, it could try to register all dict entries that match the "yet_another_property_name": "some value" pattern and ignore those that don't.
By the way, does the value have to be converted to a string?
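The "register what can be understood" behavior asked about here can be approximated with a small helper: keep only the hparams entries whose values are simple scalars and stringify them. A minimal sketch (the helper name is my own, and the set_user_properties call is commented out because it needs a live task):

```python
def scalar_hparams(hparams: dict) -> dict:
    """Keep only entries whose values are simple scalars, converted to strings."""
    return {
        name: str(value)
        for name, value in hparams.items()
        if isinstance(value, (str, int, float, bool))
    }

hparams = {"lr": 0.001, "batch_size": 32, "scheduler": {"step": 10}}
props = scalar_hparams(hparams)  # nested "scheduler" dict is dropped

# With a live task (assumption, based on the dict form shown in the answer above):
# Task.current_task().set_user_properties(props)
```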

Posted 3 years ago

DefeatedCrab47 can you share the model.hparams format?

Posted 3 years ago

You can send "yet_another_property_name": 1 too, or you can do
"another_property_name": {"description": "This is another user property", "value": "1", "type": "int"}
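Building on that typed form, a tiny helper (the name is hypothetical) can emit the {"description", "value", "type"} dict for a Python value:

```python
def typed_property(description: str, value) -> dict:
    """Build a user-property dict with an explicit type, in the form shown above."""
    return {
        "description": description,
        "value": str(value),
        "type": type(value).__name__,  # e.g. "int", "float", "str"
    }

prop = typed_property("This is another user property", 1)
# {"description": "This is another user property", "value": "1", "type": "int"}
```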

Posted 3 years ago

DefeatedCrab47 If I remember correctly, v1+ has its arguments coming from argparse.
1. Are you using this feature?
2. How do you set the TB HParams? Currently Trains does not support TB HParams; the reason is that the set of HParams needs to match a single experiment. Is that your case?
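A sketch of the argparse route mentioned above (the Task.init call is commented out because it needs a Trains server; automatic capture of argparse arguments after Task.init is my understanding of the feature being referred to, not something stated in this thread):

```python
import argparse

# from trains import Task
# task = Task.init(project_name="demo", task_name="hparams-demo")
# Once a task is initialized, argparse arguments are captured as hyperparameters.

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
parser.add_argument("--batch-size", type=int, default=32)
args = parser.parse_args([])  # empty list: use the defaults for this sketch
```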

Posted 3 years ago