Answered
Hello Everyone, Is There A Way To Log Scalars That Were Already Written In A Tensorboard File Other Than Iterating Over All Values?

Hello everyone, is there a way to log scalars that were already written in a TensorBoard file, other than iterating over all values?

  
  
Posted one year ago

Answers 5


Hi @<1620955143929335808:profile|PleasantStork44> , currently this is not possible directly. However, if you have a task with scalars, you can theoretically get all of the task's events and resend them to the new task (although this is not part of the official SDK interface and requires internal knowledge of the SDK implementation).
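For anyone looking for a starting point, here is a minimal sketch of the "resend the scalars" idea using only the public helpers Task.get_reported_scalars() and Logger.report_scalar(). Note that get_reported_scalars() is intended for UI plotting and may down-sample long series; the task id, project name, and task name below are placeholders.

```python
from clearml import Task

OLD_TASK_ID = "<old-task-id>"  # placeholder: the task whose scalars you want to copy

old_task = Task.get_task(task_id=OLD_TASK_ID)
new_task = Task.init(project_name="my_project", task_name="continued_training")
logger = new_task.get_logger()

# get_reported_scalars() returns a nested dict: {title: {series: {"x": [...], "y": [...]}}}
scalars = old_task.get_reported_scalars()
for title, series_dict in scalars.items():
    for series, points in series_dict.items():
        for iteration, value in zip(points["x"], points["y"]):
            # Re-report each point under the same title/series on the new task
            logger.report_scalar(
                title=title, series=series, value=value, iteration=int(iteration)
            )
```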

  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> I also encountered a problem with connect_configuration(): I pass it a dict at the beginning of training, and during training I can see my configs in "General", but once the PyTorch model saves the checkpoints, the configs are gone!

  
  
Posted one year ago

I added a name and it worked fine 👍
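For reference, a minimal sketch of that workaround: passing an explicit name to connect_configuration() so the dict gets its own configuration section. The dict contents and the name "training_config" are only illustrative.

```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="training_run")

config = {"lr": 1e-3, "batch_size": 32, "epochs": 10}
# Connecting with an explicit name keeps the dict as a named configuration object
config = task.connect_configuration(config, name="training_config")
```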

  
  
Posted one year ago

ok, thanks

  
  
Posted one year ago

Or just copy scalars from an old task to a new one (I want to continue training from previous checkpoints, so I need to create a new task and move the scalars of the old iterations to the new one first).

  
  
Posted one year ago