
@<1523701205467926528:profile|AgitatedDove14> , this is a great tool for visualizing all your experiments. I wanted to know: when I log scalar plots with the titles "train loss" and "test loss", they are displayed as "train loss" and "test loss" in the Scalars tab.
I want the title to be "loss", and under it I should get these two different graphs, "train loss" and "test loss". Is this possible?

  
  
Posted 5 years ago

Answers 68


And it should log it into the same task and the same project.

  
  
Posted 5 years ago

So what I have done is, rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name.

  
  
Posted 5 years ago

This code will give you one graph titled "loss" with two series: (1) train (2) test.
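
For reference, a minimal sketch of what such reporting could look like with ClearML's Logger (the task name, series names, and loss values here are illustrative, not taken from the thread):

from clearml import Task

task = Task.init(project_name="test", task_name="loss-demo")  # hypothetical task name
logger = task.get_logger()
for i, (train_loss, test_loss) in enumerate([(1.2, 1.4), (0.8, 1.0), (0.5, 0.9)]):
    # same title, different series: both curves appear under a single "loss" graph in the Scalars tab
    logger.report_scalar(title="loss", series="train", value=train_loss, iteration=i)
    logger.report_scalar(title="loss", series="test", value=test_loss, iteration=i)

The grouping is driven by the title argument, so everything reported with title="loss" lands on the same plot, one line per series.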

  
  
Posted 5 years ago

Oh I got it.

  
  
Posted 5 years ago

Can my request be made into a new feature, so that we can tag the same type of graphs under one main tag?

Sure, open a GitHub issue :)

  
  
Posted 5 years ago

Just so I understand:
the scheduler executes main every 60 sec,
main spins up X sub-processes,
and each subprocess needs to report scalars?

  
  
Posted 5 years ago

from multiprocessing import Pool
from apscheduler.schedulers.blocking import BlockingScheduler
from clearml import Task

def combined(path, exp_name, project_name):
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    # placeholder arguments (not in the original post); the real values are read from `path`
    logger.report_scalar(title="loss", series=exp_name, value=0.0, iteration=0)

def main():
    task = Task.init(project_name="test")
    pool = Pool()
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
     for index, row in temp_df.iterrows()]  # temp_df: dataframe of experiments, defined elsewhere

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()

  
  
Posted 5 years ago

No. Since you are using a Pool, there is no need to call Task.init again. Just call it once before you create the Pool; then, when you want to use it, just do task = Task.current_task().
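
In code, that suggestion would look roughly like this (a sketch only; combined, temp_df, and the reported values are carried over from the earlier snippet as assumptions):

from multiprocessing import Pool
from clearml import Task

def combined(path, exp_name, project_name):
    # pick up the task that was initialized once in the parent process
    task = Task.current_task()
    logger = task.get_logger()
    # placeholder report; the real values would be read from `path`
    logger.report_scalar(title="loss", series=exp_name, value=0.0, iteration=0)

def main():
    # Task.init is called once, before the Pool is created
    task = Task.init(project_name="test")
    pool = Pool()
    for _, row in temp_df.iterrows():  # temp_df assumed to be defined elsewhere
        pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
    pool.close()
    pool.join()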

  
  
Posted 5 years ago