
@<1523701205467926528:profile|AgitatedDove14> , this is a great tool for visualizing all your experiments. I wanted to know: when I log scalar plots with the titles "train loss" and "test loss", they are displayed as "train loss" and "test loss" in the scalar tab.
I want the title to be "loss", and under it I should get these two different graphs, "train loss" and "test loss". Is this possible?
[image]

  
  
Posted 4 years ago

Answers 68


I mean all 100 experiments in one project.

  
  
Posted 4 years ago

Are you using TensorBoard, or do you want to log directly to trains?

  
  
Posted 4 years ago

Create one experiment (I guess in the scheduler):
task = Task.init('test', 'one big experiment')
Then make sure the scheduler creates the "main" process as a subprocess (basically the default behavior).
Then the subprocess can call Task.init and it will get the scheduler Task (i.e. it will not create a new task). Just make sure they all call Task.init with the same task name and the same project name.
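A minimal sketch of that pattern (names and values here are illustrative, and the subprocess is assumed to be started with multiprocessing):

from multiprocessing import Process
from trains import Task

def worker():
    # Same project and task name as the scheduler: per the above, this
    # re-attaches to the existing "main" Task instead of creating a new one.
    task = Task.init('test', 'one big experiment')
    task.get_logger().report_scalar(title="loss", series="train", value=0.5, iteration=1)

if __name__ == '__main__':
    # The scheduler process creates the "main" Task once.
    task = Task.init('test', 'one big experiment')
    p = Process(target=worker)
    p.start()
    p.join()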

  
  
Posted 4 years ago

It will not create another 100 tasks; they will all use the main Task. Think of it as if they "inherit" it from the main process. If the main process never created a task (i.e. no call to Task.init), then they will create their own tasks (i.e. each one will create its own task and you will end up with 100 tasks).

  
  
Posted 4 years ago

So, it will create a task when I run it the first time?

  
  
Posted 4 years ago

You can do:
task = Task.get_task(task_id='uuid_of_experiment')
task.get_logger().report_scalar(...)

Now the only question is who will create the initial Task, so that the others can report to it. Do you have a "master" process?
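For example, a sketch of reporting into that existing Task (the ID string is a placeholder):

from trains import Task

task = Task.get_task(task_id='uuid_of_experiment')  # placeholder ID
task.get_logger().report_scalar(title="loss", series="train", value=0.42, iteration=10)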

  
  
Posted 4 years ago

@<1523701205467926528:profile|AgitatedDove14> I want to log directly to trains using logger.report_scalar

  
  
Posted 4 years ago

Then, if there are 10 experiments, do I have to call Task.create() for those 10 experiments?

  
  
Posted 4 years ago

Now, after the 1st iteration is completed, my script runs again automatically after 5 minutes, and then it logs to the trains server again.

  
  
Posted 4 years ago

If you want each "main" process as a single experiment, just don't call Task.init in the scheduler.

  
  
Posted 4 years ago

logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)

  
  
Posted 4 years ago

So, if a validation loss appears, then there will be three sub-tags under the one main tag "loss"?

  
  
Posted 4 years ago

No. Since you are using Pool, there is no need to call Task.init again. Just call it once before you create the Pool; then, when you want to use it, just do task = Task.current_task().
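A sketch of that flow (pool size and names are illustrative):

from multiprocessing import Pool
from trains import Task

def worker(exp_name):
    # No Task.init here: per the above, reuse the Task created before the Pool.
    logger = Task.current_task().get_logger()
    logger.report_scalar(title="loss", series=exp_name, value=0.1, iteration=0)

if __name__ == '__main__':
    task = Task.init(project_name="test", task_name="main")  # called once, before the Pool
    with Pool(4) as pool:
        pool.map(worker, ["exp1", "exp2"])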

  
  
Posted 4 years ago

So, if I call Task.init() before that line, there is no need to call Task.init() on line 92?

  
  
Posted 4 years ago

from multiprocessing import Pool
from apscheduler.schedulers.blocking import BlockingScheduler
from trains import Task

def combined(path, exp_name, project_name):
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    logger.report_scalar(title="loss", series=exp_name, value=0.0, iteration=0)  # placeholder args

def main():
    task = Task.init(project_name="test")
    pool = Pool()
    # temp_df (defined elsewhere) has one row per experiment
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name'])) for index, row in temp_df.iterrows()]

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
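For reference, a sketch of the change suggested above (a single Task.init before the Pool, and Task.current_task() inside the worker); temp_df and its column names are assumed from the snippet:

from multiprocessing import Pool
from apscheduler.schedulers.blocking import BlockingScheduler
from trains import Task

def combined(path, exp_name, project_name):
    # Reuse the main Task instead of creating a new one per experiment.
    logger = Task.current_task().get_logger()
    logger.report_scalar(title="loss", series=exp_name, value=0.0, iteration=0)  # placeholder values

def main():
    Task.init(project_name="test")  # created once; the Pool workers inherit it
    with Pool() as pool:
        jobs = [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
                for index, row in temp_df.iterrows()]  # temp_df defined elsewhere
        for job in jobs:
            job.get()  # wait, so the pool is not torn down mid-report

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()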

  
  
Posted 4 years ago

Like, in the sidebar there should be a title called "loss", and under that there should be two different plots named "train_loss" and "test_loss".

  
  
Posted 4 years ago

You can always click on the name of the series and remove it from the display.
Why would you need three graphs?

  
  
Posted 4 years ago

Like here: in the sidebar I am getting three different plots, named "loss", "train_loss", and "test_loss".

  
  
Posted 4 years ago

It's like the main title will be "loss".

  
  
Posted 4 years ago

I have 100 experiments and I have to log them and update those experiments every 5 minutes

  
  
Posted 4 years ago

Sure

  
  
Posted 4 years ago

yes

  
  
Posted 4 years ago

and that function creates the Tasks and logs them

  
  
Posted 4 years ago

Like, if you see in the above image, my project name is abcd18, and under that there are experiments Experiment1, Experiment2, etc.

  
  
Posted 4 years ago

and then log using the logger

  
  
Posted 4 years ago

[image]

  
  
Posted 4 years ago

[image]

  
  
Posted 4 years ago

So what I have done is: rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name.

  
  
Posted 4 years ago

This code gives me the graph that I displayed above

  
  
Posted 4 years ago

what changes should I make here?

  
  
Posted 4 years ago