I mean all 100 experiments in one project
Are you using TensorBoard, or do you want to log directly to Trains?
so, like, if a validation loss appears, then there will be three sub-tags under one main tag, "loss"
like if you see in the above image, my project name is abcd18 and under that there are experiments Experiment1, Experiment2, etc.
Like here in the sidebar I am getting three different plots, named loss, train_loss and test_loss
like in the sidebar there should be a title called "loss", and under it two different plots named "train_loss" and "test_loss"
No. Since you are using Pool, there is no need to call Task.init again. Just call it once before you create the Pool; then, when you want to use it, just do task = Task.current_task()
and it should log it into the same task and same project
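For example, something along these lines (just a rough sketch, assuming the trains SDK; the worker function, project/task names and scalar values are placeholders):
from multiprocessing import Pool
from trains import Task  # `from clearml import Task` in newer versions

def worker(i):
    # re-attach to the task that was created before the Pool
    task = Task.current_task()
    task.get_logger().report_scalar("loss", "train", value=0.1 * i, iteration=i)

if __name__ == '__main__':
    # call Task.init once, before the Pool is created
    Task.init(project_name="test", task_name="one big experiment")
    with Pool(4) as pool:
        pool.map(worker, range(10))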
Hi @<1523701205467926528:profile|AgitatedDove14> , I wanted to ask you something. Is it possible that we can talk over voice somewhere so that I can explain my problem better?
See, on line 212 I am calling a function, "combined", with some arguments
from multiprocessing import Pool
from apscheduler.schedulers.blocking import BlockingScheduler
from trains import Task

def combined(path, exp_name, project_name):
    # creates a separate task per experiment
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    logger.report_scalar("loss", "train", value=0.0, iteration=0)  # placeholder values

def main():
    task = Task.init(project_name="test")
    pool = Pool()
    # temp_df is a DataFrame with one row per experiment (loaded elsewhere)
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
     for index, row in temp_df.iterrows()]

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
You can do:
task = Task.get_task(task_id='uuid_of_experiment')
task.get_logger().report_scalar(...)
Now the only question is who will create the initial Task, so that the others can report to it. Do you have like a "master" process ?
If you want each "main" process to be a single experiment, just don't call Task.init in the scheduler
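For example (a rough sketch; the project/task names and scalar values are just placeholders):
from trains import Task  # `from clearml import Task` in newer versions

# in the "master" process: create the experiment once and share its id
master_task = Task.init(project_name="test", task_name="one big experiment")
experiment_id = master_task.id

# in any other process: attach to that experiment by id and report into it
task = Task.get_task(task_id=experiment_id)
task.get_logger().report_scalar("loss", "train", value=0.05, iteration=1)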
logger.report_scalar("loss-train", "train", iteration=0, value=100)
logger.report_scalar("loss=test", "test", iteration=0, value=200)
Notice that the title of the graph is its unique id, so if you send scalars with the same "title" they will show on the same graph
This code gives me the graph that I displayed above
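So, for example, to get both series on one "loss" graph, keep the title identical and only change the series name (the values here are placeholders):
logger.report_scalar("loss", "train_loss", iteration=0, value=100)
logger.report_scalar("loss", "test_loss", iteration=0, value=200)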
then if there are 10 experiments I have to call Task.create() for those 10 experiments
And you want all of them to log into the same experiment? Or do you want an experiment per 60 sec (i.e. like the scheduler)?
Just so I understand,
scheduler executes main every 60sec
main spins X sub-processes
Each subprocess needs to report scalars ?
Create one experiment (I guess in the scheduler)
task = Task.init('test', 'one big experiment')
Then make sure the scheduler creates the "main" process as a subprocess (basically the default behavior).
Then the sub-process can call Task.init and it will get the scheduler Task (i.e. it will not create a new task). Just make sure they all call Task.init with the same task name and the same project name.
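So inside each sub-process, something like this should be enough (just a sketch, reusing the same project/task names as above; the scalar values are placeholders):
from trains import Task  # `from clearml import Task` in newer versions

# same project name + task name as the scheduler's Task.init call,
# so this attaches to the existing experiment instead of creating a new one
task = Task.init(project_name='test', task_name='one big experiment')
task.get_logger().report_scalar("loss", "train_loss", value=0.2, iteration=5)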
Can my request be made into a new feature, so that we can tag the same type of graphs under one main tag?
so, it will create a task when I run it the first time
no, I want all of them in the same experiment
my scheduler will be running every 60 seconds and calling the main function
@<1523720500038078464:profile|MotionlessSeagull22> you cannot have two graphs with the same title; the left side panel presents graph titles. That means that you cannot have title=loss series=train & title=loss series=test on two different graphs; they will always be displayed on the same graph.
That said, when comparing experiments, all graph pairs (i.e. title+series) will be displayed as a single graph, where the different series are the experiments.