each subprocess logs one experiment as a task
logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)
then if there are 100 experiments, will it create 100 tasks?
Are you using tensorboard, or do you want to log directly to trains?
See, on line 212 I am calling a function "combined" with some arguments
You can always click on the name of the series and remove it from the display.
Why would you need three graphs?
This code gives me the graph that I displayed above
No, I want all of them in the same experiment
So what I have done is: rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name
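Roughly, each worker does something like this (a simplified sketch with placeholder values; log_experiment is just an illustrative name for my per-experiment function):

from trains import Task

def log_experiment(project_name, exp_name):
    # one task per experiment, created inside the subprocess
    task = Task.create(project_name=project_name, task_name=exp_name)
    logger = task.get_logger()
    # placeholder values; the real ones come from the experiment data
    logger.report_scalar(title="loss", series="train", iteration=0, value=100)
    logger.report_scalar(title="loss", series="test", iteration=0, value=200)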
I have to create a main task, for example named "main"
and under that there will be three graphs, with titles train, test and loss
logger.report_scalar("loss-train", "train", iteration=0, value=100)
logger.report_scalar("loss=test", "test", iteration=0, value=200)
notice that the title of the graph is its unique id, so if you send scalars with the same "title" they will show on the same graph
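For example, if both calls share the title "loss", the two series show up on a single "loss" graph instead of the two separate graphs above (values are just placeholders):

logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)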
Like here in the sidebar I am getting three different plots, named loss, train_loss and test_loss
And you want all of them to log into the same experiment? Or do you want an experiment per 60sec (i.e. like the scheduler)?
I mean all 100 experiments in one project
but this gives the results in the same graph
Hi @<1523701205467926528:profile|AgitatedDove14> , I wanted to ask you something. Is it possible that we can talk over voice somewhere so that I can explain my problem better?
and that function creates a Task and logs to it
Just so I understand,
scheduler executes main every 60sec
main spins up X sub-processes
Each subprocess needs to report scalars?
Like, if you see in the above image, my project name is abcd18 and under that there are experiments Experiment1, Experiment2, etc.
Can my request be made into a new feature, so that we can tag the same type of graphs under one main tag?
then my combined function creates a sub-task using Task.create(task_name=exp_name)
main will initialize the parent task, and then my multiprocessing starts, which calls the combined function with project_name and exp_name as parameters
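In code, that part looks roughly like this (a simplified sketch; the scheduler, the actual experiment reading and the scalar logging inside combined are omitted, and the experiment list is a placeholder):

from multiprocessing import Pool
from trains import Task

def combined(project_name, exp_name):
    # creates the per-experiment sub-task; the scalar logging is as sketched earlier
    Task.create(project_name=project_name, task_name=exp_name)

def main():
    # parent task, e.g. named "main"
    Task.init(project_name="abcd18", task_name="main")
    experiments = ["Experiment1", "Experiment2"]  # placeholder list
    # each call to combined(project_name, exp_name) runs in its own subprocess
    with Pool() as pool:
        pool.starmap(combined, [("abcd18", exp) for exp in experiments])

if __name__ == "__main__":
    main()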
@<1523701205467926528:profile|AgitatedDove14> I want to log directly to trains using logger.report_scalar