logger.report_scalar("loss-train", "train", iteration=0, value=100)
logger.report_scalar("loss-test", "test", iteration=0, value=200)
notice that the title of the graph is its unique id, so if you send scalars with the same "title" they will show on the same graph
logger.report_scalar("loss", "train", iteration=0, value=100)
logger.report_scalar("loss", "test", iteration=0, value=200)
I have 100 experiments and I have to log them and update those experiments every 5 minutes
I will share my script so you can see what I am doing
no i want all of them in the same experiment
but this gives the results in the same graph
and under that there will be three graphs with the titles train, test, and loss
Just call Task.init before you create the subprocess, that's it 🙂 they will all automatically log to the same Task. You can also call Task.init again from within the subprocess; it will not create a new experiment but will use the main process's experiment.
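To make the pattern concrete, here is a minimal sketch of the advice above, assuming ClearML's `Task.init` / `get_logger` / `report_scalar` API. The project and task names, the `fake_loss` helper, and the number of subprocesses are all made up for illustration:

```python
import multiprocessing as mp

def fake_loss(iteration: int) -> float:
    # Stand-in metric so the sketch has something to report.
    return 100.0 / (iteration + 1)

def worker(iteration: int) -> None:
    # Calling Task.init again inside the subprocess does NOT create a new
    # experiment; it attaches to the Task created by the main process.
    from clearml import Task
    task = Task.init(project_name="demo", task_name="shared-run")
    task.get_logger().report_scalar(
        title="loss", series="train",
        value=fake_loss(iteration), iteration=iteration,
    )

def main() -> None:
    from clearml import Task
    # Create the Task once, BEFORE spawning any subprocesses.
    Task.init(project_name="demo", task_name="shared-run")
    procs = [mp.Process(target=worker, args=(i,)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

All three workers report into the single "shared-run" experiment, as a single "loss" graph with one series.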
Like here in the sidebar I am getting three different plots named loss, train_loss and test_loss
So you want these two on two different graphs ?
but what is happening is that it is creating a new task under the same project with the same task name
so, if I call Task.init() before that line, there is no need to call Task.init() on line number 92
Hi @<1523701205467926528:profile|AgitatedDove14> , I wanted to ask you something. Is it possible that we can talk over voice somewhere so that I can explain my problem better?
In the sidebar you get the titles of the graphs, then when you click on one you can see the different series on the graph itself
and it should log it into the same task and same project
This code will give you one graph titled "loss" with two series: (1) train (2) test
I mean all 100 experiments in one project
This code gives me the graph that I displayed above
so what I have done is, rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name
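A hypothetical reconstruction of the pattern described above (the real script was not shared): a pool of workers refreshes each of the 100 experiments, each worker opening its own Task. The experiment names, project name, pool size, and the `reuse_last_task_id` flag are illustrative assumptions:

```python
import multiprocessing as mp

# Assumed experiment names; the real script presumably reads these from somewhere.
EXPERIMENTS = [f"experiment-{i}" for i in range(100)]

def log_experiment(name: str) -> None:
    # Each worker opens (or reattaches to) its own Task, keyed by project/task name.
    from clearml import Task
    task = Task.init(
        project_name="my-project",   # same project for all 100 experiments
        task_name=name,              # one task per experiment
        reuse_last_task_id=True,     # intended to reattach rather than duplicate
    )
    task.get_logger().report_scalar("loss", "train", value=0.0, iteration=0)
    task.close()

def refresh_all() -> None:
    # Called by the scheduler every 5 minutes.
    with mp.Pool(processes=8) as pool:
        pool.map(log_experiment, EXPERIMENTS)
```

This is the per-experiment-task layout; the duplicate-task symptom mentioned below comes from each refresh creating the tasks anew instead of reattaching.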
logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)
Just so I understand,
scheduler executes main every 60sec
main spins X sub-processes
Each subprocess needs to report scalars ?
And you want all of them to log into the same experiment? Or do you want an experiment per 60 sec (i.e. one per scheduler run)?