So, if a validation loss appears, there will be three sub-tags under the one main tag, loss?
You can do:
task = Task.get_task(task_id='uuid_of_experiment')
task.get_logger().report_scalar(...)
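For example, a minimal sketch, keeping the placeholder task_id and using made-up scalar values:

from clearml import Task

# Attach to an existing experiment by its ID (placeholder)
task = Task.get_task(task_id='uuid_of_experiment')
logger = task.get_logger()

# Report a point onto the "loss" graph, "train" series
logger.report_scalar(title="loss", series="train", value=0.42, iteration=10)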
Now the only question is who will create the initial Task, so that the others can report to it. Do you have a "master" process?
You can always click on the name of the series to remove it from the display.
Why would you need three graphs?
I mean all 100 experiments in one project
This code gives me the graph that I displayed above
See, on line 212 I am calling one function, "combined", with some arguments
Then if there are 10 experiments, I have to call Task.create() for those 10 experiments.
And if there are 100 experiments, will it create 100 tasks?
Like here in the sidebar, I am getting three different plots, named loss, train_loss and test_loss
I have 100 experiments and I have to log them and update those experiments every 5 minutes
and under that there will be three graphs, titled train, test and loss
And you want all of them to log into the same experiment? Or do you want an experiment per 60 sec (i.e. like the scheduler)?
Okay, thanks @<1523701205467926528:profile|AgitatedDove14> for the help.
Hi @<1523701205467926528:profile|AgitatedDove14> , I wanted to ask you something. Is it possible that we can talk over voice somewhere so that I can explain my problem better?
logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)
Just so I understand,
scheduler executes main every 60 sec
main spins X sub-processes
Each subprocess needs to report scalars?
It will not create another 100 tasks; they will all use the main Task. Think of it as them "inheriting" it from the main process. If the main process never created a task (i.e. no call to Task.init), then they will each create their own task, and you will end up with 100 tasks.
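A minimal sketch of that inheritance (names are placeholders; this assumes the default fork start method on Linux, where a child sees the parent's task via Task.current_task()):

from multiprocessing import Process
from clearml import Task

def worker(idx):
    # Inherited from the main process; no new task is created here
    task = Task.current_task()
    task.get_logger().report_scalar(title="loss", series=f"exp_{idx}", value=0.0, iteration=0)

if __name__ == '__main__':
    # The single call that creates the main Task
    Task.init(project_name='test', task_name='one big experiment')
    processes = [Process(target=worker, args=(i,)) for i in range(3)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()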
So you want these two on two different graphs?
Create one experiment (I guess in the scheduler)
task = Task.init('test', 'one big experiment')
Then make sure the scheduler creates the "main" process as a subprocess (basically the default behavior)
Then the sub-process can call Task.init and it will get the scheduler's Task (i.e. it will not create a new task). Just make sure they all call Task.init with the same task name and the same project name.
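So inside each sub-process it would look roughly like this (a sketch; the names just have to match what the scheduler used, here the ones from the snippet above):

from clearml import Task

# Same project name and task name as the scheduler's task,
# so this attaches to it instead of creating a new one
task = Task.init(project_name='test', task_name='one big experiment')
task.get_logger().report_scalar(title="loss", series="train", value=0.1, iteration=5)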
main will initialize the parent task, and then my multiprocessing occurs, which calls the combined function with project_name and exp_name as parameters
In the sidebar you get the titles of the graphs; then, when you click on them, you can see the different series on the graphs themselves
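To illustrate (a sketch, using the logger from above; values are placeholders) -- the title decides the sidebar entry, the series decides the line inside that graph:

# One sidebar entry "loss", with two lines (series) inside it
logger.report_scalar(title="loss", series="train", value=0.30, iteration=1)
logger.report_scalar(title="loss", series="test", value=0.35, iteration=1)
# A second, separate sidebar entry "accuracy"
logger.report_scalar(title="accuracy", series="train", value=0.91, iteration=1)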
No, since you are using Pool there is no need to call Task.init again. Just call it once, before you create the Pool; then, when you want to use it, just do task = Task.current_task()
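A sketch of that with Pool (the combined function and its arguments follow your description; everything else is a placeholder, and this again assumes the fork start method so workers inherit the main task):

from multiprocessing import Pool
from clearml import Task

def combined(args):
    project_name, exp_name = args
    # Reuse the task created in the main process; do not call Task.init here
    logger = Task.current_task().get_logger()
    logger.report_scalar(title="loss", series=exp_name, value=0.0, iteration=0)

if __name__ == '__main__':
    # Call Task.init once, before the Pool is created
    Task.init(project_name='test', task_name='one big experiment')
    experiments = [('test', f'exp_{i}') for i in range(100)]
    with Pool(processes=8) as pool:
        pool.map(combined, experiments)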
So what I have done is: rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name