You can do:
task = Task.get_task(task_id='uuid_of_experiment')
task.get_logger().report_scalar(...)
Now the only question is who will create the initial Task, so that the others can report to it. Do you have something like a "master" process?
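Something like this, as a minimal sketch, assuming the "master" process already created the Task and passed its id to the workers (the function name report_from_worker is just illustrative):

from clearml import Task

def report_from_worker(task_id, iteration, value):
    # Attach to the existing experiment instead of creating a new one
    task = Task.get_task(task_id=task_id)
    # Report into that experiment's scalars from this worker process
    task.get_logger().report_scalar(title="loss", series="train", iteration=iteration, value=value)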
so, if a validation loss appears as well, then there will be three sub-tags under the one main tag "loss"
And you want all of them to log into the same experiment? Or do you want an experiment per 60 sec (i.e. one per scheduler run)?
I have 100 experiments and I have to log them and update those experiments every 5 minutes
then if there are 10 experiments, I have to call Task.create() for each of those 10 experiments
Just so I understand,
scheduler executes main every 60 sec
main spins up X sub-processes
Each subprocess needs to report scalars?
Like, if you see in the above image, my project name is abcd18 and under that there are experiments Experiment1, Experiment2, etc.
so I want "loss" to be my main title, and under that I want two different graphs, one for train loss and one for test loss
from clearml import Task
from apscheduler.schedulers.blocking import BlockingScheduler

def combined(path, exp_name, project_name):
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    logger.report_scalar(...)  # scalar reporting for this experiment goes here

def main():
    task = Task.init(project_name="test")
    # pool and temp_df (columns Path / exp_name / project_name) are defined elsewhere
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name'])) for index, row in temp_df.iterrows()]

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
logger.report_scalar("loss", "train", iteration=0, value=100)
logger.report_scalar("loss", "test", iteration=0, value=200)
Yes, but I want two graphs titled train loss and test loss, and they should be under the main category "loss"
like in the sidebar there should be a title called "loss", and under that there should be two different plots named "train_loss" and "test_loss"
It will not create another 100 tasks; they will all use the main Task. Think of it as if they "inherit" it from the main process. If the main process never created a task (i.e. no call to Task.init), then they will each create their own task, and you will end up with 100 tasks.
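As a minimal sketch of that behaviour, assuming the parent calls Task.init before spawning the pool (the names "parent" and worker are just illustrative):

from multiprocessing import Pool
from clearml import Task

def worker(idx):
    # Task.init already ran in the parent process, so this resolves to the same main Task
    logger = Task.current_task().get_logger()
    logger.report_scalar(title="loss", series=f"worker_{idx}", iteration=0, value=float(idx))

if __name__ == "__main__":
    task = Task.init(project_name="test", task_name="parent")  # the single Task all workers report into
    with Pool(4) as pool:
        pool.map(worker, range(4))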
logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)
and it should log it into the same task and same project
my scheduler will be running every 60 seconds and calling the main function
then if there are 100 experiments, won't it create 100 tasks?
and that function creates a Task and logs to it
logger.report_scalar("loss-train", "train", iteration=0, value=100)
logger.report_scalar("loss=test", "test", iteration=0, value=200)
Notice that the title of the graph is its unique id, so if you send scalars with the same "title" they will show on the same graph.
Can my request be made into a new feature, so that we can tag the same type of graphs under one main tag?
and under that there will be three graphs, titled train, test and validation
Sure, open a Git Issue :)
main will initialize the parent task, and then my multiprocessing occurs, which calls the combined function with project_name and exp_name as parameters
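For reference, a sketch of how that flow could update the existing per-experiment tasks on every cycle instead of creating new ones each run, along the lines of the Task.get_task suggestion above. It assumes each experiment task already exists on the server and skips the parent Task.init, since every sub-process attaches straight to its own experiment; the temp_df contents and reported values are placeholders:

import pandas as pd
from multiprocessing import Pool
from apscheduler.schedulers.blocking import BlockingScheduler
from clearml import Task

# Illustrative stand-in for the real temp_df (same columns as in the snippet above)
temp_df = pd.DataFrame([{"Path": "data/exp1", "exp_name": "Experiment1", "project_name": "abcd18"}])

def combined(path, exp_name, project_name):
    # Re-attach to the existing experiment by name instead of creating a new Task every run
    task = Task.get_task(project_name=project_name, task_name=exp_name)
    logger = task.get_logger()
    logger.report_scalar(title="loss", series="train", iteration=0, value=0.0)  # placeholder values

def main():
    with Pool() as pool:
        pool.starmap(combined, [(row['Path'], row['exp_name'], row['project_name'])
                                for _, row in temp_df.iterrows()])

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()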