It will not create another 100 tasks; they will all use the main Task. Think of it as if they "inherit" it from the main process. If the main process never created a task (i.e. no call to Task.init), then each subprocess will create its own task and you will end up with 100 tasks.
Then if there are 100 experiments, will it create 100 tasks?
Just so I understand,
scheduler executes main every 60sec
main spins X sub-processes
Each subprocess needs to report scalars?
each subprocess logs one experiment as task
And do you want all of them to log into the same experiment? Or do you want an experiment per 60 sec (i.e. like the scheduler)?
I mean all 100 experiments in one project.
Like, if you see in the above image, my project name is abcd18 and under it there are experiments Experiment1, Experiment2, etc.
This code will give you one graph titled "loss" with two series: (1) train (2) test
This code gives me the graph that I displayed above
@<1523720500038078464:profile|MotionlessSeagull22> you cannot have two graphs with the same title, the left side panel presents graph titles. That means that you cannot have a title=loss series=train & title=loss series=test on two diff graphs, they will always be displayed on the same graph.
That said, when comparing experiments, all graph pairs (i.e. title+series) will be displayed as a single graph, where the diff series are the experiments.
Yes, but I want two graphs, titled "train loss" and "test loss", and they should be under the main category "loss".
Can my request be made into a new feature, so that we can tag graphs of the same type under one main tag?
So, it will create a task when I run it the first time?
def combined(path, exp_name, project_name):
    # pass the variable, not the literal string "exp_name"
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    logger.report_scalar(title="loss", series="train", iteration=0, value=100)
def main():
    task = Task.init(project_name="test")
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
     for index, row in temp_df.iterrows()]

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
main will initialize the parent task, and then my multiprocessing occurs, which calls the combined function with project_name and exp_name as parameters.
Then my combined function creates a sub-task using Task.create(task_name=exp_name).
My scheduler runs every 60 seconds and calls the main function.
logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)
Can my request be made into a new feature, so that we can tag graphs of the same type under one main tag?
Sure, open a Git Issue :)
Okay, thanks @<1523701205467926528:profile|AgitatedDove14> for the help.
So what I have done is: rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name.