Are you using TensorBoard, or do you want to log directly to trains?
but what is happening is that it creates a new task under the same project with the same task name
```python
def combined(path, exp_name, project_name):
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    logger.report_scalar(...)  # title, series, iteration, value go here

def main():
    task = Task.init(project_name="test")
    # pool and temp_df are created elsewhere
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
     for index, row in temp_df.iterrows()]

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
```
No. Since you are using a Pool, there is no need to call Task.init again. Just call it once, before you create the Pool; then, whenever you need the task, just call task = Task.current_task()
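A runnable sketch of that pattern, with trains' Task replaced by a tiny stand-in so the idea is visible without the library (init_task / current_task below are hypothetical stand-ins for Task.init / Task.current_task; the real point is: initialize once in the parent, then fetch the same object from any forked worker):

```python
from multiprocessing import get_context

_task = None  # module-level slot, standing in for trains' notion of "the current task"

def init_task(project_name):
    # Stand-in for Task.init(): call exactly once, in the parent process.
    global _task
    _task = {"project_name": project_name}
    return _task

def current_task():
    # Stand-in for Task.current_task(): forked workers inherit _task from the parent.
    return _task

def combined(path):
    # No re-initialization here -- just grab the task created in the parent.
    task = current_task()
    return (task["project_name"], path)

def main():
    init_task("test")  # once, before the Pool is created
    # "fork" is used explicitly: with "spawn" (Windows default) the workers
    # would not inherit the parent's globals and this sketch would not apply.
    with get_context("fork").Pool(2) as pool:
        async_results = [pool.apply_async(combined, (p,)) for p in ["a", "b"]]
        return [r.get() for r in async_results]
```

Calling main() returns one result per scheduled path, each tagged with the project name that was set up only once in the parent.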
This code will give you one graph titled "loss" with two series: (1) train (2) test
```python
logger.report_scalar("loss", "train", iteration=0, value=100)
logger.report_scalar("loss", "test", iteration=0, value=200)
```
notice that the title of the graph is its unique id, so if you send scalars with the same "title" they will show up on the same graph
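To make that grouping rule concrete, here is a tiny stand-in logger (FakeLogger is hypothetical, purely for illustration) that groups reported scalars the same way: by title first, then by series within each title:

```python
from collections import defaultdict

class FakeLogger:
    """Illustrative stand-in: groups scalars by graph title, then by series."""

    def __init__(self):
        # title -> {series -> [(iteration, value), ...]}
        self.graphs = defaultdict(dict)

    def report_scalar(self, title, series, iteration, value):
        self.graphs[title].setdefault(series, []).append((iteration, value))

logger = FakeLogger()
logger.report_scalar("loss", "train", iteration=0, value=100)
logger.report_scalar("loss", "test", iteration=0, value=200)
# Both calls share the title "loss", so they land on one graph with two series.
```

If the two calls had used different titles (e.g. "loss-train" and "loss-test"), they would land on two separate graphs with one series each.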
