so if there are 10 experiments, then I have to call Task.create() for each of those 10 experiments
def combined(path, exp_name, project_name):
    temp = Task.create(task_name=exp_name)
    logger = temp.current_logger()
    logger.report_scalar()
def main():
    task = Task.init(project_name="test")
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name'])) for index, row in temp_df.iterrows()]
from apscheduler.schedulers.blocking import BlockingScheduler

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
each subprocess logs one experiment as a task
So you want these two on two different graphs?
logger.report_scalar("loss-train", "train", iteration=0, value=100)
logger.report_scalar("loss-test", "test", iteration=0, value=200)
notice that the title of the graph is its unique id, so if you send scalars with the same "title" they will show on the same graph
now after the 1st iteration is completed, my script runs again automatically after 5 minutes, and then it logs to the trains server again
No, since you are using Pool there is no need to call Task.init again. Just call it once before you create the Pool; then, when you want to use it, just do task = Task.current_task()
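For example, a minimal sketch of that pattern (temp_df and the column names come from your snippet; the scalar values are just placeholders):

from multiprocessing import Pool
from trains import Task

def combined(path, exp_name, project_name):
    # reuse the experiment created in the main process instead of creating a new one
    task = Task.current_task()
    task.get_logger().report_scalar(title="loss", series=exp_name, iteration=0, value=0)

def main():
    # called once, before the Pool is created
    task = Task.init(project_name="test", task_name="one big experiment")
    with Pool() as pool:
        jobs = [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
                for index, row in temp_df.iterrows()]
        for job in jobs:
            job.get()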
This code will give you one graph titled "loss" with two series: (1) train (2) test
but what is happening is that it is creating a new task under the same project with the same task name
See, on line 212 I am calling one function, "combined", with some arguments
so, if I call Task.init() before that line, there is no need to call Task.init() on line 92
This code gives me the graph that I displayed above
I will share my script, you can see what I am doing
logger.report_scalar(title="loss", series="train", iteration=0, value=100)
logger.report_scalar(title="loss", series="test", iteration=0, value=200)
Hi @<1523701205467926528:profile|AgitatedDove14>, I wanted to ask you something. Is it possible that we can talk over voice somewhere so that I can explain my problem better?
but this gives the results on the same graph
Create one experiment (I guess in the scheduler)
task = Task.init('test', 'one big experiment')
Then make sure the scheduler creates the "main" process as a subprocess (basically the default behavior)
Then the subprocess can call Task.init and it will get the scheduler Task (i.e. it will not create a new task). Just make sure they all call Task.init with the same task name and the same project name.
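In (rough) code, that would look something like this (names reused from above; the scalar is just a placeholder):

from multiprocessing import Process
from trains import Task

def worker():
    # same project name and task name as the parent process,
    # so this attaches to the existing experiment instead of creating a new one
    task = Task.init('test', 'one big experiment')
    task.get_logger().report_scalar(title="loss", series="worker", iteration=0, value=0)

if __name__ == '__main__':
    task = Task.init('test', 'one big experiment')
    p = Process(target=worker)
    p.start()
    p.join()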
and that function creates a Task and logs to it
then my combined function creates a sub-task using Task.create(task_name=exp_name)
Just call Task.init before you create the subprocess, that's it 🙂 they will all automatically log to the same Task. You can also call Task.init again from within the subprocess, it will not create a new experiment but will use the main process experiment.
main will initialize the parent task, and then my multiprocessing occurs, which calls the combined function with project_name and exp_name as parameters
so what I have done is, rather than reading sequentially, I am reading those experiments through multiprocessing, and for each experiment I am creating a new task with the specified project_name and task_name

