No, I want all of them in the same experiment
and under that there will be three graphs, titled train, test, and loss
what changes should I make here?
This code gives me the graph that I displayed above
Hi @<1523701205467926528:profile|AgitatedDove14> , I wanted to ask you something. Is it possible to talk over voice somewhere so that I can explain my problem better?
I will share my script so you can see what I am doing
each subprocess logs one experiment as a task
I have to create a main task, for example named "main"
Okay, thanks @<1523701205467926528:profile|AgitatedDove14> for the help.
like in the sidebar there should be a title called "loss", and under that there should be two different plots named "train_loss" and "test_loss"
so what I have done is, rather than reading sequentially, I read those experiments through multiprocessing, and for each experiment I create a new task with the specified project_name and task_name
it's like the main title will be "loss"
Can we delete the models and then upload them again?
but this gives the results in the same graph
so if there are 10 experiments, then I have to call Task.create() for each of those 10 experiments
so, if a validation loss appears, then there will be three sub-tags under the one main tag, "loss"
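For context, a minimal sketch of how title/series map to the UI in trains/clearml's Logger.report_scalar (the task name "scalar_demo" and the loss values are made up for illustration): series reported under the same title are drawn as separate curves on one graph, while different titles produce separate graphs in the scalars tab.

from clearml import Task  # "from trains import Task" on older versions

task = Task.init(project_name="test", task_name="scalar_demo")
logger = task.get_logger()
for i in range(10):
    # same title, two series -> two curves on ONE "loss" graph
    logger.report_scalar(title="loss", series="train_loss", value=0.90 ** i, iteration=i)
    logger.report_scalar(title="loss", series="test_loss", value=0.95 ** i, iteration=i)
    # a different title -> a separate graph in the scalars tab
    logger.report_scalar(title="test_loss", series="loss", value=0.95 ** i, iteration=i)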
so, if I call Task.init() before that line, there is no need to call Task.init() on line number 92
@<1523701205467926528:profile|AgitatedDove14> I want to log directly to trains using logger.report_scalar
now, after the 1st iteration is completed, my script runs again automatically after 5 minutes, and it logs into the trains server again
I mean all 100 experiments in one project
from multiprocessing import Pool
from apscheduler.schedulers.blocking import BlockingScheduler
from clearml import Task  # "from trains import Task" on older versions

def combined(path, exp_name, project_name):
    temp = Task.create(project_name=project_name, task_name=exp_name)
    logger = temp.get_logger()
    logger.report_scalar(title="loss", series="train_loss", value=0.0, iteration=0)  # placeholder values

def main():
    task = Task.init(project_name="test")
    pool = Pool()  # temp_df (the experiment table) is assumed defined elsewhere
    [pool.apply_async(combined, args=(row['Path'], row['exp_name'], row['project_name']))
     for index, row in temp_df.iterrows()]
    pool.close()
    pool.join()  # wait for the subprocesses before this scheduled run ends

scheduler = BlockingScheduler()
scheduler.add_job(main, 'interval', seconds=60, max_instances=3)
scheduler.start()
and it should log into the same task and the same project
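If the goal is for each scheduled run to keep logging into the same task, a hedged sketch using Task.init's reuse_last_task_id flag (project "test" and task name "main" are taken from the messages above; whether the flag re-opens or resets the task can depend on the installed trains/clearml version, and newer releases also offer a continue_last_task option):

from clearml import Task  # "from trains import Task" on older versions

# re-open the previous task instead of creating a fresh one on every run
task = Task.init(project_name="test", task_name="main", reuse_last_task_id=True)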
First, I set no_proxy to 127.0.0.1
Then I set CURL_CA_BUNDLE to an empty string
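For reference, a minimal sketch of applying those two settings from inside the script before any connection is made (exporting them in the shell before launching works just as well):

import os

# bypass the proxy for the local server and clear curl's CA bundle,
# matching the two steps described above
os.environ["no_proxy"] = "127.0.0.1"
os.environ["CURL_CA_BUNDLE"] = ""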