I don't mean continuous training, but I want to know about your plans for it
I would like to confirm just in case.
In the desired behavior, does reuse_last_task_id=True force reuse regardless of the interval?
YummyWhale40 from the code snippet, it seems like the argument is passed.
"reuse_last_task_id=True" is the default, and it means that if the previous run of the task did not create any artifacts/models and was executed 72 hours ago (configurable), The Task will be reset (i.e. all logs cleared) and will be reused in the current run.
oh I got it. my code outputs models and the task catches them automatically.
if you have any idea how to reuse the ID even when models are output, please tell me. thx
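One possible workaround, assuming your trains version also accepts a Task ID string for reuse_last_task_id (later clearml versions do; worth verifying against the version you have installed):

from trains import Task

# Assumption: passing an explicit Task ID string to reuse_last_task_id
# forces that specific task to be reset and reused, even if the previous
# run produced models. Verify this against your trains version.
DEV_TASK_ID = '<task-id-from-the-web-ui>'  # placeholder, copy from the UI

task = Task.init(
    project_name='examples',      # placeholder project name
    task_name='dev_train_val',    # placeholder task name
    reuse_last_task_id=DEV_TASK_ID,
)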
maybe the arguments are simply passed to Task.init()
self._trains = Task.init(
    project_name=project_name,
    task_name=task_name,
    task_type=task_type,
    reuse_last_task_id=reuse_last_task_id,
    output_uri=output_uri,
    auto_connect_arg_parser=auto_connect_arg_parser,
    auto_connect_frameworks=auto_connect_frameworks,
    auto_resource_monitoring=auto_resource_monitoring,
)
In my case, during the development phase I write code and run a single-batch train-val, which includes model saving. I want TRAINS to overwrite the dev runs to keep the dashboard clean.
YummyWhale40 no idea what the pytorch-lightning guys did there. let me check the actual code.
YummyWhale40 you mean like continue training?
https://github.com/allegroai/trains/issues/160