YummyWhale40
Moderator
6 Questions, 6 Answers
  Active since 10 January 2023
  Last activity 10 months ago

Reputation: 0
Badges: 1
6 × Eureka!
0 Votes 0 Answers 523 Views
I'm facing a problem where I can't see the scalar logs and get the message "Failed to get Scalar Chart". The following image is a result of examples/manual_repor...
3 years ago
0 Votes 0 Answers 486 Views
A PR for PyTorch Lightning integration is welcome now. https://github.com/PyTorchLightning/pytorch-lightning/issues/929
3 years ago
0 Votes 9 Answers 433 Views
pytorch-lightning-bols.loggers.TrainsLogger creates new IDs even if reuse_last_task_id=True is set. How can I force it to reuse the last ID?
3 years ago
0 Votes 0 Answers 488 Views
I made a PR for PyTorch Lightning integration. https://github.com/bmartinn/pytorch-lightning/pull/1
3 years ago
0 Votes 0 Answers 499 Views
AgitatedDove14
3 years ago
0 Votes 0 Answers 583 Views
AgitatedDove14 It was caused by an ad blocker, sorry šŸ˜…
3 years ago
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

Maybe the arguments are simply passed to Task.init():
self._trains = Task.init(
    project_name=project_name,
    task_name=task_name,
    task_type=task_type,
    reuse_last_task_id=reuse_last_task_id,
    output_uri=output_uri,
    auto_connect_arg_parser=auto_connect_arg_parser,
    auto_connect_frameworks=auto_connect_frameworks,
    auto_resource_monitoring=auto_resource_monitoring,
)

3 years ago
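For context, here is a minimal usage sketch built only from the snippet above: if the logger forwards its constructor arguments unchanged to Task.init(), then asking it to reuse the last task should amount to the call below. The project and task names are placeholders, not values from this thread.

from trains import Task

# Sketch of a dev-run setup, assuming the arguments shown above reach Task.init() unchanged.
task = Task.init(
    project_name="my-project",       # placeholder project name
    task_name="dev-single-batch",    # placeholder task name
    reuse_last_task_id=True,         # request that TRAINS reuse the previous task ID
)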
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

In my case, during development I write code and run a single-batch train-val loop that includes model saving. I want TRAINS to overwrite those dev runs to keep the dashboard clean.

3 years ago
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

I would like to confirm just in case.
With the desired behavior, does reuse_last_task_id=True force reuse regardless of the interval between runs?

3 years ago
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

If you have any idea how to reuse the ID even when models are output, please tell me, thanks.

3 years ago
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

I don't mean continuous training, but I want to know about your plans for it šŸ˜‹

3 years ago
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

Oh, I got it. My code outputs models and the task catches them automatically.

3 years ago
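As an aside that is not stated in the thread: since reuse appears to stop once a task has registered output models, one possible workaround sketch is to disable automatic framework capture for dev runs, assuming auto_connect_frameworks=False (one of the Task.init parameters quoted earlier) turns that capture off.

from trains import Task

# Hypothetical workaround sketch (an assumption, not advice from the thread):
# skip automatic model/checkpoint capture so a dev task records no output models,
# which may keep it eligible for reuse via reuse_last_task_id=True.
task = Task.init(
    project_name="my-project",        # placeholder project name
    task_name="dev-single-batch",     # placeholder task name
    reuse_last_task_id=True,
    auto_connect_frameworks=False,    # disable framework auto-logging for this run
)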