
I have a second question as well, is it possible to disable any parts of the automagical logging?
In my project I use both a config and argparse. It works by giving the path to a config file as a console argument and then allowing the user to adjust values with more arguments. For example: python train.py --config config.json --learning_rate 0.0002 , so the base or default values are in config.json, but they can be overwritten when loaded. In this case, I would actually like to disable automagical logging for argparse and only explicitly log a config_dict, because that is where all the final hyperparameters are. With my current setup, I can explicitly log the config, but on the trains web page I also get a bunch of unused entries for all the argparse arguments I didn't use.
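The config-plus-argparse setup described above can be sketched in plain Python (the helper names and the temp-file demo below are illustrative, not from the actual project): base values come from a JSON config, and only the CLI flags the user actually passed override them.

```python
import argparse
import json
import tempfile


def load_hyperparameters(config_path, cli_overrides):
    """Merge base config values with CLI overrides the user actually passed."""
    with open(config_path) as f:
        config_dict = json.load(f)
    # Only keys the user explicitly supplied on the command line win.
    config_dict.update({k: v for k, v in cli_overrides.items() if v is not None})
    return config_dict


def parse_cli(argv):
    parser = argparse.ArgumentParser(description="train.py sketch")
    parser.add_argument("--config", required=True)
    parser.add_argument("--learning_rate", type=float, default=None)
    return parser.parse_args(argv)


# Demo: the base config supplies defaults, --learning_rate overrides one of them.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"learning_rate": 0.001, "batch_size": 32}, f)
    config_file = f.name

args = parse_cli(["--config", config_file, "--learning_rate", "0.0002"])
config_dict = load_hyperparameters(args.config, {"learning_rate": args.learning_rate})
print(config_dict)  # → {'learning_rate': 0.0002, 'batch_size': 32}
```

The resulting config_dict holds the final, effective hyperparameters, which is exactly the dictionary one would want to log explicitly rather than the raw argparse namespace.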

  
  
Posted 3 years ago

Answers 12


Hi UnsightlyShark53 , just a quick FYI, you can also log the entire config file config.json. It will be stored as the model configuration, and you can see it in the input/output models under the artifacts tab.
See the example here; you can pass either the path to the configuration file, or the dictionary itself after you have loaded the json, whatever is more convenient :)

  
  
Posted 3 years ago

Hi UnsightlyShark53 ,

You can disable the automatic argparse logging by setting auto_connect_arg_parser to False in the Task.init call.

You can connect / disconnect other parts too if you like:
https://github.com/allegroai/trains/blob/master/trains/task.py#L166

Can this do the trick for you?

  
  
Posted 3 years ago

AgitatedDove14 Sorry for my late response, I didn't have this Slack added on my laptop!
I can't use actual code from my project as it is work related, but this should reproduce the problem:

```python
import argparse

from trains import Task


def main(disable_trains):
    disable_trains = disable_trains.lower()
    assert disable_trains in ("n", "y", "no", "yes", "true", "false"), \
        "Invalid input to --disable_trains"
    if disable_trains in ("n", "no", "false"):
        task = Task.init(
            project_name="My project",
            task_name="Some experiment",
            output_uri="saved_models",
            auto_connect_arg_parser=False,
        )


if __name__ == "__main__":
    # pylint: disable=invalid-name
    parser = argparse.ArgumentParser(description="My Project")
    parser.add_argument("-dt", "--disable_trains", default="no", type=str,
                        help="Whether to disable TRAINS (default: no)")
    args = parser.parse_args()
    main(args.disable_trains)
```

I got around it as I said earlier, by putting the import inside the if:

```python
import argparse


def main(disable_trains):
    disable_trains = disable_trains.lower()
    assert disable_trains in ("n", "y", "no", "yes", "true", "false"), \
        "Invalid input to --disable_trains"
    if disable_trains in ("n", "no", "false"):
        # Import moved here so trains only patches argparse after parsing is done
        from trains import Task
        task = Task.init(
            project_name="My project",
            task_name="Some experiment",
            output_uri="saved_models",
            auto_connect_arg_parser=False,
        )


if __name__ == "__main__":
    # pylint: disable=invalid-name
    parser = argparse.ArgumentParser(description="My Project")
    parser.add_argument("-dt", "--disable_trains", default="no", type=str,
                        help="Whether to disable TRAINS (default: no)")
    args = parser.parse_args()
    main(args.disable_trains)
```

I will once again repeat that I do not have full insight, so maybe this is a difficult problem, but this is my naive suggestion: I do not see why this error needs to occur:

trains.errors.UsageError: ArgumentParser.parse_args() was automatically connected to this task, although auto_connect_arg_parser is turned off! When turning off auto_connect_arg_parser, call Task.init(...) before calling ArgumentParser.parse_args()

Why not connect to argparse in Task.init? If that is not an option, how about: connect to argparse like you do now, but do nothing about it until Task.init is called. If Task.init has auto_connect_arg_parser=False , then discard the information from the argparse (or keep it if that is needed) and disconnect from the argparse. Also, don't push anything to the TRAINS server until you have confirmed in Task.init that the user indeed wants the argparse input.
Valid use case: a setup where you use argparse to select which config you would like to use, in which the actual hyperparameters you would like to log ONLY exist in the config. You do not want to log any of the additional argparse options, as they are about picking a config or overwriting options in the config (leading to duplicate entries on the TRAINS server if argparse is logged).
Is it a niche case: I would not say so; this popular https://github.com/victoresque/pytorch-template uses that exact setup.

  
  
Posted 3 years ago

Hi UnsightlyShark53 I think you are absolutely right, there is no reason for the trains.errors.UsageError: ArgumentParser.parse_args() ... error.
As you mentioned, if auto_connect_arg_parser is False, it should just ignore what it picked up automatically.
I will make sure the error is resolved. I will also make sure you can still connect the argparse manually with task.connect(parser) after the Task has been created.
Thanks for the reference! I took a look here https://github.com/victoresque/pytorch-template/blob/master/train.py and as far as I can tell the arg parser is used to pick a configuration file. Are you connecting the configuration to the Task, e.g. with Task.connect_configuration('path_to_configuration') ?
Regardless, I'll upload a fix here for you to test, if that's okay with you?!

  
  
Posted 3 years ago

Thanks for the tip, but I don't think that will work in my case: the config file may not match the config actually used by the program, because hyperparameters can also be changed via argparse, and those changes are not written back to the config file. But using the config_dict will work, because that is where the changes are applied :)

  
  
Posted 3 years ago

Hi UnsightlyShark53 apologies for this delayed reply, Slack doesn't alert users unless you add @, so things sometimes get lost :(
I think you pointed at the correct culprit...
Did you manage to overcome the circular include?
BTW, how can I reproduce it? It would be nice if we could solve it.

  
  
Posted 3 years ago

AgitatedDove14 Yes, I actually upload the config as an artifact because it is easier to read when it is not flattened 🙂
Thank you, I would appreciate a fix!

  
  
Posted 3 years ago

In my config I can specify whether to use TRAINS or not, and this config is loaded with argparse. As this error message says:

trains.errors.UsageError: ArgumentParser.parse_args() was automatically connected to this task, although auto_connect_arg_parser is turned off! When turning off auto_connect_arg_parser, call Task.init(...) before calling ArgumentParser.parse_args()

I have a circular problem here. I assume it is because Trains connects to argparse on import?

I see I can get around it by importing Task after using argparse. I don't have full insight into the design of trains, so this suggestion may be unreasonable, but maybe actions such as connecting to argparse should happen when Task.init is called?

  
  
Posted 3 years ago

UnsightlyShark53 See if this one solves the problem :)
BTW: the reasoning behind the message is that when running the task with "trains-agent", if the parsing of the argparser happens before the Task is initialized, the patching code doesn't know whether it is supposed to override the values. But that scenario was fixed a long time ago, and I think the error was mistakenly left behind...

  
  
Posted 3 years ago

Fantastic! I will give that a try and let you know 🙂

  
  
Posted 3 years ago

UnsightlyShark53 Awesome, the RC is still not available on pip, but we should have it in a few days.
I'll keep you posted here :)

  
  
Posted 3 years ago

AgitatedDove14 I see! This seems to work! Thanks!

  
  
Posted 3 years ago
651 Views