Answered
Hi. One Question Regarding Instantiation Of Tasks

Hi. One question regarding instantiation of tasks. The docs state that providing reuse_last_task_id=False in Task.init will always lead to the creation of a new task. However, this does not seem to be the case. Here is an example:

from trains import Task

task1 = Task.init(project_name="myproject", task_name="task1")
task2 = Task.init(project_name="myproject", task_name="task2", reuse_last_task_id=False)

The last line leads to an error:

Current task already created and requested task name 'task2' does not match current task name 'task1'

Is this the intended behaviour? I know I could use Task.create or close the task manually beforehand, but it seems like you intended it to just work.
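For reference, a rough sketch of the "close it manually" workaround I mean (same project/task names as above):

from trains import Task

task1 = Task.init(project_name="myproject", task_name="task1")
# ... first experiment runs here ...
task1.close()  # close the current main task before initializing the next one

task2 = Task.init(project_name="myproject", task_name="task2", reuse_last_task_id=False)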

  
  
Posted 4 years ago

Answers 10


Thanks a lot, I'll have a look 🙂

  
  
Posted 4 years ago

Independently of my use case: the docstring of Task.init states a new task will always be created if
- the ``reuse_last_task_id`` parameter is assigned ``False``.

  
  
Posted 4 years ago

My grid search is essentially just a loop. You can think of my use case as:
for results_params_dict in results:
    experiment = Task.init("grid_search", "my_experiment")
    experiment.connect(results_params_dict)

  
  
Posted 4 years ago

If this is the case (you have results and you want a new task connected to each result), you can just clone the base task, update the task parameters, and enqueue it for execution (similar to the https://github.com/allegroai/trains/blob/master/examples/automation/manual_random_param_search_example.py example). Can this do the trick?
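Something along these lines (a rough sketch; the template task lookup, the parameter dict and the "default" queue are just assumptions for illustration, and a trains-agent has to be listening on that queue):

from trains import Task

# an existing experiment that serves as the template to clone
template_task = Task.get_task(project_name="grid_search", task_name="my_experiment")

for results_params_dict in results:
    cloned_task = Task.clone(source_task=template_task, name="my_experiment clone")
    cloned_task.set_parameters(results_params_dict)  # override the template's parameters
    Task.enqueue(cloned_task.id, queue_name="default")  # execute it with a trains-agent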

  
  
Posted 4 years ago

Hi ThankfulOwl72, check out the TrainsJob object. It should essentially do what you need:
https://github.com/allegroai/trains/blob/master/trains/automation/job.py#L14
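A rough sketch of what that could look like (the base task lookup, the parameter name and the "default" queue are placeholders, not something from this thread):

from trains import Task
from trains.automation.job import TrainsJob

# an existing experiment to use as the base/template
base_task = Task.get_task(project_name="grid_search", task_name="my_experiment")

job = TrainsJob(base_task_id=base_task.id,
                parameter_override={"learning_rate": 0.01})  # assumed parameter name
job.launch(queue_name="default")  # clone the base task and enqueue it for execution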

  
  
Posted 4 years ago

I cannot reuse the running task in the grid search, since that would overwrite the hyperparameters instead of creating a new experiment with the new ones.

  
  
Posted 4 years ago

I guess I could use Task.create for that but it has the downside of "not being reproducible"

  
  
Posted 4 years ago

Hi ThankfulOwl72 ,

You can create only one main execution Task. In the code you wrote, you are trying to have two tasks, which causes the exception. You can read more about the Task object in the docs: https://allegro.ai/docs/task.html#trains.task.Task .

Setting reuse_last_task_id=False will create a new task; this is not the default behaviour of https://allegro.ai/docs/task.html#task.Task.init , which by default reuses (overrides) the last task.

What is your use case? Maybe I can help with that.

BTW, you can use Task.init() without parameters to receive the currently running task (for the second call).
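For example (a small sketch of that second call):

from trains import Task

task = Task.init(project_name="myproject", task_name="task1")
# later in the same process: calling Task.init() with no arguments
# returns the main task that is already running
same_task = Task.init()
print(same_task.id == task.id)  # expected to print True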

  
  
Posted 4 years ago

Do you inherit from SearchStrategy in your implementation (you can read about it at https://allegro.ai/docs/automation_optimization_searchstrategy.html#automation-optimization-searchstrategy )? If not, can you share how you implemented it?

About the docstring, thanks 🙂 we will update it with the exceptions.

  
  
Posted 4 years ago

Thanks for the prompt answer! My use case is the following:

I am running a grid search (my own implementation, not the trains grid search) and would like the experiments to be grouped together. I wanted to achieve that by using the same project and creating each step of the grid search as a separate experiment.

  
  
Posted 4 years ago