Hello folks! I don't know if this issue has already been addressed. I have a basic PipelineController script with two steps: one task is for preprocessing purposes and the other for training a model. Currently I am placing the code related to the pack…
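
For reference, a minimal sketch of the kind of two-step setup I am describing (project, task, and queue names here are placeholders, not the real ones):

```
from clearml.automation import PipelineController

# Minimal sketch of the two-step pipeline described above;
# project, task, and queue names are placeholders.
pipe = PipelineController(
    name="preprocess-and-train",
    project="examples",
    version="0.0.1",
)

# Queue that the clearml-agent is listening to (assumption)
pipe.set_default_execution_queue("default")

# Step 1: preprocessing, cloned from an existing (template) task
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocessing task",
)

# Step 2: training, which runs after the preprocessing step
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="training task",
)

# The controller itself runs on the "services" queue by default
pipe.start()
```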


Hi Martin,

Actually, Task.add_requirements behaves as I expect, since that part of the code is in the preprocessing script, and for that task it does install all the specified packages. So my question could be rephrased as follows: when working with PipelineController, is there any way to avoid creating a new development environment for each step of the pipeline?
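
For completeness, this is roughly how the requirements are declared in the preprocessing script (the package names and versions below are just placeholders):

```
from clearml import Task

# Requirements must be declared before Task.init is called;
# the packages listed here are placeholders.
Task.add_requirements("pandas")
Task.add_requirements("scikit-learn", "1.2.0")

task = Task.init(project_name="examples", task_name="preprocessing task")
```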

According to the clearml-agent page in the official ClearML documentation ( https://clear.ml/docs/latest/docs/clearml_agent ), this seems to be the standard way in which the clearml-agent operates: it creates a new environment for each task. However, since all the tasks I have added to the pipeline could run in the same environment, I would like to be more efficient and reuse that environment once it has been configured by the first task.
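
If I understand the docs correctly, the closest built-in mechanism seems to be the agent's virtual-environment cache, configured in the agent section of clearml.conf on the worker machine. Something like the snippet below, although I am not sure it fully covers my use case, and the exact keys may depend on the clearml-agent version:

```
# clearml.conf on the machine running the agent
agent {
    venvs_cache: {
        # maximum number of cached virtual environments
        max_entries: 10
        # minimum free disk space (GB) required to keep caching
        free_space_threshold_gb: 2.0
        # setting a path enables the cache; it is disabled by default
        path: ~/.clearml/venvs-cache
    }
}
```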

Since I am testing ClearML, I do not yet have a Git repository linked to the project. I am working in VS Code on a local device connected via SSH to a remote server. I spin up the agent on the remote machine, and it is listening to the queue where the pipeline is placed.
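
For reference, the agent is started on the remote server with something like the following (the queue name is a placeholder):

```
# start a worker that pulls tasks from the "default" queue and keep it running in the background
clearml-agent daemon --queue default --detached
```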

  
  
Posted 3 years ago · 196 Views · 0 Answers