Hi, is it a well-known issue that running a task in a virtual environment messes up the reproducibility feature?


Hi @<1546303254386708480:profile|DisgustedBear75> , what do you mean?

You get different results or your experiment fails?

Running in venv mode can be more prone to failure if you're running across different operating systems and Python versions.

The default behavior of ClearML when running locally is to detect the packages used during code execution (you can also provide specific packages manually or override auto-detection entirely) and log them in the backend.

When a worker in virtual environment mode picks up an experiment, it will try to create a virtual environment using the local Python interpreter.

If some specific wheel versions aren't available for the Python version the worker runs inside the virtual environment, the experiment will fail.
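Conceptually, what the worker does looks like this (a hypothetical sketch, not the agent's real code): create a fresh venv with the local interpreter, then `pip install` the logged requirements. If a pinned wheel has no build for that Python version, pip exits non-zero and the whole task fails.

```python
import os
import subprocess
import sys
import tempfile

def rebuild_env(requirements):
    """Create a fresh venv with the local interpreter and install logged packages."""
    env_dir = tempfile.mkdtemp(prefix="task_venv_")
    subprocess.run([sys.executable, "-m", "venv", env_dir], check=True)
    bindir = "Scripts" if os.name == "nt" else "bin"
    pip = os.path.join(env_dir, bindir, "pip")
    if requirements:
        # A pinned wheel with no build for this Python version makes pip
        # exit non-zero; check=True raises, marking the experiment failed.
        subprocess.run([pip, "install", *requirements], check=True)
    return env_dir
```

This is why running the worker on a different OS or Python version than the one that logged the task is the most common cause of venv-mode failures.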

Is this your scenario?

  
  
Posted one year ago