Hi, I run the Trains server in a Docker container and started making use of tasks ... My tests are shown on the projects dashboard, which is really cool. What I haven't found so far is a way to clean up the system from the tests I did. I'm able to archive


I ran a local (not dockerized) trains-agent with `trains-agent daemon --queue training --create-queue --foreground`, which enabled me to see the GPU load on the corresponding view 🙂

Now I got another issue.
It seems that when cloning an experiment, a virtual environment is created with all the modules that were identified as being used, and the experiment runs inside this environment.
Am I right?
Is this the case only for clones?

In my Python code I'm trying to read a pandas table which I stored in parquet format. Unfortunately, when running the clone (with a changed parameter) I get an exception caused by a missing package:

```
raise ImportError(
ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'.
A suitable version of pyarrow or fastparquet is required for parquet support.
Trying to import the above resulted in these errors:
 - Missing optional dependency 'pyarrow'. pyarrow is required for parquet support. Use pip or conda to install pyarrow.
 - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet.
```
I also had this on my development system when I started using the parquet format (https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#parquet). Pandas needs a backend installed to be able to handle parquet. What I'm using locally is fastparquet (https://fastparquet.readthedocs.io/en/latest/install.html), which is loaded on demand by pandas, so I haven't added an `import fastparquet` explicitly in the code (I will do this soon to see if it resolves the exception).
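For illustration, this is roughly what I mean by adding the import explicitly; the file name and the explicit `engine=` argument are just placeholders, not my actual code, and I haven't verified yet that this is enough for the clone to pick up the dependency:

```python
import pandas as pd
# Import the backend explicitly so it is visible as a used module
# (fastparquet is otherwise only loaded on demand by pandas).
import fastparquet  # noqa: F401

# "data.parquet" is just a placeholder path for illustration.
df = pd.read_parquet("data.parquet", engine="fastparquet")
```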
But I wondered why the exception is raised only on cloned experiments.
While writing this I think I understand it now: running a script locally uses whatever is installed locally, and by instantiating a task the streams are redirected, the configuration is analyzed and stored, and so on.
When cloning an experiment, it is reconstructed from that stored information and runs in an isolated environment. If needed packages have not been identified as such, they are missing ...
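If I read the docs correctly, one can hint such a package to Trains before the task is created, so that it ends up in the recorded requirements of the clone. A minimal sketch of what I plan to try (the project and task names are just examples):

```python
from trains import Task

# Ask Trains to add fastparquet to the recorded requirements, since it is
# only imported on demand by pandas and might otherwise not be detected.
# This has to be called before Task.init().
Task.add_requirements("fastparquet")

task = Task.init(project_name="examples", task_name="read parquet table")
```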

Well, really cool stuff, this Trains product 👍
Looking forward to diving deeper into it

  
  