Hey, I hope this is the right place to ask. We're a small data science team that wants to log everything about our ML models. Looking around on the internet, mostly MLflow is being recommended, but occasionally the name Trains pops up. According to you, which one should we go with?


DefeatedCrab47
For the most part, MLflow can serve basic ML models built with scikit-learn. In contrast, Trains was designed with more general-purpose ML/DL workflows in mind, for which there is no "generic" way to serve models: different scenarios can use different input encodings, model results can be represented in a variety of forms, and so on.
Consider also that creating an HTTP endpoint for model inference is fairly easy: there are plenty of examples of Flask sitting on top of any DL/ML framework that you can build your own serving code on.
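For illustration, a minimal Flask endpoint wrapping a trained model could look like the sketch below. The model file name, route, and scikit-learn-style `predict()` call are assumptions made for the example, not anything specific to Trains or MLflow:

```python
# Minimal sketch of a Flask inference endpoint.
# "model.pkl", the /predict route, and the JSON input format are
# hypothetical choices for this example.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[1.0, 2.0, 3.0]]}
    payload = request.get_json(force=True)
    predictions = model.predict(payload["features"])
    return jsonify({"predictions": predictions.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```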

If you're considering serving your model at (even a small) scale, my best recommendation would be to set up your serving code, test it on a single machine, then package it in a Docker image and have that image deployed with k8s, Airflow, AWS Elastic Beanstalk, etc.

Trains was built to support this same approach: just write your model-serving code and import trains (you get a full API to fetch any model you need, either by ID or with search capabilities), and it will download and cache the model for you from wherever you actually store it (S3, GS, etc.). Then trains-agent can build a Docker image for you, ready to be deployed.
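As a rough sketch of that model lookup, assuming the trains SDK's InputModel interface (exact class and method names may differ between versions, and in later releases the package is called clearml, so verify against the docs):

```python
# Sketch: fetch a stored model via the trains SDK.
# Assumes the InputModel interface; names may vary by SDK version.
from trains import InputModel

# Look the model up by its ID ("<your-model-id>" is a placeholder)
model = InputModel(model_id="<your-model-id>")

# Download (and locally cache) the weights from wherever they are
# stored -- S3, GS, a local server, etc. -- and get a local file path.
weights_path = model.get_weights()

# From here, load the weights with your framework of choice,
# e.g. torch.load(weights_path) or joblib.load(weights_path).
```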

The documentation is indeed somewhat lighter than would be ideal in some areas, especially for advanced stuff like model deployment. That's why we have additional communication channels :)
