FrothyDog40 Thank you for your reply. I agree that MLflow's serving solution is not going to be of much help for real deployment. However, to me the advantage of quickly setting up an API endpoint with a single line of code is that it helps with internal experimentation. To a colleague: "Hey, this new model seems to do well, want to give it a try?".
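For reference, the one-liner I mean is MLflow's built-in serve command (the run ID below is a placeholder, substitute one from your own tracking server):

```shell
# Serve a logged MLflow model as a local REST API on port 5001
mlflow models serve -m runs:/<RUN_ID>/model -p 5001

# A colleague can then try it via the /invocations endpoint, e.g.:
# curl http://localhost:5001/invocations -H 'Content-Type: application/json' -d '{...}'
```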
I've set up my own Docker container with Sanic (similar to Flask), and indeed it's not too difficult. However, you'll still hit issues like CORS ( https://stackoverflow.com/questions/10636611/how-does-access-control-allow-origin-header-work ), where the browser throws a network security error if the Access-Control-Allow-Origin header is not properly configured.
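For completeness, the fix on my side was just attaching the CORS headers to every response. A minimal, framework-agnostic sketch in plain Python (in Sanic you'd attach the same dict from a response middleware; the function name and defaults here are mine, not from any library):

```python
def add_cors_headers(headers, allowed_origin="*"):
    """Attach the CORS headers a browser checks before accepting a
    cross-origin response; without them you get the network security
    error described in the linked Stack Overflow question."""
    headers = dict(headers)  # don't mutate the caller's dict
    headers["Access-Control-Allow-Origin"] = allowed_origin
    headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    headers["Access-Control-Allow-Headers"] = "Content-Type"
    return headers

# Example: headers the API would send back to a browser client
response_headers = add_cors_headers({"Content-Type": "application/json"})
```

In practice you'd restrict `allowed_origin` to your internal domain rather than using the wildcard.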
And even turning one model into an API won't do it automatically for every model. So you'd still have to write generic serving code yourself, which also costs time.
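To illustrate what I mean by generic serving code: something that wraps any object exposing a predict method into a JSON-in/JSON-out handler. A rough sketch (the names are mine, not from any library):

```python
import json

def make_predict_handler(model):
    """Wrap any model exposing .predict(rows) into a JSON handler,
    so the same serving code works for every model."""
    def handler(request_body: str) -> str:
        rows = json.loads(request_body)["data"]
        preds = model.predict(rows)
        return json.dumps({"predictions": list(preds)})
    return handler

# Example with a stand-in model:
class EchoModel:
    def predict(self, rows):
        return [sum(r) for r in rows]

handler = make_predict_handler(EchoModel())
```

This is the part MLflow's serve command gives you for free for all of its supported flavors.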
> mlflow can serve basic ML models using scikit-learn. In contrast, Trains was designed with more general-purpose ML/DL workflows in mind.
The GitHub README does indeed only mention scikit-learn, but https://mlflow.org/docs/latest/models.html#deploy-mlflow-models seems to indicate that all supported model flavors can be deployed.
MLflow's built-in model flavors ( https://mlflow.org/docs/latest/models.html#built-in-model-flavors ) cover:

- Python Function (python_function)
- R Function (crate)
- H2O (h2o)
- Keras (keras)
- MLeap (mleap)
- PyTorch (pytorch)
- Scikit-learn (sklearn)
- Spark MLlib (spark)
- TensorFlow (tensorflow)
- ONNX (onnx)
- MXNet Gluon (gluon)
- XGBoost (xgboost)
- LightGBM (lightgbm)
Those cover all the frameworks I know of and more, so what would be more general than supporting these?
Since it's possible to deploy a model stored with TRAINS, this isn't a blocking limitation, but it would definitely be an advantage if TRAINS supported serving out of the box (we're at an early stage of our Data Science infrastructure).
Please don't get me wrong: TRAINS seems amazing to me so far! But I still have to convince my colleagues.