There are a few things on my mind... Sorry if this is long. 🙃 I'm just running Snakemake in a docker container on my desktop to isolate dependencies. Inside the container the workflows are run in the normal way with the snakemake
command. My snakemake jobs are a variety of python and shell scripts. Snakemake works by using files as intermediaries between jobs. I have a workflow with 19000 job nodes in it.
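For context, the file-intermediary pattern looks roughly like this (rule names, file paths, and script names here are made up for illustration):

```
rule train_model:
    input: "data/features.parquet"
    output: "models/model.pkl"
    script: "scripts/train.py"

rule analyze_model:
    input: "models/model.pkl"          # consumes the training job's output file
    output: "reports/analysis.json"
    script: "scripts/analyze.py"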
-
I have some trains Task code right now, just in my model training jobs, and that works great. I'm also looking at how to amend those experiments with further analysis artifacts, since a few branching jobs in my workflow analyze the model outside of the training job.
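One way I'm imagining this could work is a minimal sketch like the following, assuming the training job writes its trains task id to a small file that the downstream analysis job reads, and assuming the trains SDK lets you fetch an existing task with `Task.get_task` and attach artifacts to it with `upload_artifact` (the file-handoff convention and the function/argument names below are placeholders, not anything trains prescribes):

```python
def attach_analysis(task_id_file, artifact_name, artifact_path):
    """Attach a downstream analysis artifact to the original training
    experiment instead of creating a new one. Sketch only."""
    try:
        from trains import Task  # optional dependency: only under trains
    except Exception:
        return False  # trains not installed; run as a plain snakemake job
    with open(task_id_file) as fh:
        task_id = fh.read().strip()  # id the training job wrote out
    task = Task.get_task(task_id=task_id)  # fetch the training experiment
    # Assumption: trains allows uploading to a previously created task;
    # if the task must be in a running state, this would need adjusting.
    task.upload_artifact(artifact_name, artifact_object=artifact_path)
    return True
```

The nice property of this shape is that the analysis job degrades gracefully: outside of trains it still runs as an ordinary snakemake job and just skips the artifact upload.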
-
Snakemake has a kubernetes integration, so I'm not sure whether that would conflict with trains if I tried to clone a trains experiment and run it from the trains UI with some new arguments.
-
It looks like trains does automl, but my Snakemake workflow basically assumes any automl originates from within a single snake job. I wonder how that would work with trains. I could have a snake job that exposes hyperparameters as input arguments and doesn't do automl internally (which could be desirable), but then how does trains properly invoke a snakemake job?
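The "hyperparameters as input arguments" idea could look something like this sketch, relying on the fact that trains automatically logs argparse arguments when `Task.init` is called, so a cloned experiment can override them from the UI (project/task names and argument names are placeholders I made up):

```python
import argparse

def parse_args(argv=None):
    # Hyperparameters exposed as plain CLI flags; trains auto-connects
    # argparse, so these show up as editable parameters on a cloned run.
    parser = argparse.ArgumentParser(description="training job (sketch)")
    parser.add_argument("--learning-rate", type=float, default=0.01)
    parser.add_argument("--epochs", type=int, default=10)
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(argv)
    try:
        from trains import Task  # optional: only when running under trains
        Task.init(project_name="demo", task_name="train")  # placeholder names
    except Exception:
        pass  # trains not installed/configured: runs as a plain snake job
    # ... the training loop would use args.learning_rate / args.epochs ...
    return args
```

That still leaves my open question, though: the script is happy being run either way, but something would need to translate "run this cloned experiment" back into invoking the right snakemake target.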
-
I'm sure I'll come up with more... 😉