Okay, I was hoping it would be something easy like that! Now I need to figure out how to export that task id
You're not missing anything, I'm just super novice at trains 🙂 thank you
I think we'll have to play with it for a while to really solidify what it is we need--we're just starting to implement trains experiment tracking. We can report what happens 🙂
This is an entirely automated process and I'm just getting familiar with the APIs here: one process needs to write the task id to a file and the next needs to read it. I am looking through the docs on how to obtain the task id
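For reference, a minimal sketch of the pattern I mean, assuming the standard trains SDK calls (the project/task names and the file path are just placeholders):

```
# producer process: create the task and write its id to a file
from trains import Task

task = Task.init(project_name="my_project", task_name="train_model")  # placeholder names
with open("task_id.txt", "w") as f:
    f.write(task.id)  # task.id is the string identifier of this task

# ... training code runs here ...
```

```
# consumer process: read the id back and fetch the same task
from trains import Task

with open("task_id.txt") as f:
    task_id = f.read().strip()

task = Task.get_task(task_id=task_id)  # look the task up by its id
print(task.name)  # confirm we got the right task back
```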
Yeah that's the model we have to run on, I was just kinda being hopeful
Thank you for your help so far, the responsiveness of this community has been a great feature of trains
There are a few things in my mind... Sorry if this is long. 🙃 I'm just running Snakemake in a docker container on my desktop to isolate dependencies. Inside the container the jobs are run in the normal way with the snakemake command. My Snakemake jobs are a variety of Python and shell scripts. Snakemake works by using files as intermediaries between jobs. I have a workflow with 19000 job nodes in it.
- I have some trains task code right now just in my model training jo...
Hmm, I think so. It doesn't sound exactly compatible with Snakemake, more kind of a replacement, though the pipelining trains does is quite different. Snakemake really is all about the DAG. I just tell it what output I want to get and it figures out what jobs to run to get that, does it massively parallel, and, very importantly, it NEVER repeats work it has already done and has artifacts for already (well, unless you force it to, but that's a conscious choice you have to make). This is super impo...
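To give a feel for it, here's a toy Snakefile sketch (rule names, scripts, and paths are made up): I ask for the final output and Snakemake walks the DAG backwards, skipping any rule whose output files already exist and are up to date.

```
# toy Snakefile: files connect the rules, so the DAG is implied by inputs/outputs
rule all:
    input: "models/model.pkl"          # asking for this output drives the whole DAG

rule preprocess:
    input: "data/raw.csv"
    output: "data/features.csv"
    shell: "python preprocess.py {input} {output}"

rule train:
    input: "data/features.csv"
    output: "models/model.pkl"
    shell: "python train.py {input} {output}"

# if data/features.csv already exists and is newer than data/raw.csv,
# Snakemake only re-runs the train rule (or nothing at all)
```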
FYI the docs don't have everything, e.g. task.id isn't documented as a property on the https://allegro.ai/docs/task.html page--you have to look in the source code and follow the inheritance to obtain that information