I couldn't tell. Assume I have a huge GitHub repository with 100 ML projects, and I want to see all of them in the trains server. Should I call Task.init() and run each one just to make them appear in the server, or is there another way to see all of them without running them all?
Hi MysteriousBee56 ,
what do you mean by:
Can we upload our project repository to trains server?
MysteriousBee56 would providing Trains with an "import mode" (say, via an environment variable or a command-line flag) address your use case? In that mode it would create a draft server entry, populate all the execution/environment info, and exit before actually starting the ML work.
AgitatedDove14 sorry for my late response. I will try it; it might work. Thanks!
As you said, in order to see my code in the server, I need to run it. So what I'm asking is: is there any way to see my code in the server without running it?
MysteriousBee56 I would do Task.create()
you can get the full Task internal representation with task.data
Then call task._edit(script={'repo': ...}) to edit/update all the Task entries.
You can check the full details of the task object here: https://github.com/allegroai/trains/blob/master/trains/backend_api/services/v2_8/tasks.py#L954
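Roughly, a minimal sketch of the flow (note that _edit() is an internal call, and the repo URL / entry point below are placeholders; the Script field names follow the tasks.py linked above, so verify them against your trains version):
```python
from trains import Task

# Create an empty draft task; it appears in the server without being executed
task = Task.create(project_name='my_project', task_name='my_experiment')

# Populate the execution info. Field names follow the Script object in
# trains/backend_api/services/v2_8/tasks.py (repository / branch / entry_point)
task._edit(script={
    'repository': 'https://github.com/my_org/my_repo.git',  # placeholder URL
    'branch': 'master',
    'entry_point': 'train.py',   # placeholder entry point
    'working_dir': '.',
})

# Inspect the full internal representation of the task
print(task.data)
```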
BTW: when you have a sample script working, consider PR-ing it, I'm sure it will be useful for others 🙂 (also a great way to get us involved with debugging it 😉 )
I mean, in the trains GitHub repo there are examples, and when I deploy the server these examples already exist in the server with draft status. I want to add my own examples in the same way.
MysteriousBee56 once you execute your code, it will appear in the server (with all fields pre-populated based on your setup/git etc.). Once it is there you can "clone" it and move it around.
Is this what you mean?
A bit of background: the idea behind Trains is that the environment definition (i.e., git repo, packages, code entry point, arguments, etc.) is collected when executing the code. This avoids the tedious task of generating and maintaining YAML/JSON configuration files.
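For example, in the standard flow a single call at the top of the training script captures all of it at runtime (project/task names here are just placeholders):
```python
from trains import Task

# One call at the top of the script; trains collects the git repo, commit,
# uncommitted diff, installed packages and argparse arguments automatically
task = Task.init(project_name='examples', task_name='my_training_run')

# ... the rest of the training code runs unchanged ...
```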
What exactly is the use case for pre-populating?
MysteriousBee56 I see...
So yes, you can. With the APIClient you have full RESTful access to the backend.
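For example, something along these lines should create a draft task entry directly against the backend (the exact request fields are my assumption based on the tasks service definition, so verify against your server's API version):
```python
from trains.backend_api.session.client import APIClient

client = APIClient()

# Create a draft task entry via the tasks.create REST endpoint.
# Field names are assumed to mirror the tasks service; verify on your server.
new_task = client.tasks.create(
    name='imported_experiment',   # placeholder task name
    project='<project_id>',       # ID of an existing project
    type='training',
    script={
        'repository': 'https://github.com/my_org/my_repo.git',  # placeholder
        'entry_point': 'train.py',                               # placeholder
    },
)
print(new_task.id)
```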
I think there was a similar discussion https://allegroai-trains.slack.com/archives/CTK20V944/p1593524144116300
HandsomeCrow5 how did you end up solving it? I think you had a similar use case?!
AgitatedDove14 My use case was a bit different. I only populate the repository URL, entry point, and commit SHA in the Script object.
We wanted something informative enough in our task on the one hand, but not loaded with redundant data on the other, so a commit SHA made perfect sense.
I ended up using from trains.backend_api.services import tasks
and then initializing with tasks.Script(…)
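Putting that together, a minimal sketch (version_num carries the commit SHA per the Script definition in tasks.py; the URL, entry point, and SHA below are placeholders):
```python
from trains.backend_api.services import tasks

# A Script object holding only the informative essentials:
# repository URL, entry point, and the commit SHA (version_num)
script = tasks.Script(
    repository='https://github.com/my_org/my_repo.git',
    entry_point='train.py',
    version_num='0123456789abcdef0123456789abcdef01234567',
)

# The object can then be attached to a draft task, e.g. via the
# task._edit(script=script.to_dict()) call suggested above
```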