an example achieving what i propose would be very helpful
if it's a couple of weeks away.. i can wait
this looks good... also do you have any info/eta on the next controller/service release you mentioned?
in the above example, is the task id from a newly generated task like Task.init()?
it still tries to create a new env
i know it supports conda.. but i have another system-wide env which is not base.. say ml
so i'm wondering if i can configure trains-agent to use that... not standard practice, but just asking if it is possible
ok will report back
you already replied to it.. it was execute_remotely called with the exit_true argument
ok so the controller task is a simple placeholder which runs indefinitely, fetches a task template and queues it..
any example in the repo which i can go through?
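couldn't find one yet, so here is roughly what i have in mind as a sketch (plain python; fetch_template and clone_and_enqueue are my own hypothetical stand-ins for real trains calls like Task.get_task, Task.clone and Task.enqueue — not the actual api):

```python
import time

def fetch_template():
    # hypothetical stand-in for fetching a task template from the server
    return {"name": "training-template"}

def clone_and_enqueue(template, queue="default"):
    # hypothetical stand-in for cloning the template and pushing the
    # clone onto an execution queue
    return dict(template, queue=queue, status="queued")

def controller_loop(iterations=3, interval=0.0):
    # placeholder controller: fetch the template, queue a clone, sleep, repeat
    queued = []
    for _ in range(iterations):
        queued.append(clone_and_enqueue(fetch_template()))
        time.sleep(interval)
    return queued
```

i.e. the controller itself does no work; it only keeps feeding clones of the template into a queue that agents drain.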
AgitatedDove14 no it doesn't work
trains-agent version as mentioned is 0.16.1 and server is 0.16.1 as well
yes, delete experiments which are old or for some other reason not worth keeping around
i think for now it should do the trick... was just thinking about the roadmap part
the test package is not installed but it is in the current working directory
there are multiple scripts under the test/scripts folder.. the example is running one script from that folder
the use case i have is to allow people on my team to run their workloads on a set of servers without stepping on each other..
that seems like a bit of an extra thing a user needs to bother about.. a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself.. maybe i am asking too much 😛
also one thing i noticed.. when i report a confusion matrix and some other plots, e.g. seaborn with matplotlib.. on the server side i can see the plots are there but they are not visible at all
AgitatedDove14 sorry, i'm having issues on my side connecting to the server to test it.. but the directory structure when i execute the command is like this:
Directory layout: ~/test/scripts/script.py
~$ python -m test.scripts.script --args
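to double-check that the layout itself works with `python -m`, a minimal reproduction (the one-line script.py is hypothetical, just for illustration):

```shell
# recreate the layout in a scratch dir and run the script as a module
d=$(mktemp -d) && cd "$d"
mkdir -p test/scripts
# __init__.py files make these regular packages; without them python's
# own stdlib "test" package takes precedence over a local namespace package
touch test/__init__.py test/scripts/__init__.py
echo 'print("ran as module")' > test/scripts/script.py
python3 -m test.scripts.script   # prints: ran as module
```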
looking at the above link, it seems i might be able to create it with some boilerplate as it has the concept of parent and child... but i'm not sure how status checks and dependencies get sorted out
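roughly what i imagine the dependency handling to be (a plain-python sketch; ready_tasks and the dag dict are my own hypothetical names, not a trains api):

```python
def ready_tasks(dag, done):
    """dag maps task -> list of parent tasks; a task is ready to queue
    once every parent is done and it hasn't completed yet."""
    return [t for t, parents in dag.items()
            if t not in done and all(p in done for p in parents)]

# toy DAG: b and c depend on a, d depends on both b and c
dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
```

so the status check boils down to repeatedly asking which tasks have all their parents completed, queueing those, and waiting for them to finish.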
as when it runs the first time after cleaning .trains/venv-build, it outputs this message for this package - pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
wondering why it specifies this package that way, as for most other packages it just prints the version number
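from what i can tell this is pip's "direct reference" syntax (PEP 508): a package installed from a local path or url gets recorded as `name @ url` instead of the usual `name==version` pin. a small sketch for telling the two forms apart (parse_requirement is my own hypothetical helper, not a pip api):

```python
def parse_requirement(line):
    # distinguish pip's "name @ url" direct references (PEP 508)
    # from the usual "name==version" pins
    if " @ " in line:
        name, url = line.split(" @ ", 1)
        return {"name": name.strip(), "source": url.strip()}
    name, _, version = line.partition("==")
    return {"name": name.strip(), "version": version.strip()}
```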
yeah that would solve it i think.. so what is the normal cadence for releases.. every month or quarter?
once i removed the seaborn plot, the CM plots become visible again
thanks for the update... it seems currently i cannot pass the http/s proxy parameters, as when the agent creates a new env and tries to download some package it is blocked by our corp firewall... all outgoing connections need to pass through a proxy.. so is it possible to specify that, or environment variables, to the agent?
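for context, the standard proxy environment variables that pip and most http clients honor would look like this (the proxy host is a made-up placeholder for our real corp proxy; starting the agent is only shown as a comment):

```shell
# standard proxy variables honored by pip and most HTTP clients
# (proxy.corp.example:3128 is a placeholder, not a real host)
export HTTP_PROXY=http://proxy.corp.example:3128
export HTTPS_PROXY=http://proxy.corp.example:3128
export NO_PROXY=localhost,127.0.0.1
# then start the agent in this same environment, e.g.:
#   trains-agent daemon --queue default
```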
while you guys work on it.. just a small feature addition: it would be cool to have a DAG figure which shows how models are linked under this task, with the ability to click a circle in that DAG to navigate to the given task... i think it would be a very useful UX 🙂
as i am seeing my plots now, but they are landing in the metrics section, not the plots section.
i understand.. it's just that if i have a docker image with the correct env.. i would prefer that trains-agent use it directly