 
There are multiple scripts under the `test/scripts` folder.. the example is running one script from that folder
Or is there any plan to fix it in an upcoming release?
I ran it this week.
Thanks... I was just wondering if I overlooked any config option for that, as `cpu_set` might be a possibility for the CPU.
I know it's not magic... it's all the Linux subsystem underneath.. just want to configure it the way needed 🙂 For now I think I will stick with the current setup of CPU-only mode and coordinate within the team. Later on, when the need comes, we'll see whether we go for k8s or not.
This looks good... also, do you have any info/ETA on the next controller/service release you mentioned?
Are there any logs I can check to debug on my side?
OK, will give it a try and let you know.
Looking forward to the new job workflow part in 0.16 then 🙂
The trains-agent version, as mentioned, is 0.16.1, and the server is 0.16.1 as well.
Is it because of something wrong with this package build from its owner, or something else?
When it ran the first time after cleaning `.trains/venv-build`, it output this message for this package: `pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work`. Wondering why it specifies this package like that, since for most other packages it just prints the version number.
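For context, the `name @ file://...` form is a PEP 508 direct reference: pip emits it (instead of `name==version`) when a package was installed from a local path or URL, which is exactly how conda-forge build artifacts are installed. A minimal sketch distinguishing the two forms (illustrative helper only, not agent code):

```python
# Sketch: the two requirement forms pip's freeze output can emit.
# "name==version" pins a released version; "name @ <url>" is a PEP 508
# direct reference recorded when the install source was a path/URL.

def parse_requirement(line: str) -> dict:
    """Classify a frozen requirement line (illustrative, not the agent's parser)."""
    if " @ " in line:
        name, url = line.split(" @ ", 1)
        return {"name": name.strip(), "kind": "direct-reference", "source": url.strip()}
    if "==" in line:
        name, version = line.split("==", 1)
        return {"name": name.strip(), "kind": "pinned", "version": version.strip()}
    return {"name": line.strip(), "kind": "unpinned"}

print(parse_requirement("numpy==1.18.5"))
print(parse_requirement(
    "pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work"
))
```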
I understand.. it's just that if I have a Docker image with the correct env, I would prefer that trains-agent can use it directly.
Not so sure.. ideally I was looking for some function calls which enable me to create a sort of DAG that gets scheduled at a given interval, where the DAG has status checks on upstream tasks... so if an upstream task fails, downstream tasks are not run.
Whereas I am using simple matplotlib now.
An example achieving what I propose would be greatly helpful.
OK... is there any way to enforce using a given system-wide env, so the agent doesn't need to spend time on env creation?
It's not just fairness; the scheduled workloads will be starved of resources if, say, someone runs a training job which by default takes all the available CPUs.
I think for now it should do the trick... I was just thinking about the roadmap part.
Looking at the above link, it seems I might be able to create it with some boilerplate, as it has the concept of parent and child... but I'm not sure how status checks and dependencies get sorted out.
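One possibly relevant knob (an assumption on my side, so please verify the exact name against the reference config shipped with your agent version): the agent config has a `package_manager` section, and letting the created venv see the system site-packages avoids reinstalling what the base env already has. Roughly, in `~/trains.conf`:

```
# ~/trains.conf (sketch -- option name is an assumption, check the
# trains-agent reference configuration for your installed version)
agent {
    package_manager {
        # let the job's venv reuse packages from the system python env
        system_site_packages: true
    }
}
```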
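The status-check part could be done with a small amount of boilerplate on top of the parent/child concept. A minimal pure-Python sketch of the idea (the names and the `run()` callables here are illustrative stand-ins, not the trains SDK; real wiring would clone/enqueue Tasks and poll their status instead):

```python
# Sketch: each node lists its upstream tasks; a downstream node runs only
# if every upstream finished successfully, otherwise it is skipped.
from collections import OrderedDict

class Node:
    def __init__(self, name, run, upstream=()):
        self.name, self.run, self.upstream = name, run, list(upstream)
        self.status = "pending"  # pending | completed | failed | skipped

def execute_dag(nodes):
    """Run nodes in insertion order, skipping any whose upstream did not complete."""
    for node in nodes.values():
        if any(nodes[u].status != "completed" for u in node.upstream):
            node.status = "skipped"
            continue
        try:
            node.run()
            node.status = "completed"
        except Exception:
            node.status = "failed"
    return {n.name: n.status for n in nodes.values()}

nodes = OrderedDict()
nodes["extract"] = Node("extract", lambda: None)
nodes["train"] = Node("train", lambda: 1 / 0, upstream=["extract"])  # raises -> failed
nodes["report"] = Node("report", lambda: None, upstream=["train"])   # upstream failed -> skipped
print(execute_dag(nodes))  # {'extract': 'completed', 'train': 'failed', 'report': 'skipped'}
```

Wrapping this loop in a scheduler that re-runs it at a given interval would give the "DAG scheduled at an interval with upstream status checks" behaviour described above.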
In the above example, is the task ID from a newly generated task like `Task.init()`?
You replied to it already.. it was `execute_remotely` called with the `exit_true` argument.
TimelyPenguin76, is there any way to do this using the UI directly, or as a schedule? Otherwise I think I will run the cleanup_service as given in the docs...
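The core of what the documented cleanup_service does is an age-based selection of stale experiments before deleting them. A pure-Python sketch of just that selection logic (the `tasks` list here is a stand-in for what the real service fetches via the trains API; this is not the service's actual code):

```python
from datetime import datetime, timedelta

def select_stale(tasks, max_age_days=30, now=None):
    """Return ids of tasks whose last update is older than max_age_days.

    `tasks` is a list of (task_id, last_update: datetime) pairs -- a
    stand-in for experiment records fetched from the server.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return [tid for tid, last_update in tasks if last_update < cutoff]

now = datetime(2020, 9, 1)
tasks = [("old-exp", datetime(2020, 6, 1)), ("fresh-exp", datetime(2020, 8, 30))]
print(select_stale(tasks, max_age_days=30, now=now))  # ['old-exp']
```

Running a script like this from a services queue on a schedule effectively gives the UI-less cleanup discussed here.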
So, as you say, I don't think the issue I am seeing is due to this error.
That seems like a bit of an extra thing a user needs to bother about.. a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself.. maybe I am asking too much 😛
Yes, delete experiments which are old or for some other reason are not required to be kept around.
Thanks for letting me know.. but it turns out that after I recreated my whole system environment from scratch, trains-agent is working as expected.
If it's a couple of weeks away, I can wait.
My use case is more like the 1st one, where the training runs at a certain given schedule.
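For that fixed-schedule case, a small driver script is enough: compute the wait until the next run, sleep, then launch the job. A minimal sketch of the timing part (the trains side, i.e. cloning a template task and enqueuing the clone, is only hinted at in the comment since the exact calls should be checked against the SDK version in use):

```python
import time
from datetime import datetime, timedelta

def seconds_until_next_run(now, hour, minute=0):
    """Seconds from `now` until the next daily occurrence of hour:minute."""
    nxt = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if nxt <= now:
        nxt += timedelta(days=1)
    return (nxt - now).total_seconds()

# e.g. at 01:30, the next 02:00 run is 30 minutes away
print(seconds_until_next_run(datetime(2020, 9, 1, 1, 30), hour=2))  # 1800.0

def run_daily(hour, launch_job):
    """Loop forever: sleep until hour:00 each day, then launch the job.

    `launch_job` would hold the trains-specific part, e.g. clone a
    template Task and enqueue the clone (API names to be verified).
    """
    while True:
        time.sleep(seconds_until_next_run(datetime.now(), hour))
        launch_job()
```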