whereas i am using simple matplotlib now
my use case is more like the 1st one, where the training runs at a certain given schedule
ah ok.. any way to avoid it or change it on my side?
or is there any plan to fix it in an upcoming release?
ok will give it a try and let you know
as i am seeing my plots now, but they are landing in the metrics section, not the plots section.
so as you say.. i don't think the issue i am seeing is due to this error
yeah i still see it.. but that seems to be due to the dns address being blocked by our datacenter
you replied it already.. it was execute_remotely called with exit_true argument
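for reference, this is roughly the call i mean — a minimal sketch, assuming the clearml import name (the older trains import looks the same) and that exit_process is the flag in question; the project and queue names are just placeholders:
```python
from clearml import Task

# project/task names here are just placeholders
task = Task.init(project_name="examples", task_name="remote run")
# hand execution over to an agent queue; exit_process=True ends the local process
task.execute_remotely(queue_name="default", exit_process=True)
```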
AgitatedDove14 no it doesn't work
i think for now it should do the trick... was just thinking about the roadmap part
TimelyPenguin76 is there any way to do this using the UI directly or as a schedule... otherwise i think i will run the cleanup_service as given in the docs...
this looks good... also do you have any info/eta on the next controller/service release you mentioned?
looking at the above link, it seems i might be able to create it with some boilerplate, as it has the concept of parent and child... but not sure how status checks and dependencies get sorted out
AgitatedDove14 sorry, having issues on my side connecting to the server to test it.. but the directory structure when i execute the command is like this:
Directory layout: ~/test/scripts/script.py
~$ python -m test.scripts.script --args
the test package is not installed, but it's in the current working directory
i know it's not magic... it's all the linux subsystem underneath.. just need to configure it the way we need 🙂 for now i think i will stick with the current setup of cpu-only mode and coordinate within the team. later on, when the need comes, we will see if we go for k8s or not
thanks AgitatedDove14 for the links.. seems like i might try the first one if it works out.. before going the route of creating full framework support, as in our case the team uses multiple different frameworks
ok so the controller task is a simple placeholder which runs indefinitely, fetches a task template and queues it..
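so something like this, if i understand it correctly — a rough sketch, with the template task id, the queue name and the daily interval all being placeholders of my own:
```python
import time
from clearml import Task

TEMPLATE_TASK_ID = "<template-task-id>"  # placeholder for the task template id

while True:
    # fetch the template, clone it, and push the clone into the execution queue
    template = Task.get_task(task_id=TEMPLATE_TASK_ID)
    cloned = Task.clone(source_task=template, name="scheduled training run")
    Task.enqueue(cloned, queue_name="default")
    # wait until the next scheduled run (daily here, just as an example)
    time.sleep(24 * 60 * 60)
```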
an example achieving what i propose would be really helpful
not so sure.. ideally i was looking for some function calls which would let me create a sort of DAG that gets scheduled at a given interval, with status checks on upstream tasks... so if an upstream task fails, the downstream tasks are not run
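roughly i was imagining something along these lines — just a sketch of the idea; run_after is a hypothetical helper of mine, not an existing call:
```python
import time
from clearml import Task

def run_after(upstream_task_id, downstream_template_id, queue="default"):
    """Enqueue the downstream task only if the upstream task completed successfully."""
    upstream = Task.get_task(task_id=upstream_task_id)
    # poll the upstream task until it reaches a terminal state
    while upstream.get_status() not in ("completed", "failed", "stopped"):
        time.sleep(30)
        upstream.reload()
    if upstream.get_status() != "completed":
        return None  # upstream failed or was stopped, so skip the downstream task
    downstream = Task.clone(source_task=downstream_template_id, name="downstream run")
    Task.enqueue(downstream, queue_name=queue)
    return downstream
```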
ok... is there any way to enforce using a given system-wide env.. so the agent doesn't need to spend time on env creation?
thanks Martin.. at least something to go on.. as if i have any issue, i know which component's logs to look at
that seems like a bit of an extra thing a user needs to bother about.. a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself.. maybe i am asking too much 😛
in the above example, is the task id from a newly generated task like Task.init()?
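i mean something like this, just to show where the id would come from (names are placeholders):
```python
from clearml import Task

# project/task names are placeholders; the point is just where the id comes from
task = Task.init(project_name="examples", task_name="new task")
print(task.id)  # the id of the newly generated task
```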
seems like a CORS issue in the console logs
TimelyPenguin76 yeah, when i run matplotlib with show, the plots do land under the Plots section... so it's mainly the imshow part.. i am wondering why the distinction, and what the usual way is to emit plots to debug samples
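to be concrete, this is the kind of thing i am running — a minimal sketch of the two cases i see (project/task names are placeholders):
```python
import numpy as np
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib test")  # placeholder names

# a plain line plot with show() -- this one lands under the Plots section for me
plt.figure()
plt.plot(np.arange(10), np.arange(10) ** 2)
plt.show()

# an imshow figure -- this one ends up under the debug samples instead
plt.figure()
plt.imshow(np.random.rand(32, 32))
plt.show()
```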
as when it runs for the first time after cleaning .trains/venv-build, it outputs this message for this package - pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
wondering why it specifies this package like that, as for most other packages it just prints the version number
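if it matters, i guess i could pin it manually in the script — a sketch assuming Task.add_requirements (called before Task.init) is the right hook for this, and the version string is just an example:
```python
from clearml import Task

# force a plain version pin instead of the conda file:// reference
# (version string here is just an example)
Task.add_requirements("pycparser", "2.20")
task = Task.init(project_name="examples", task_name="training run")
```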