So, as you say.. I don't think the issue I am seeing is due to this error
for now I am using plain matplotlib
I don't need this right away.. I just wanted to know the possibility of dividing the current machine into multiple workers... I guess if it's not readily available then maybe you guys can discuss to see if it makes sense to have it on the roadmap..
Looking at the above link, it seems I might be able to create it with some boilerplate since it has the concept of parent and child... but I'm not sure how status checks and dependencies get sorted out
It may be that I am new to Trains, but in my normal notebook flow they are both images, and as a Trains user I expected this to be under the Plots section since I think it is an image.. in a nutshell, all matplotlib plots display data as an image 🙂
TimelyPenguin76 yeah, when I run matplotlib with show, the plots do land under the Plots section... so it's mainly the imshow part.. I am wondering why the distinction, and what is the usual way to emit plots to debug samples?
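For context, this is roughly what I am running (project/task names are just placeholders); the first figure is the one I see under Plots, the second is the imshow case that ends up under Debug Samples:

```python
from trains import Task
import numpy as np
import matplotlib.pyplot as plt

task = Task.init(project_name="examples", task_name="matplotlib capture test")  # placeholder names

# a regular line plot -- this one lands under the Plots section
plt.figure()
plt.plot(np.arange(10), np.arange(10) ** 2)
plt.title("line plot")
plt.show()

# an image rendered with imshow -- this one ends up under Debug Samples
plt.figure()
plt.imshow(np.random.rand(32, 32), cmap="gray")
plt.title("imshow image")
plt.show()
```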
thanks... I was just wondering if I overlooked any config option for that... as cpu_set might be a possibility for the CPU side
In the above example, is the task id from a newly generated task, i.e. from Task.init()?
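Just to make sure I am reading it right, I mean the difference between these two cases (names and the id are placeholders):

```python
from trains import Task

# id of a freshly created task
task = Task.init(project_name="examples", task_name="my task")  # placeholder names
print(task.id)

# vs. referencing an already existing task by its id
existing = Task.get_task(task_id="<existing task id>")  # placeholder id
print(existing.id)
```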
Not so sure.. ideally I was looking for some function calls which enable me to create a sort of DAG that gets scheduled at a given interval, where the DAG has status checks on upstream tasks... so if an upstream task fails, downstream tasks are not run
An example achieving what I propose would be greatly helpful
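Something along these lines is what I have in mind (plain Python, just to illustrate the behaviour I am after; none of these helpers are existing Trains calls, and in practice each step would be a Trains Task that gets cloned/enqueued):

```python
import time

def run_dag(steps, interval_sec=3600, iterations=1):
    """steps: list of (name, fn, upstream_names) already in dependency order."""
    for _ in range(iterations):
        status = {}  # name -> "completed" | "failed" | "skipped"
        for name, fn, upstream in steps:
            if any(status.get(u) != "completed" for u in upstream):
                status[name] = "skipped"      # an upstream failed or was skipped -> don't run this one
                continue
            try:
                fn()
                status[name] = "completed"
            except Exception:
                status[name] = "failed"
        print(status)
        time.sleep(interval_sec)              # re-schedule the whole DAG at a fixed interval

# toy usage: B depends on A, C depends on B
run_dag(
    steps=[
        ("A", lambda: None, []),
        ("B", lambda: 1 / 0, ["A"]),          # B fails...
        ("C", lambda: None, ["B"]),           # ...so C is skipped
    ],
    interval_sec=0,
    iterations=1,
)
```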
Any logs I can check, or ways to debug on my side?
Yeah, I still see it.. but that seems to be due to the DNS address being blocked by our datacenter
If it's a couple of weeks away.. I can wait
AgitatedDove14 it seems uploading artifacts and uploading models are two different things when it comes to treating the fileserver... when I upload an artifact it works as expected, but when uploading a model using the OutputModel class, it wants an output_uri path.. I'm wondering how I can ask it to store the model under the fileserver like artifacts, e.g. LightGBM.1104445eca4749f89962669200481397/artifacts/Model%20object/model.pkl
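For what it's worth, this is roughly what I am trying; the fileserver address is a placeholder, and I am only assuming that setting output_uri on Task.init is the right knob for redirecting model uploads:

```python
from trains import Task, OutputModel

task = Task.init(
    project_name="LightGBM",
    task_name="model upload test",
    output_uri="http://<fileserver-host>:8081",  # placeholder: our trains fileserver
)

# ... train the model and save it locally as model.pkl ...

output_model = OutputModel(task=task)
output_model.update_weights(weights_filename="model.pkl")  # hoping this uploads to the output_uri destination
```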
The test package is not installed, but it's in the current working directory
OK, will give it a try and let you know
I cannot check the working directory today due to VPN issues accessing the server, but the script path was -m test.scripts, so it was missing the script module from it
Seems like port forwarding had an issue.. fixed that.. now running the test again to see if things work out as expected
AgitatedDove14 sorry, having issues on my side connecting to the server to test it.. but the directory structure when I execute the command is like this:
Directory layout: ~/test/scripts/script.py
~$ python -m test.scripts.script --args
The trains-agent version, as mentioned, is 0.16.1, and the server is 0.16.1 as well
This looks good... also, do you have any info/ETA on the next controller/services release you were mentioning?