so i was expecting that the uploaded model would be stored under, for example, LightGBM.1104445eca4749f89962669200481397/models/Model%20object/model.pkl
thanks AgitatedDove14 for the links.. seems like i might try the first one and see if it works out before going down the route of creating full framework support, as in our case the team uses multiple different frameworks
i don't need this right away.. i just wanted to know the possibility of dividing the current machine into multiple workers... i guess if it's not readily available then maybe you guys can discuss whether it makes sense to have it on the roadmap..
i know it's not magic... it's all the linux subsystem underneath.. just a matter of configuring it the way we need 🙂 for now i think i will stick with the current setup of cpu-only mode and coordinate within the team. later on, when the need comes, we'll see if we go for k8s or not
thanks for the update... it seems i currently cannot pass the http/s proxy parameters: when the agent creates a new env and tries to download a package, it gets blocked by our corp firewall... all outgoing connections need to pass through a proxy.. so is it possible to specify that, or environment variables, to the agent?
as when it runs for the first time after cleaning .trains/venv-build, it outputs this message for this package - pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work - wondering why it specifies this package like that, since for most other packages it just prints the version number
i ran it this week
in the above example, the task id is from a newly generated task, i.e. from Task.init()?
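(just to make sure i'm on the same page, this is roughly what i mean by a newly generated task - a sketch only, the project/task names are placeholders:)

```python
from trains import Task

# a fresh task created by this run; its id is what i'd pass around
task = Task.init(project_name='examples', task_name='new task')
print(task.id)
```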
AgitatedDove14 Morning... so what should the value of "upload_uri" be set to, the fileserver_url, e.g. http://localhost:8081 ?
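for reference, this is roughly how i'd wire it up if the answer is the fileserver URL - just a sketch, the project/task names and the localhost URL are placeholders:

```python
from trains import Task

# pointing output_uri at the fileserver so uploaded models land there
task = Task.init(
    project_name='examples',
    task_name='upload to fileserver',
    output_uri='http://localhost:8081',
)
```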
once i removed the seaborn plot, the CM plots became visible again
as i am seeing my plots now, but they are landing in the metrics section, not the plot section.
that seems like an extra thing a user needs to bother about.. a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself.. maybe i am asking too much 😛
TimelyPenguin76 is there any way to do this from the UI directly, or as a schedule... otherwise i think i will run the cleanup_service as given in the docs...
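(if i end up going the script route, i'd probably base it on the cleanup_service example - roughly a sketch like this, untested; the 30-day threshold is arbitrary and the field names are my reading of that example:)

```python
from datetime import datetime, timedelta
from trains.backend_api.session.client import APIClient

client = APIClient()
# arbitrary threshold: archived tasks untouched for 30 days
threshold = datetime.utcnow() - timedelta(days=30)

tasks = client.tasks.get_all(
    system_tags=['archived'],
    only_fields=['id', 'last_update'],
)
for t in tasks:
    # assuming last_update comes back as a datetime here
    last_update = t.last_update.replace(tzinfo=None) if t.last_update else None
    if last_update and last_update < threshold:
        client.tasks.delete(task=t.id, force=True)
```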
this is when executed directly with Task.init()
AgitatedDove14 it seems uploading artifacts and uploading models are two different things when it comes to the fileserver... when i upload an artifact it works as expected, but when uploading a model using the OutputModel class it wants an output_uri path.. wondering how i can ask it to store the model under the fileserver like artifacts, e.g. LightGBM.1104445eca4749f89962669200481397/artifacts/Model%20object/model.pkl
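for context, this is roughly what my upload code looks like - a sketch only, the project/task names, the localhost URL and the filename are placeholders:

```python
from trains import Task, OutputModel

# same fileserver URL the artifacts go to
task = Task.init(project_name='examples', task_name='manual model upload',
                 output_uri='http://localhost:8081')

output_model = OutputModel(task=task, framework='LightGBM')
# hoping this lands next to the artifacts on the fileserver;
# upload_uri can also be passed here instead of output_uri above
output_model.update_weights(weights_filename='model.pkl',
                            upload_uri='http://localhost:8081')
```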
i am simply proxying it using ssh port forwarding
you already answered it.. it was execute_remotely called with the exit_true argument
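(for the record, the call i meant is roughly this - the queue name is just a placeholder:)

```python
from trains import Task

task = Task.init(project_name='examples', task_name='remote run')
# stops the local process here and re-queues the task for an agent to pick up
task.execute_remotely(queue_name='default', exit_process=True)
```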
seems like a CORS issue in the console logs
ok will give it a try and let you know
allegroai/trains image hash f038c8c6652d
there are multiple scripts under the test/scripts folder.. the example is running one script from that folder
thanks... i was just wondering if i had overlooked any config option for that... as cpu_set might be a possibility for the cpu
it may be that i am new to trains, but in my normal notebook flow both of them are images, and as a trains user i expected this to be under the Plot section since i think it is an image.. in a nutshell, all matplotlib plots display data as an image 🙂
trains is run using docker-compose with allegroai/trains-agent-services:latest and allegroai/trains:latest
it's not just fairness; the scheduled workloads will also be starved of resources if, say, someone runs a training job that by default takes all the available cpus
simply changing it to show() doesn't work in my case, as i am displaying a CM.. what about if i use matshow?
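otherwise i guess i could report the CM explicitly instead of relying on matplotlib auto-capture - something like this sketch (the names and the dummy matrix are placeholders, and i'm assuming Logger.report_confusion_matrix is the right call):

```python
import numpy as np
from trains import Task

task = Task.init(project_name='examples', task_name='confusion matrix test')

cm = np.array([[50, 2],
               [3, 45]])  # dummy 2x2 confusion matrix
# explicit reporting, which should show up under the Plots section
task.get_logger().report_confusion_matrix(
    title='confusion matrix', series='validation', iteration=0, matrix=cm)
```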
