it's not that they are blank.. the whole page is blank, including the plotly plots
whereas i am using simple matplotlib now
ah, if it's a couple of weeks away.. i can wait
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at
http://localhost:8081/Trains%20Test/LightGBM.56ca0c9c9ebf4800b7e4f537295d942c/metrics/LightGBM%20Feature%20Importance%20above%200.0%20threshold/plot%20image/LightGBM%20Feature%20Importance%20above%200.0%20threshold_plot%20image_00000000.png . (Reason: CORS request did not succeed).
ok... is there any way to enforce using a given system-wide env.. so the agent doesn't need to spend time on env creation
seems like a CORS issue in the console logs
ah ok.. any way to avoid it or change it on my side?
that seems like a bit of an extra thing a user needs to bother about.. a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself.. maybe i am asking too much 😛
yeah i still see it.. but that seems to be due to the dns address being blocked by our datacenter
i understand.. it's just that if i have a docker image with the correct env.. i would prefer trains-agent to use that directly
i don't need this right away.. i just wanted to know the possibility of dividing the current machine into multiple workers... i guess if it's not readily available then maybe you guys can discuss whether it makes sense to have it on the roadmap..
in the above example the task id is from a newly generated task, i.e. one created via Task.init()
?
AgitatedDove14 it seems i am having issues when i restart the agent... it fails in creating/setting up the env again... when i clean up the .trains/venv-builds
folder and run a job, the agent is able to create the env fine and run the job successfully.. when i restart the agent it fails with messages like
`Requirement already satisfied: cffi@ file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535531/work from file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535...`
it may be that i am new to trains, but in my normal notebook flow they are both images, and as a trains user i expected it to be under the Plots
section, as i think this is an image.. in a nutshell, all matplotlib plots display data as an image 🙂
is it because of something wrong with this package's build from its owner, or something else?
i am simply proxying it using ssh port forwarding
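for reference, one guess at why the file-server request fails: if only the web UI port is forwarded, the browser at localhost:8080 can't reach the file server on localhost:8081 at all. forwarding all three default trains-server ports might help (user/host are placeholders):

```shell
# forward the default trains-server ports over ssh:
#   8080 = web UI, 8008 = API server, 8081 = file server
# "user@server" is a placeholder for the actual login
ssh -N \
  -L 8080:localhost:8080 \
  -L 8008:localhost:8008 \
  -L 8081:localhost:8081 \
  user@server
```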
as when it ran the first time after cleaning .trains/venv-builds, it output this message for this package - pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
wondering why it specifies this package that way, as for most other packages it just prints the version number
ok will report back
i know it supports conda.. but i have another system-wide env which is not base.. say ml
so i'm wondering if i can configure trains-agent to use that... not standard practice, but just asking if it is possible
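a sketch of what i have in mind in trains.conf — assuming the agent honors a `python_binary` setting (the key name and the env path here are my assumptions, not verified against my trains-agent version):

```
# ~/trains.conf (sketch — key support depends on your trains-agent version)
agent {
    # point the agent at the interpreter of the pre-built system-wide "ml" conda env
    # (the path is a placeholder for wherever that env actually lives)
    python_binary: /opt/conda/envs/ml/bin/python
}
```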
the use case i have is to allow people on my team to run their workloads on a set of servers without stepping on each other..
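the kind of split i'm imagining, sketched with the daemon's `--queue`/`--gpus` flags (the queue names here are placeholders i made up):

```shell
# one machine, two workers, each pinned to its own GPU and queue
# ("team_a"/"team_b" are placeholder queue names)
trains-agent daemon --queue team_a --gpus 0 &
trains-agent daemon --queue team_b --gpus 1 &
```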
allegroai/trains
image hash f038c8c6652d
yes, delete experiments which are old or which for some other reason are not required to keep around
simply changing to show()
doesn't work in my case as i am displaying a confusion matrix.. what if i use matshow instead?
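to make it concrete, here's a tiny sketch of the kind of confusion matrix i'm rendering (the labels and values are made up for illustration; the matshow call is left as a comment since it needs a display):

```python
# hypothetical ground-truth and predicted labels, purely for illustration
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

n_classes = 3
# cm[t][p] counts samples whose true class is t and predicted class is p
cm = [[0] * n_classes for _ in range(n_classes)]
for t, p in zip(y_true, y_pred):
    cm[t][p] += 1

print(cm)  # [[1, 1, 0], [0, 2, 0], [1, 0, 1]]
# rendering it as an image would then be, e.g.:
#   import matplotlib.pyplot as plt
#   plt.matshow(cm)  # draws the matrix as a grid image
#   plt.show()
```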
my use case is more like the 1st one, where the training runs on a certain given schedule
trains is run using docker-compose, with allegroai/trains-agent-services:latest
and allegroai/trains:latest