simply changing to show
doesn't work in my case as i am displaying a CM.. what if i use matshow?
so as you say.. i don't think the issue i am seeing is due to this error
or is there any plan to fix it in an upcoming release?
TimelyPenguin76 yeah when i run matplotlib with show
plots do land under the Plots
section... so it's mainly the imshow
part then.. i am wondering why the distinction, and what is the usual way to emit plots to debug samples
ah ok.. any way to avoid it or change it on my side?
AgitatedDove14 Morning... so what should the value of "upload_uri" be set to, the fileserver_url
e.g. http://localhost:8081 ?
so just under the models
dir rather than artifacts... any way to achieve this, or should i just treat it as an artifact?
seems like setting it to the fileserver_url did the trick
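(for anyone else hitting this: instead of setting it per script, the same default can probably be put in trains.conf — a sketch below; the `sdk.development.default_output_uri` key name is my assumption from the open-source trains SDK config, so worth double-checking against the trains.conf shipped with your version)

```
# trains.conf (sketch; key name assumed, verify against your SDK version)
sdk {
    development {
        # upload models to the fileserver instead of registering local paths
        default_output_uri: "http://localhost:8081"
    }
}
```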
my use case is more like the 1st one, where i run the training on a certain given schedule
couldn't find the licensing price for the enterprise version
TimelyPenguin76 also is there any reason for treating show
and imshow
differently?
it may be that i am new to trains, but in my normal notebook flow they both are images, and as a trains user i expected it to be under the Plots
section, as i think this is an image.. in a nutshell, all matplotlib plots display data as an image 🙂
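(in case it helps anyone reading along: since figures do land under the Plots section when plt.show() is called, my workaround is to render the CM with imshow but still end with plt.show() so the hook can capture it — a minimal sketch with made-up values; the Agg backend line is only so this runs headless)

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
import numpy as np

# toy confusion matrix (made-up values)
cm = np.array([[50, 2],
               [5, 43]])

fig, ax = plt.subplots()
im = ax.imshow(cm, cmap="Blues")  # imshow alone is treated as a debug image
ax.set_xlabel("predicted")
ax.set_ylabel("actual")
fig.colorbar(im)

# calling plt.show() (which trains patches) is what routes the
# figure to the Plots section instead
plt.show()
```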
so i was expecting that the uploaded model would be, for example, LightGBM.1104445eca4749f89962669200481397/models/Model%20object/model.pkl
any logs i can check or debug on my side?
AgitatedDove14 it seems uploading artifacts and uploading models are two different things when it comes to the fileserver... when i upload an artifact it works as expected, but when uploading a model using the OutputModel class, it wants an output_uri
path.. wondering how i can ask it to store it under the fileserver
like artifacts LightGBM.1104445eca4749f89962669200481397/artifacts/Model%20object/model.pkl
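just to illustrate the layout i mean: from the two paths above, the fileserver object path looks like `<project>.<task_id>/<kind>/<name>/<filename>` with the name URL-quoted — a tiny hypothetical helper (my own, not a trains API) to show the scheme:

```python
from urllib.parse import quote

def fileserver_path(project: str, task_id: str, kind: str,
                    name: str, filename: str) -> str:
    """Hypothetical helper mirroring the path layout seen on the fileserver."""
    return f"{project}.{task_id}/{kind}/{quote(name)}/{filename}"

# the artifact path, as observed
print(fileserver_path("LightGBM", "1104445eca4749f89962669200481397",
                      "artifacts", "Model object", "model.pkl"))
# → LightGBM.1104445eca4749f89962669200481397/artifacts/Model%20object/model.pkl

# under the same scheme a model would presumably use "models" instead
print(fileserver_path("LightGBM", "1104445eca4749f89962669200481397",
                      "models", "Model object", "model.pkl"))
```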
looking at the code https://github.com/allegroai/trains/blob/65a4aa7aa90fc867993cf0d5e36c214e6c044270/trains/model.py#L1146 this happens when storage_uri is not defined, whereas i have this under trains.conf
so the task should have it?
thanks Martin.. at least something to go on.. if i have any issue then i know which component's logs to look at
whereas i am using simple matplotlib now
AgitatedDove14 when using OutputModel(task, name='LightGBM model', framework='LightGBM').update_weights(f"{args.out}/model.pkl")
i am seeing this in the logs: No output storage destination defined, registering local model /tmp/model.pkl
when i go to the trains UI.. i see the model name and details, but when i try to download it, it points to the path file:///tmp/model.pkl
which is incorrect.. wondering how to fix it
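for context, the pattern i'm trying is roughly this (a sketch only — it assumes a trains fileserver running on localhost:8081 and that Task.init accepts an output_uri argument, which is worth double-checking against your trains version):

```
from trains import Task, OutputModel

# output_uri tells trains where to upload weights; without it the model
# is only *registered* with its local path (the file:///tmp/... symptom)
task = Task.init(project_name="LightGBM", task_name="train",
                 output_uri="http://localhost:8081")  # fileserver url

model = OutputModel(task=task, name="LightGBM model", framework="LightGBM")
model.update_weights("model.pkl")  # should then upload to the fileserver
```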
as i am seeing my plots now, but they are landing in the metrics section, not the plots section.
yeah i still see it.. but that seems to be due to the dns address being blocked by our datacenter
AgitatedDove14 it seems i am having issues when i restart the agent... it fails in creating/setting up the env again... when i clean up the .trains/venv-builds
folder and run a job for the agent.. it is able to create the env fine and run the job successfully.. when i restart the agent it fails with messages like
` Requirement already satisfied: cffi@ file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535531/work from file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535...
as when it runs the first time after cleaning the .trains/venv-builds
folder, it outputs this message for this package - pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
wondering why it specifies this package like that, as for most other packages it just prints the version number
is it because something is wrong with this package's build from its owner, or something else?
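fwiw, as far as i can tell those `pkg @ file:///...` lines are PEP 508 "direct reference" requirements, which pip freeze emits when a package was installed from a local file/build (e.g. a conda-forge feedstock) instead of from PyPI — so a frozen requirements list can mix both forms, e.g.:

```
# plain version pin - what pip freeze usually prints
numpy==1.19.2
# direct reference - printed when the package was installed from a local build,
# and it breaks on another machine where that path doesn't exist
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
```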
thanks for letting me know.. but it turns out that after i recreated my whole system environment from scratch, the trains agent is working as expected..
its not that they are blank.. the whole page is blank, including the plotly plots
trains is run using docker-compose with allegroai/trains-agent-services:latest
and allegroai/trains:latest
this is when executed directly with task.init()