
It's not that they are blank.. the whole page is blank, including the Plotly plots
I am simply proxying it using SSH port forwarding
It may be that I am new to Trains, but in my normal notebook flow they are both images, and as a Trains user I expected them to be under the Plots
section, since I think this is an image.. in a nutshell, all matplotlib plots display data as an image 🙂
This is when executed directly with task.init()
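As an aside, the intuition that a matplotlib plot is ultimately an image can be sketched like this (a minimal standalone example, nothing Trains-specific):

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Whatever the figure contains, savefig() rasterizes it into an
# ordinary image buffer - here a PNG held in memory.
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])
buf = io.BytesIO()
fig.savefig(buf, format="png")
png_bytes = buf.getvalue()
assert png_bytes.startswith(b"\x89PNG")  # PNG magic number
```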
Yeah, I still see it.. but that seems to be due to the DNS address being blocked by our datacenter
Yes, delete experiments which are old or for some other reason not required to keep around
whereas I am using plain matplotlib now
TimelyPenguin76 also, is there any reason for treating show
and imshow
differently?
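For context on why the two might be handled differently: plt.show() only flushes existing figures to the backend, while plt.imshow() actually creates an image artist inside the axes that carries array data. A minimal sketch in plain matplotlib:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

# imshow() adds an AxesImage artist holding the array data; a logger
# that hooks imshow() can therefore treat it as an image, whereas
# show() carries no data of its own - it just displays what exists.
data = np.arange(16).reshape(4, 4)
fig, ax = plt.subplots()
image_artist = ax.imshow(data)
assert len(ax.get_images()) == 1
```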
Thanks Martin.. at least something to go on.. if I have any issue, I now know which component's logs to look for
AgitatedDove14 when using OutputModel(task, name='LightGBM model', framework='LightGBM').update_weights(f"{args.out}/model.pkl")
I am seeing this in the logs: No output storage destination defined, registering local model /tmp/model.pkl
When I go to the Trains UI, I see the model name and details, but when I try to download it, it points to the path file:///tmp/model.pkl
which is incorrect; wondering how to fix it
If it's a couple of weeks away.. I can wait
When it runs the first time after cleaning .trains/venv-builds,
it outputs this message for this package: pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
Wondering why it specifies this package that way, as for most other packages it just prints the version number
Not so sure.. ideally I was looking for some function calls which would enable me to create a sort of DAG that gets scheduled at a given interval, with status checks on upstream tasks... so if an upstream task fails, downstream tasks are not run
Any logs I can check, or ways to debug on my side?
An example achieving what I propose would be greatly helpful
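In the meantime, the upstream-gating logic itself is simple to sketch in plain Python. This is a hypothetical illustration, not a Trains API: `get_status` stands in for whatever call returns a task's state (e.g. polling the server for a task by ID), and the statuses dict is toy data:

```python
import time

def wait_for_upstreams(get_status, upstream_ids, poll_seconds=0.01):
    """Block until every upstream task completes.

    Returns True when all upstreams finished successfully, and False
    as soon as any upstream fails - the caller should then skip the
    downstream tasks instead of running them.
    """
    pending = set(upstream_ids)
    while pending:
        for task_id in list(pending):
            status = get_status(task_id)
            if status == "completed":
                pending.discard(task_id)
            elif status == "failed":
                return False  # an upstream failed: gate the downstream
        if pending:
            time.sleep(poll_seconds)  # wait before polling again
    return True

# Toy status source standing in for a real "fetch task status" call.
statuses = {"a": "completed", "b": "completed"}
assert wait_for_upstreams(statuses.get, ["a", "b"]) is True
statuses["b"] = "failed"
assert wait_for_upstreams(statuses.get, ["a", "b"]) is False
```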
I am seeing my plots now, but they are landing in the Metrics section, not the Plots section.
allegroai/trains
image hash f038c8c6652d
OK... is there any way to enforce using a given system-wide env, so the agent doesn't need to spend time on env creation?
Looking at the above link, it seems I might be able to create it with some boilerplate, as it has the concept of parent and child... but not sure how status checks and dependencies get sorted out
Or is there any plan to fix it in an upcoming release?
Seems like if I remove the plt.figure(figsize=(16, 8))
I start to see the figure title, but not the figure itself
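One common matplotlib pitfall that matches this symptom (a guess, since the original code isn't shown here): plt.figure() opens a new, empty current figure, so anything drawn before or after it can end up on a different figure than the one that gets captured. Creating the figure and axes explicitly and drawing only on them avoids the ambiguity:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

# Create the sized figure first, then draw on its axes explicitly,
# so the title and the plot both live on the same figure object.
fig, ax = plt.subplots(figsize=(16, 8))
ax.set_title("my figure")
ax.plot([0, 1], [0, 1])
assert list(fig.get_size_inches()) == [16.0, 8.0]
```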
So I was expecting that the uploaded model would be, for example, LightGBM.1104445eca4749f89962669200481397/models/Model%20object/model.pkl
The trains-agent version, as mentioned, is 0.16.1, and the server is 0.16.1 as well
AgitatedDove14 Morning... so what should the value of "upload_uri" be set to, the fileserver URL,
e.g. http://localhost:8081 ?
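For reference, my understanding is that the destination can be set either per-task via Task.init(..., output_uri=...) or globally in trains.conf, with the fileserver URL as the value. The exact key name below is from my reading of the docs, so treat it as an assumption to verify:

```
# trains.conf - assumed key name, verify against your version's docs
sdk {
    development {
        # upload model weights to the trains fileserver instead of
        # registering a local /tmp path
        default_output_uri: "http://localhost:8081"
    }
}
```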
AgitatedDove14 it seems I am having issues when I restart the agent... it fails in creating/setting up the env again... when I clean up the .trains/venv-builds
folder and run a job, the agent is able to create the env fine and run the job successfully.. when I restart the agent, it fails with messages like
` Requirement already satisfied: cffi@ file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535531/work from file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535...
Simply changing to show
doesn't work in my case, as I am displaying a confusion matrix.. what about if I use matshow?
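For what it's worth, matshow() is essentially imshow() with matrix-friendly defaults (origin at the top-left, ticks along the top), so it also creates an image artist in the axes. A minimal confusion-matrix sketch in plain matplotlib:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

# A toy 2x2 confusion matrix rendered with matshow(), with the
# counts written into each cell for readability.
cm = np.array([[50, 2], [3, 45]])
fig, ax = plt.subplots()
ax.matshow(cm)
for (row, col), count in np.ndenumerate(cm):
    ax.text(col, row, str(count), ha="center", va="center")
```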
Thanks for letting me know.. but it turns out that after I recreated my whole system environment from scratch, the trains agent is working as expected..
AgitatedDove14 sorry, having issues on my side connecting to the server to test it.. but the directory structure when I execute the command is like this:
Directory layout: ~/test/scripts/script.py
~$ python -m test.scripts.script --args
I know it supports conda.. but I have another system-wide env which is not base.. say ml
So wondering if I can configure trains-agent to use that... not standard practice, but just asking if it is possible
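One knob I'm aware of (worth verifying in the agent docs, as I may be misremembering the exact keys) is pointing the agent's package manager at conda and letting created environments see system-wide packages in trains.conf:

```
# trains.conf - key names assumed, verify against the agent docs
agent {
    package_manager {
        # use conda instead of pip for environment creation
        type: conda,
        # let the job's env reuse packages already installed system-wide
        system_site_packages: true,
    }
}
```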