as when it ran the first time after cleaning .trains/venv-builds, it output this message for this package - pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
wondering why it specifies this package that way, as for most other packages it just prints the version number
my use case is more like the 1st one, where I run the training on a certain given schedule
is there any example in the repo which I can go through?
ok, so the controller task is a simple placeholder which runs indefinitely, fetches a task template, and queues it..
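so the loop would boil down to something like this, if I understand it: a minimal pure-Python sketch, where `clone_and_enqueue_template` is a hypothetical stand-in for the actual trains calls (roughly `Task.clone` followed by `Task.enqueue`):

```python
import time

def clone_and_enqueue_template():
    # Hypothetical stand-in: with trains this would clone the template task
    # and push the clone onto an execution queue for the agent to pick up.
    print("enqueued a fresh copy of the template task")

def controller_loop(interval, clock=time.time, trigger=clone_and_enqueue_template,
                    max_runs=None):
    """Fire `trigger` every `interval` seconds; `max_runs` bounds the loop for testing."""
    runs = 0
    next_run = clock()
    while max_runs is None or runs < max_runs:
        if clock() >= next_run:
            trigger()
            runs += 1
            next_run += interval
        else:
            # sleep briefly so the placeholder task runs "infinitely" without busy-waiting
            time.sleep(min(1.0, next_run - clock()))
```

so calling `controller_loop(interval=24 * 3600)` would enqueue one copy per day; the real controller would additionally carry the queue name and the template task id.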
looking at the code https://github.com/allegroai/trains/blob/65a4aa7aa90fc867993cf0d5e36c214e6c044270/trains/model.py#L1146 - this happens when storage_uri is not defined, whereas I have it set under trains.conf
so the task should have it?
thanks AgitatedDove14 for the links.. seems like I might try the first one if it works out, before going the route of creating full framework support, since in our case the team uses multiple different frameworks
AgitatedDove14 it seems uploading artifacts and uploading models are two different things when it comes to the fileserver... when I upload an artifact it works as expected, but when uploading a model using the OutputModel class, it wants an output_uri
path.. wondering how I can ask it to store the model under the fileserver
like artifacts go under LightGBM.1104445eca4749f89962669200481397/artifacts/Model%20object/model.pkl
so I was expecting the uploaded model to end up under, for example, LightGBM.1104445eca4749f89962669200481397/models/Model%20object/model.pkl
AgitatedDove14 Morning... so what should the value of "upload_uri" be set to, the fileserver_url,
e.g. http://localhost:8081 ?
so just under the models dir rather than artifacts... is there any way to achieve this, or should I just treat it as an artifact?
seems like setting it to the fileserver_url did the trick
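for the record, the same default can apparently be set once in trains.conf so every task uploads to the fileserver without passing the URI each time - a sketch assuming the `sdk.development.default_output_uri` key and the default fileserver port:

```
sdk {
    development {
        # assumption: trains-fileserver listening on its default port 8081
        default_output_uri: "http://localhost:8081"
    }
}
```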
AgitatedDove14 it seems I am having issues when I restart the agent... it fails in creating/setting up the env again... when I clean up the .trains/venv-builds
folder and run a job for the agent, it is able to create the env fine and run the job successfully.. when I restart the agent, it fails with messages like
` Requirement already satisfied: cffi@ file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535531/work from file:///home/conda/feedstock_root/build_artifacts/cffi_1595805535...`
thanks for the update... it seems currently I cannot pass the http/s proxy parameters: when the agent creates a new env and tries to download some package, it gets blocked by our corp firewall... all outgoing connections need to pass through a proxy.. so is it possible to specify that, or environment variables, to the agent?
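what I had in mind is something like this, assuming the agent's pip/conda calls honor the standard proxy environment variables (the proxy host below is made up):

```shell
# hypothetical corporate proxy address
export HTTP_PROXY="http://proxy.corp.example:3128"
export HTTPS_PROXY="http://proxy.corp.example:3128"
# keep traffic to the local trains-server off the proxy
export NO_PROXY="localhost,127.0.0.1"
# then restart the agent in this same shell, e.g.: trains-agent daemon --queue default
```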
ok... is there any way to enforce using a given system-wide env, so the agent doesn't need to spend time on env creation?
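if I'm reading the agent config right (this key is an assumption on my side, worth checking against the trains-agent docs), something like this in the conf would at least let the created venv inherit the system-wide packages instead of reinstalling them:

```
agent {
    package_manager {
        # assumption: makes the created venv see the system site-packages
        system_site_packages: true
    }
}
```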
thanks for letting me know.. but it turns out that after I recreated my whole system environment from scratch, trains-agent is working as expected..
is it because of something wrong with this package's build from its owners, or something else?
AgitatedDove14 sorry, having issues on my side connecting to the server to test it.. but the directory layout when I execute the command is like this: `~/test/scripts/script.py`, run as `~$ python -m test.scripts.script --args`
I cannot check the working directory today due to VPN issues in accessing the server, but the script path was `-m test.scripts`
it was missing `script`
from it
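the difference is easy to reproduce locally (directory and module names taken from the messages above):

```shell
# recreate the layout from the chat: test/scripts/script.py
mkdir -p test/scripts
touch test/__init__.py test/scripts/__init__.py
printf 'print("ok")\n' > test/scripts/script.py

# the full module path, including the script name, works:
python3 -m test.scripts.script
# 'python3 -m test.scripts' alone would fail, since the package has no __main__.py
```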
couldn't find the licensing price for enterprise version
it still tries to create a new env
ok will report back
also one thing I noticed.. when I report a confusion matrix and some other plots, e.g. seaborn with matplotlib, on the server side I can see the plots are there but they are not visible at all
once I removed the seaborn plot, the CM plots become visible again
it's not that they are blank.. the whole page is blank, including the plotly plots
trains is run using docker-compose, with allegroai/trains-agent-services:latest
and allegroai/trains:latest
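for context, the relevant bit of my compose file looks roughly like this (service names are my guess at the standard trains docker-compose layout; only the image tags are from my actual setup):

```yaml
services:
  trains-server:
    image: allegroai/trains:latest
  trains-agent-services:
    image: allegroai/trains-agent-services:latest
```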
this is when executed directly with task.init()