I know it supports conda, but I have another system-wide env which is not base, say ml
so wondering if I can configure trains-agent to use that... not standard practice, but just asking if it is possible
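Something like this in trains.conf is what I had in mind (just a sketch, assuming agent.python_binary and agent.package_manager.system_site_packages can be combined this way; the ml env path is a placeholder):
```
agent {
    # point the agent at the interpreter of the system-wide "ml" conda env
    python_binary: "/opt/conda/envs/ml/bin/python"

    package_manager {
        type: conda,
        # let the environment the agent builds see already-installed packages
        system_site_packages: true,
    }
}
```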
it still tries to create a new env
ok will report back
Couldn't find the licensing price for the enterprise version.
That seems like a bit of an extra thing a user needs to bother about... a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself. Maybe I am asking too much 😛
Yes, delete experiments which are old or for some other reason not required to keep around.
I think for now it should do the trick... I was just thinking about the roadmap part.
I cannot check the working directory today due to VPN issues in accessing the server, but the script path was -m test.scripts
The script was missing from it.
The test package is not installed, but it's in the current working directory.
The trains-agent version, as mentioned, is 0.16.1, and the server is 0.16.1 as well.
AgitatedDove14 no it doesn't work
TimelyPenguin76 is there any way to do this using the UI directly, or as a schedule... otherwise I think I will run the cleanup_service as given in the docs...
Any example in the repo which I can go through?
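This is roughly what I am after (a rough sketch based on my reading of the docs' cleanup_service idea; the archived-only filter and the 30-day cutoff are placeholders I picked, not values from the docs):
```
from datetime import datetime, timedelta

from trains.backend_api.session.client import APIClient

client = APIClient()
# delete anything whose status has not changed in the last 30 days
cutoff = datetime.utcnow() - timedelta(days=30)

tasks = client.tasks.get_all(
    system_tags=['archived'],                # only touch archived experiments
    status_changed=['<{}'.format(cutoff)],   # server-side age filter
    only_fields=['id'],
    page_size=100,
    page=0,
)
for task in tasks:
    # force=True deletes even tasks that are not in a normally deletable state
    client.tasks.delete(task=task.id, force=True)
```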
Looking at the code https://github.com/allegroai/trains/blob/65a4aa7aa90fc867993cf0d5e36c214e6c044270/trains/model.py#L1146 this happens when storage_uri is not defined, whereas I have this set under trains.conf
So the task should have it?
AgitatedDove14 Morning... so what should the value of "upload_uri" be set to, the fileserver_url?
e.g. http://localhost:8081 ?
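i.e., if I understand correctly, something like this (a sketch; the project/task names are mine, and presumably the same URL could also go under sdk.development.default_output_uri in trains.conf):
```
from trains import Task

task = Task.init(
    project_name='examples',
    task_name='lightgbm training',
    # default upload destination for models, pointing at the files server
    output_uri='http://localhost:8081',
)
```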
So just under the models dir rather than an artifact... any way to achieve this, or should I just treat it as an artifact?
AgitatedDove14 when using OutputModel(task, name='LightGBM model', framework='LightGBM').update_weights(f"{args.out}/model.pkl")
I am seeing this in the logs: No output storage destination defined, registering local model /tmp/model.pkl
When I go to the trains UI, I see the model name and details, but when I try to download it, it points to the path file:///tmp/model.pkl
which is incorrect... wondering how to fix it
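If I read model.py right, passing an explicit upload_uri to update_weights should make it upload the file rather than just register the local path. A sketch of what I mean (the URL is a placeholder for the fileserver):
```
from trains import Task, OutputModel

task = Task.init(project_name='examples', task_name='lightgbm training')

output_model = OutputModel(task=task, name='LightGBM model', framework='LightGBM')
output_model.update_weights(
    weights_filename='/tmp/model.pkl',
    # without an upload destination, the model is only registered locally
    upload_uri='http://localhost:8081',
)
```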
Thanks AgitatedDove14 for the links... seems like I might try the first one, if it works out, before going the route of creating full framework support, as in our case the team uses multiple different frameworks.
So I was expecting that the uploaded model would be, for example, LightGBM.1104445eca4749f89962669200481397/models/Model%20object/model.pkl
Seems like setting it to the fileserver_url did the trick.
Not just fairness; the scheduled workloads will be starved of resources if, say, someone runs a training job which by default takes all the available CPUs.
It's not that they are blank... the whole page is blank, including the Plotly plots.
Thanks for the update... it seems I currently cannot pass the http/s proxy parameters: when the agent creates a new env and tries to download a package, it gets blocked by our corp firewall. All outgoing connections need to pass through a proxy, so is it possible to specify that, or environment variables, to the agent?
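What I had in mind, roughly (pip/requests honor the standard proxy environment variables, so exporting them before starting the agent should work; the proxy host/port and no-proxy list are placeholders for our setup):
```
export HTTP_PROXY=http://proxy.mycorp.local:3128
export HTTPS_PROXY=http://proxy.mycorp.local:3128
# keep local trains-server traffic off the proxy
export NO_PROXY=localhost,127.0.0.1

trains-agent daemon --queue default
```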
OK... is there any way to enforce using a given system-wide env, so the agent doesn't need to spend time on env creation?
Thanks for letting me know... but it turns out that after I recreated my whole system environment from scratch, trains-agent is working as expected.
I understand... it's just that if I have a docker image with the correct env, I would prefer that trains-agent use it directly.
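i.e., something like running the agent in docker mode with our image (a sketch; the image name is a placeholder):
```
# execute queued tasks inside the given docker image instead of building a venv
trains-agent daemon --queue default --docker my-registry/ml-env:latest
```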
When it ran the first time after cleaning .trains/venv-build, it output this message for this package: pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
Wondering why it lists this package like that, when for most other packages it just prints the version number.