So I was expecting the uploaded model to end up at, for example, LightGBM.1104445eca4749f89962669200481397/models/Model%20object/model.pkl
Thanks AgitatedDove14 for the links. Seems like I'll try the first one and see if it works out before going the route of building full framework support, since in our case the team uses multiple different frameworks.
I know it's not magic... it's all the Linux subsystem underneath, just a matter of configuring it the way we need 🙂 For now I'll stick with the current CPU-only setup and coordinate within the team. Later on, when the need arises, we'll see whether we go for k8s or not.
Thanks for the update... it seems I currently can't pass the HTTP/S proxy parameters: when the agent creates a new env and tries to download a package, the connection gets blocked by our corporate firewall, since all outgoing connections have to go through a proxy. Is it possible to specify that, or pass environment variables to the agent?
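In the meantime I'm assuming the standard HTTP_PROXY/HTTPS_PROXY variables would work, since pip and git honor them; something like this wrapper is what I have in mind (proxy URL and queue name are placeholders):

```python
import os
import subprocess

# Export the standard proxy variables before starting the agent, so the
# pip/git processes it spawns inherit them. The proxy URL is a placeholder
# for our corporate proxy.
os.environ["HTTP_PROXY"] = "http://proxy.corp.local:3128"
os.environ["HTTPS_PROXY"] = "http://proxy.corp.local:3128"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

# Child processes inherit os.environ, so the agent's package downloads
# should go through the proxy.
subprocess.run(["trains-agent", "daemon", "--queue", "default"], check=True)
```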
When it ran for the first time after cleaning .trains/venv-build, it printed this for one package: pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work. Wondering why it records this package that way, when for most other packages it just prints the version number.
I ran it this week.
AgitatedDove14 Morning... so what should the value of "upload_uri" be set to? The fileserver_url, e.g. http://localhost:8081?
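If it helps, in code the equivalent seems to be the output_uri argument of Task.init; a minimal sketch (project/task names are just examples, adjust the URL to your fileserver):

```python
from trains import Task

# Direct model/artifact uploads to the deployment's fileserver.
task = Task.init(
    project_name="examples",
    task_name="upload-test",
    output_uri="http://localhost:8081",  # fileserver URL of my local setup
)
```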
Once I removed the seaborn plot, the CM plots became visible again.
That seems like a bit of an extra thing for a user to bother with... a better deployment model would be for it to be part of the api-server deployment and configurable from the UI itself. Maybe I'm asking for too much 😛
TimelyPenguin76 is there any way to do this directly from the UI, or as a schedule? Otherwise I think I'll run the cleanup_service as given in the docs...
This is when it's executed directly with Task.init().
I'm simply proxying it using SSH port forwarding.
You already replied to it... it was execute_remotely called with exit_process=True.
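i.e. roughly this pattern (queue name is just an example):

```python
from trains import Task

task = Task.init(project_name="examples", task_name="remote-run")

# Enqueue this task for an agent to execute, and terminate the local
# process once it has been enqueued (exit_process=True).
task.execute_remotely(queue_name="default", exit_process=True)
```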
Seems like a CORS issue in the console logs.
allegroai/trains image hash f038c8c6652d
There are multiple scripts under the test/scripts folder... the example is running one script from that folder.
Trains is run using docker-compose, with allegroai/trains-agent-services:latest and allegroai/trains:latest.
It's not just fairness: the scheduled workloads will be starved of resources if, say, someone runs a training job that by default takes all the available CPUs.
So just under the models dir rather than artifacts... is there any way to achieve this, or should I just treat it as an artifact?
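From what I can tell, registering the file as an output model instead of an artifact should make it show up under models; a sketch assuming the OutputModel API (names and path are placeholders):

```python
from trains import OutputModel, Task

task = Task.init(project_name="examples", task_name="register-model")

# Register the pickle as the task's output model rather than calling
# task.upload_artifact(), so it appears in the models section.
model = OutputModel(task=task, framework="LightGBM")
model.update_weights(weights_filename="model.pkl")
```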
Seems like setting it to the fileserver_url did the trick.
Yeah, I still see it... but that seems to be due to the DNS address being blocked by our datacenter.
My use case is more like the first one: run the training on a given schedule.
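i.e. something like cloning a template task and enqueuing it on a schedule; a rough sketch (task id, queue name, and interval are placeholders):

```python
import time

from trains import Task

TEMPLATE_TASK_ID = "<template-task-id>"  # a previously run task to re-execute

while True:
    # Clone the template and push the clone onto an execution queue
    # for an agent to pick up.
    cloned = Task.clone(source_task=TEMPLATE_TASK_ID, name="scheduled training")
    Task.enqueue(cloned, queue_name="default")
    time.sleep(24 * 60 * 60)  # once a day
```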
It's not that they're blank... the whole page is blank, including the plotly plots.
AgitatedDove14 no, it doesn't work.
Seems like if I remove the plt.figure(figsize=(16, 8)) call, I start to see the figure title but not the figure itself.
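My current guess at a workaround is to give each plot its own figure with a unique title, so the auto-logging doesn't overwrite one with the other; a minimal repro with dummy data:

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

cm = np.random.randint(0, 100, size=(3, 3))  # dummy confusion matrix

# One figure per plot, each with its own name and title, so trains
# reports them as separate plots instead of the seaborn plot hiding the CM.
plt.figure("confusion-matrix")
sns.heatmap(cm, annot=True)
plt.title("Confusion matrix")
plt.show()

plt.figure("distribution", figsize=(16, 8))
sns.kdeplot(np.random.randn(500))
plt.title("Score distribution")
plt.show()
```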
