i ran it this week
seems like if i remove the plt.figure(figsize=(16, 8)) call
i start to see the figure title but not the figure itself
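for context, a common cause of this symptom is figure-creation order: matplotlib draws on the "current" figure, so calling plt.figure() after the plot was drawn opens a second, empty figure. a minimal sketch of the safe ordering (headless Agg backend used here just so the sketch runs anywhere; the plotted data is made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt

# create the sized figure FIRST, then draw on it; calling
# plt.figure(figsize=...) after plotting would start a new,
# blank figure and leave the real plot on the old one
fig = plt.figure(figsize=(16, 8))
plt.plot([0, 1, 2], [0, 1, 4])
plt.title("example figure")
```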
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at
http://localhost:8081/Trains%20Test/LightGBM.56ca0c9c9ebf4800b7e4f537295d942c/metrics/LightGBM%20Feature%20Importance%20above%200.0%20threshold/plot%20image/LightGBM%20Feature%20Importance%20above%200.0%20threshold_plot%20image_00000000.png . (Reason: CORS request did not succeed).
seems like a CORS issue in the console logs
seems like port forwarding had an issue.. fixed that.. now running the test again to see if things work out as expected
i mean linking more in the UI.. as when i go to the model detail page, i can see that a given experiment created this model and can click on that to see its details... so something similar to that for ensemble models
while you guys are gonna work on it.. just a small feature addition to it.. it would be cool to have a DAG figure which shows how models are linked under this task, and the ability to just click a circle in that DAG figure to navigate to the given task... i think it will be a very useful UX 🙂
look forward to the new job workflow part in 0.16 then 🙂
ok... is there any way to enforce using a given system-wide env.. so the agent doesn't need to spend time on env creation
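one way the env question above could be approached is via the agent's config file; a sketch of a trains.conf fragment, assuming the keys from the trains-agent reference config apply to this version (the /opt/envs/ml path is a made-up example):

```
# trains.conf on the agent machine (sketch, not verified against
# this exact agent version)
agent {
    # point the agent at the interpreter of the pre-built env
    python_binary: "/opt/envs/ml/bin/python"
    package_manager {
        # reuse packages already installed in that interpreter's env
        # instead of resolving everything into a fresh virtualenv
        system_site_packages: true
    }
}
```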
thanks... i was just wondering if i overlooked any config option for that... such as cpu_set
might be a possibility for cpu
i don't need this right away.. i just wanted to know the possibility of dividing the current machine into multiple workers... i guess if it's not readily available then maybe you guys can discuss to see if it makes sense to have it on the roadmap..
i know it's not magic... it's all the linux subsystem underneath.. just a matter of configuring it as needed 🙂 for now i think i will stick with the current setup of cpu-only mode and coordinate within the team. later on, when the need comes.. we will see if we go for k8s or not
i cannot check the working directory today due to vpn issues accessing the server, but the script path was -m test.scripts
it was missing the script from it
i guess i was not so clear maybe.. say e.g. you're running a lightgbm model training: by default it will take all the cpus available on the box and run that many threads. now another task gets scheduled on the same box, so you have 2x threads with the same amount of CPU to schedule on. yes, the jobs will progress, but the progression will not be the same due to context switches, which will happen way more often than if we had allowed only 1/2x threads for each job
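the oversubscription described above can be avoided by splitting the machine's cores between concurrent jobs; a minimal stdlib sketch (threads_per_job is a hypothetical helper, not a trains API; num_threads is LightGBM's actual parameter name):

```python
import os

def threads_per_job(concurrent_jobs: int) -> int:
    """Split the box's cores evenly between jobs so two trainings
    don't oversubscribe the CPU (hypothetical helper)."""
    cores = os.cpu_count() or 1
    return max(1, cores // max(1, concurrent_jobs))

# e.g. pass the result to LightGBM so each of 2 co-located jobs
# only spawns half the threads:
# lgb.train({"num_threads": threads_per_job(2), ...}, train_set)
```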
thanks AgitatedDove14 for the links.. seems like i might try the first one if it works out.. before going the route of creating full framework support, as in our case the team uses multiple different frameworks
not so sure.. ideally i was looking for some function calls which enable me to create a sort of DAG which gets scheduled at a given interval, and the DAG has status checks on upstream tasks... so if an upstream task fails, the downstream tasks are not run
an example achieving what i propose would be greatly helpful
as if its couple of weeks away.. i can wait
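pending native support, the upstream-status behavior described above can be sketched in plain Python (Step, its statuses, and the example step names are all hypothetical, not a trains/clearml API):

```python
from typing import Callable, List, Optional

class Step:
    """One node in a toy DAG: runs only if all upstreams succeeded."""

    def __init__(self, name: str, fn: Callable[[], None],
                 upstream: Optional[List["Step"]] = None):
        self.name = name
        self.fn = fn
        self.upstream = upstream or []
        self.status = "pending"  # pending | success | failed | skipped

    def run(self) -> None:
        # skip instead of running when any upstream did not succeed
        if any(u.status != "success" for u in self.upstream):
            self.status = "skipped"
            return
        try:
            self.fn()
            self.status = "success"
        except Exception:
            self.status = "failed"

# usage: steps listed in topological order; "train" fails on purpose,
# so "deploy" is skipped rather than run against a broken upstream
extract = Step("extract", lambda: None)
train = Step("train", lambda: 1 / 0, upstream=[extract])
deploy = Step("deploy", lambda: None, upstream=[train])
for step in (extract, train, deploy):
    step.run()
```

a real scheduler would add the periodic trigger and persistence, but the failure-propagation rule is the part sketched here.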
this looks good... also do you have any info/eta on the next controller/service release you were mentioning?
in the above example, is the task id from a newly generated task like Task.init()?
it still tries to create a new env
i know it supports conda.. but i have another system-wide env which is not base.. say ml
so wondering if i can configure trains-agent to use that... not standard practice, but just asking if it is possible
ok will report back
you replied it already.. it was execute_remotely called with the exit_process=True argument