Any chance you can open a github issue on it?
Will do!
Only one param, just playing around
As in: run a training experiment, then a test/validation experiment to choose the best model, etc., and also have a human validate sample results via annotations, all as part of a pipeline
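Roughly something like this is what I mean (just a sketch - project, task names, and queue are placeholders):
```python
from clearml import PipelineController

# Hypothetical three-step pipeline: train -> validate -> human annotation/review
pipe = PipelineController(name="train-validate-annotate", project="examples", version="0.1")

pipe.add_step(name="train",
              base_task_project="examples", base_task_name="training task")
pipe.add_step(name="validate", parents=["train"],
              base_task_project="examples", base_task_name="validation task")
pipe.add_step(name="annotate", parents=["validate"],
              base_task_project="examples", base_task_name="annotation review task")

pipe.start(queue="default")
```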
The Optimizer task is taking a lot of time to complete. Is it doing something here:
I have my notebooks in a git repo and partially use nbdev for scripts
As in, if there are jobs, the first level is new pods, and the second level is new nodes in the cluster.
And exact output after the package install and stuff:
Environment setup completed successfully
Starting Task Execution:
None
Ah thanks for the pointer AgitatedDove14
Ok, just my ignorance then?
In this case, particularly because of the pickle protocol version difference between Python 3.7 and 3.8
One more question - is this something I can pass as task_overrides in add_step when creating a pipeline?
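Something like this is what I'm imagining (sketch only - the particular override keys and values here are just examples on my part):
```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="0.1")

pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="training task",
    # task_overrides takes dotted task-field paths to patch on the cloned task
    task_overrides={
        "script.branch": "main",
        "container.image": "python:3.8-slim",
    },
)
```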
One thing I am looking at is nbdev from fastai folks
Thanks, that works. Had to use Task.completed() for my version
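For reference, this is what I ended up with (task id is a placeholder):
```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")  # placeholder id
# Newer SDK versions expose task.mark_completed(); on my version it's task.completed()
task.completed()
```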
Thoughts, AgitatedDove14 SuccessfulKoala55? Some help would be appreciated.
All right, got it, will try it out. Thanks for the quick response.
Now if dataset1 is updated, I want the process to update dataset2
dataset1 -> process -> dataset2
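I was thinking something along these lines with the TriggerScheduler (sketch - the task id, queue, and project names are placeholders):
```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=5)

# Fire the "process" task whenever a new version of dataset1 lands
trigger.add_dataset_trigger(
    schedule_task_id="<process-task-id>",  # placeholder: task to clone & enqueue
    schedule_queue="default",
    trigger_project="datasets/dataset1",
    name="dataset1-updated",
)

trigger.start()
```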
Will try it out. Pretty impressed.
Currently we train from Sagemaker notebooks, push models to S3 and create containers for model serving
Can I switch off the git diff (change detection)?
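From the docs it looks like this clearml.conf switch might be the one that controls it - is that right?
```
sdk {
  development {
    # don't store the uncommitted git diff with the task
    store_uncommitted_code_diff: false
  }
}
```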
Pushed the changes, not sure if it's fully right. Do let me know. But the functionality is working
AgitatedDove14 - I am disabling pytorch like above but still see auto-logged models. I even see a model import when running evaluation from a loaded model
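For reference, the disable call I'm using looks like this (standard auto_connect_frameworks usage; project/task names are placeholders):
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="evaluation",
    # this should stop torch.save()/torch.load() from being auto-logged as models
    auto_connect_frameworks={"pytorch": False},
)
```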
Any reason to not have those as two datasets?
AgitatedDove14 - worked with a mutable copy! So it was definitely related to the symlinks in some form
Like I said, it works, but it goes into the error loop