
Is there a way to do so without touching the config? Directly through the Task object?
Now I see the watermarks are 2GB
I guess the AMI auto updated
Why does it deplete so fast?
Increased to 20, let's see how long it will last 🙂
I'm asking because the DSes we have work on multiple projects, and they have only one trains.conf file; I wouldn't want them to edit it each time they switch projects
Okay so that is a bit complicated
In our setup, the DSes don't really care about agents, the agents are being managed by our MLops team.
So essentially, if you imagine it, the use case looks like this:
A data scientist wants to execute some CPU-heavy task. The MLops team supplied him with a queue name, and the data scientist knows that when he needs something heavy he pushes it there - the DS doesn't know anything about where it is executed; the execution environment is fully managed by the ML...
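That hand-off can be sketched with the ClearML SDK: the DS only needs the queue name. This is a sketch under assumptions, not the team's actual code - the project, task, and queue names below are made up, and the exact `execute_remotely` arguments are worth verifying against your SDK version:

```python
def push_to_queue(queue_name="cpu_heavy"):
    """Stop the local run and enqueue the task for remote execution.

    Sketch only: assumes the ClearML SDK is installed and configured;
    'cpu_heavy', 'examples', and 'heavy-step' are made-up names.
    Whatever agent the MLops team attached to queue_name picks it up -
    the DS never needs to know about the execution environment.
    """
    from clearml import Task  # lazy import keeps the sketch importable

    task = Task.init(project_name="examples", task_name="heavy-step")
    # exit_process=True (the default) terminates the local process once enqueued
    task.execute_remotely(queue_name=queue_name)
```

The point of the pattern is the separation of concerns: the queue name is the whole contract between the DS and the MLops team.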
I want to get the instances of the tasks executed by this controller task
How do I get all children tasks given a parent?
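A hedged sketch of that child-task lookup with the ClearML SDK - `Task.get_tasks` with a `task_filter` on the `parent` field is the usual route, but the filter key is worth verifying against your server version, and the helper name is mine:

```python
def get_child_tasks(parent_task_id):
    """Return the Task objects whose parent field matches parent_task_id.

    Sketch only: assumes the ClearML SDK is installed and configured;
    the 'parent' task_filter key is an assumption to double-check.
    """
    from clearml import Task  # lazy import keeps the sketch importable

    return Task.get_tasks(task_filter={"parent": parent_task_id})

# Usage (hypothetical controller task):
# children = get_child_tasks(controller_task.id)
```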
inference table is a pandas dataframe
I re-executed the experiment, nothing changes
I'm using iteration = 0 at the moment, and I "choose" the max and it shows as a column... But the column header is not the scalar name (because it cuts it and adds the > sign to signal max).
For the sake of comparing and sorting, it makes sense to log a scalar with a given name without the iteration dimension
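A minimal sketch of that idea, assuming a recent ClearML SDK where the logger exposes `report_single_value` (one value per experiment, no iteration axis, so it compares and sorts as a plain column); the helper name is mine:

```python
def log_summary_value(name, value):
    """Log a single per-experiment value with no iteration dimension.

    Sketch only: assumes a recent ClearML SDK (Logger.report_single_value)
    and a task already created via Task.init in the calling code.
    """
    from clearml import Task  # lazy import keeps the sketch importable

    Task.current_task().get_logger().report_single_value(name=name, value=value)

# Usage (inside an experiment that already called Task.init):
# log_summary_value("best_accuracy", 0.93)
```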
From the examples I figured this would appear as a scatter plot with X and Y axes and only one point... Does it avoid that?
What about permissions for the machines that are being spun up? For example, if I want the instances to have specific permissions to read/write to S3, how do I manage those?
:face_palm: 🤔 :man-tipping-hand:
So once I enqueue it, it is up? The docs say I can configure the queues that the autoscaler listens to in order to spin up instances, inside the autoscale task - I wanted to make sure that this config has nothing to do with where the autoscale task itself was enqueued
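For reference, the queue-to-resource mapping lives inside the autoscaler task's own configuration object, roughly like the fragment below. The key names follow the ClearML AWS autoscaler example and may differ between versions, and the queue and resource names here are made up - it is indeed unrelated to the queue the autoscaler task itself was enqueued to:

```yaml
resource_configurations:
  cpu_large:                        # arbitrary resource name
    instance_type: m5.xlarge
    availability_zone: us-east-1b
queues:
  heavy_cpu: [["cpu_large", 5]]     # up to 5 cpu_large instances for this queue
```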
I double-checked the credentials in the configurations, and they have full EC2 access
AgitatedDove14 just a reminder in case you missed this question 😄
No, absolutely not. Yes, I do have a GOOGLE_APPLICATION_CREDENTIALS environment variable set, but nowhere do we save anything to GCS. The only usage is in the code which reads from BigQuery
In standard Docker, TimelyPenguin76, the quoting you mentioned is wrong, since the whole argument is passed as one - hence the tricky double quoting I posted above
Does the services mode have a separate configuration for the base image?
My current versions of the images used:
I don't think that has anything to do with the value zero. The lines that should come out of 'mean' and 'median' have a value of None under quantile, but have a dre_0.5 associated with them. Those lines appear in the notebook but not in the UI