This is part of a bigger process which takes quite some time and resources. I hope I can try this soon, if it will help get to the bottom of this
Another Q on that - does pyhocon allow me to edit the file while keeping the comments in place?
so basically - if she has new commits locally that weren't pushed, it won't work
But if she did not commit her latest changes, and now she enqueues - it will work?
doesn't contain the number 4
So once I enqueue it, it is up? The docs say I can configure the queues that the autoscaler listens to (in order to spin up instances) inside the autoscale task - I wanted to make sure that this config has nothing to do with where the autoscale task itself was enqueued
I think I got it, I'll ping her again if it doesn't succeed
AgitatedDove14 worked like a charm, thanks a lot!
it seems apiserver_conf doesn't even change
how do I run this wizard? is this wizard Trains' or AWS's?
The Trains docs never mention what I should do on the AWS interface... So I'm not sure at what point I should encounter this wizard
I'm going to play with it a bit and see if I can figure out how to make it work
That's awesome, but my problem right now is that I have my own cronjob deleting the contents of /tmp each interval, and it deletes the cfg files... So I understand I must skip deleting them from now on
So how do I solve the problem? Should I just relaunch the agents? Because they can't execute jobs now
I'll try to work with that
Legit, if you have a cached_file (i.e. exists and accessible), you can return it to the caller
I agree, so shouldn't it be if cached_file: return cached_file instead of if not cached_file: return cached_file
nvidia/cuda:10.1-base-ubuntu18.04
AgitatedDove14 permanent. I want to start with a CLI interface that allows me to add users to the trains server
AgitatedDove14 sorry for the late reply,
It's right after executing all the steps. So we have the following block which determines whether we run locally or remotely
if not arguments.enqueue:
    pipe.start_locally(run_pipeline_steps_locally=True)
else:
    pipe.start(queue=arguments.enqueue)
And right after we have a method that calls Task.current_task() which returns None
cluster.routing.allocation.disk.watermark.low:
🤔 is the "installed packages" part editable? good to know
Isn't it a bit risky to manually change a package version? What if it isn't compatible with the rest?