I think you are talking about separate problems - the "WARNING DIFF IS TOO LARGE" is only a UI issue, meaning you can't see the diff in the UI - correct me if I'm wrong about this
Maria seems to be saying that the execution FAILS when she has uncommitted changes, which is not the expected behavior - am I right, Maria?
Do you have any idea why that happens, SuccessfulKoala55?
Not manually. I assume that if I deleted the image and then ran docker-compose up, and I can see the pull working, it should pull the correct one
(I'm working with Maria)
Essentially, what Maria is saying is that when she has a script with uncommitted changes and executes it remotely, the script that actually runs on the remote machine does not include the uncommitted changes
e.g.:
Her git status
is clean, she makes some changes to script.py
and executes it remotely. What gets executed remotely is the original script.py
and not the modified version she has locally
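To help debug this, one thing worth checking is whether the uncommitted diff was captured on the task at all. This is a hedged sketch: `Task.init` is expected to record the git commit plus the uncommitted diff, and `export_task()` returns a plain dict of the task; the `"script"`/`"diff"` keys reflect the task structure as I understand it, and the project/task names are illustrative.

```python
# Sketch: check whether ClearML stored the local uncommitted diff on the task.
# Assumes a configured ClearML setup; names are illustrative.
from clearml import Task

task = Task.init(project_name="debug", task_name="diff-capture-check")

exported = task.export_task()  # plain dict representation of the task
# If this prints an empty string, the diff was never captured
# (as opposed to being captured but not applied on the remote machine).
print(exported.get("script", {}).get("diff", "")[:500])
```

If the diff is there but the remote run still uses the original script.py, that would point at the agent side rather than the capture side.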
is it possible to access the children tasks of the pipeline from the pipeline object?
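In case it helps, here is a hedged sketch of one way to get at them: pipeline step tasks are, as far as I understand, created with the pipeline's task as their parent, so you can query by parent ID. The project/pipeline names are illustrative, and filtering on `'parent'` is an assumption about how the steps were created.

```python
# Sketch: fetch the child tasks of a pipeline by querying on the parent field.
# Assumes a configured ClearML setup; names are illustrative.
from clearml import Task

pipeline_task = Task.get_task(project_name="examples", task_name="my-pipeline")

# query_tasks returns matching task IDs; resolve each to a Task object.
child_ids = Task.query_tasks(task_filter={"parent": pipeline_task.id})
children = [Task.get_task(task_id=tid) for tid in child_ids]
```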
This is a part of a bigger process which takes quite some time and resources, but I hope I can try this soon if it will help get to the bottom of this
Cool, now I understand the auto detection better
I believe that is why MetaFlow chose conda
as their package manager, because it can take care of these kinds of dependencies (even though I hate conda 😄 )
Maybe even a dedicated argument specifically for apt-get
packages, since it is very common to need stuff like that
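In the meantime, something like the agent's docker init script might cover this - a sketch of a clearml.conf fragment, assuming the agent runs in docker mode (package names are illustrative):

```
# clearml.conf (agent side) - commands run inside the docker container
# before the experiment environment is set up
agent {
    docker_init_bash_script = [
        "apt-get update",
        "apt-get install -y git libsm6 libxext6",
    ]
}
```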
The scenario I'm going for is never to run on the dev machine, so all I'll need to do once the server + agents are up is to add task.execute_remotely...
after the Task.init
line, and then when the script is run on the dev machine, it won't actually execute but rather enqueue itself for the agent to run it?
Continuing on this line of thought... Is it possible to call task.execute_remotely
on a CPU-only machine (a data scientist's laptop, for example) and have the agent that fetches this task run it using a GPU? I'm asking because it is mentioned that it replicates the running environment of the task creator... which is exactly what I'm not trying to do 😄
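For reference, a hedged sketch of the flow being described: the script never really runs on the CPU-only dev machine; `execute_remotely()` enqueues the task and exits the local process, and whichever clearml-agent serves that queue runs it on its own hardware (GPU or not). The queue and project names are illustrative.

```python
# Sketch: enqueue from a CPU-only machine; a GPU agent picks it up.
# Assumes a configured ClearML setup; names are illustrative.
from clearml import Task

task = Task.init(project_name="examples", task_name="train")

# Everything up to this call is recorded on the task; the local process
# stops here and the task is enqueued for an agent listening on "gpu".
task.execute_remotely(queue_name="gpu", exit_process=True)

# Code below this line only runs on the agent's machine.
# ... actual training code here ...
```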
Cool - so that means the fileserver which comes with the host will stay empty? Or is there anything else being stored there?
I double-checked the credentials in the configurations, and they have full EC2 access
Actually I removed the key pair, as you said it wasn't a must in the newer versions
No need to do it again, I have all the settings in place, I'm sure it's not a settings thing
So just to correct myself and sum up, the credentials for AWS are only in the cloud_credentials_*
I have them in two different places, once under Hyperparameters -> General