so basically - if she has new commits locally that weren't pushed, it won't work
But if she did not commit her latest changes, and now she enqueues - it will work?
If this includes scheduling through pipelines, in my opinion there should be an option to execute a pipeline without an agent. Sometimes for development I just want to execute a pipeline on my local machine, just as I would a task...
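Something like this is what I have in mind (a minimal sketch - the project/step names are made up, and I'm assuming PipelineController exposes a start_locally() the way task-level local execution does):
```python
from clearml import PipelineController

# hypothetical pipeline for illustration only
pipe = PipelineController(name="dev-pipeline", project="examples", version="0.0.1")
pipe.add_step(name="prepare", base_task_project="examples",
              base_task_name="prepare data")
pipe.add_step(name="train", parents=["prepare"],
              base_task_project="examples", base_task_name="train model")

# instead of pipe.start(queue="services"): run the controller *and* the steps
# on this machine, no agent involved
pipe.start_locally(run_pipeline_steps_locally=True)
```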
I mean usually it would read if cached_file: return cached_file
it's like ps + grep together 😄
Maybe even a dedicated argument specifically for apt-get packages, since it is very common to need stuff like that
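In the meantime, something along these lines works for me as a workaround (a sketch - I'm assuming set_base_docker() accepts a docker_setup_bash_script list of commands):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="needs system packages")

# assuming set_base_docker() takes docker_setup_bash_script - these commands
# would run inside the container before the task itself starts
task.set_base_docker(
    docker_image="ubuntu:20.04",
    docker_setup_bash_script=[
        "apt-get update",
        "apt-get install -y libsm6 libxext6",  # example apt packages, made up
    ],
)

task.execute_remotely(queue_name="default")
```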
I might, I'll look at the internals later because at a glance I didn't really get the logic inside get_local_copy
... the if there ends with if ... not cached_file: return cached_file, which from reading doesn't make much sense
BTW, is the if not cached_file: return cached_file legit, or a bug?
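For reference, this is what I'd expect the cache check to look like vs. what I'm reading (a simplified sketch of the logic, not the actual clearml source - the cache object here is made up):
```python
# what I'd expect: return early only on a cache hit
def get_local_copy(url, cache):
    cached_file = cache.get(url)
    if cached_file:              # cache hit -> reuse it
        return cached_file
    return cache.download(url)   # cache miss -> fetch and store

# what the source seems to say: returns None on a cache miss
def get_local_copy_suspected(url, cache):
    cached_file = cache.get(url)
    if not cached_file:          # cache miss...
        return cached_file       # ...returns None instead of downloading?
    return cached_file
```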
Oh I get it, that also makes sense with the docs directing this at inference jobs and avoiding GPU - because of the 1-N thing
the worst part of debugging this is waiting for Docker to install tensorflow each time, over and over again 😞
AgitatedDove14 this is still not fixed for me, even though I upgraded to server 1.1... Does the client require an update as well? Should I open an issue about this?
so putting the docs aside, what permissions should I give to the IAM associated with trains' autoscaler?
I think you are talking about separate problems - the "WARNING DIFF IS TOO LARGE" is only a UI issue, meaning you can't see the diff in the UI - correct me if I'm wrong about this
Maria seems to be saying that the execution FAILS when she has uncommitted changes, which is not the expected behavior - am I right, Maria?
(I'm working with Maria)
essentially, what Maria says is that when she has a script with uncommitted changes and executes it remotely, the script that actually runs on the remote machine is without the uncommitted changes
e.g.: her git status is clean, she makes some changes to script.py and executes it remotely. What gets executed remotely is the original script.py and not the modified version she has locally
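To make this concrete, this is roughly how we reproduce it (a sketch - the queue name is made up, and I'm assuming the captured uncommitted diff is stored on the task under script.diff):
```python
from clearml import Task

# --- on Maria's machine: script.py has uncommitted local edits ---
task = Task.init(project_name="examples", task_name="repro uncommitted diff")
# execute_remotely() enqueues the task and exits the local process;
# clearml should have snapshotted the uncommitted changes (a `git diff`)
# onto the task before this point
task.execute_remotely(queue_name="default")

# --- in a separate session: inspect what diff actually got stored ---
# (task id copied from the UI; script.diff is where I assume the
# uncommitted diff lives on the task object)
stored = Task.get_task(task_id="<task-id>")
print(stored.data.script.diff or "<empty - matches what we observe>")
```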
I assume that at some point in the execution, the client (where the task is running) is sending JSONs to the mongo service, and that is what we see in the web UI.
Since we are talking about a case where there is no internet available, maybe these could be dumped into files/stdout so the user can manually insert them.
The manual insertion UX could be something like a CLI copy-paste or an endpoint for files - but since your UX is so good ( 🙂 ) I'm sure you'll figure this part out better
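For what it's worth, the flow I'm imagining would look something like this (a sketch - I'm assuming an offline API shaped like Task.set_offline / Task.import_offline_session, and the session path is hypothetical):
```python
from clearml import Task

# --- on the air-gapped machine: record everything to local files ---
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="air-gapped run")
task.get_logger().report_scalar("loss", "train", value=0.1, iteration=0)
# ...training runs; everything is written to a local session folder/zip
task.close()

# --- later, on a machine that can reach the server: manual insertion ---
Task.import_offline_session("/path/to/offline_session.zip")
```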
alabaster==0.7.12
appdirs==1.4.4
apturl==0.5.2
attrs==21.2.0
Babel==2.9.1
bcrypt==3.1.7
blinker==1.4
Brlapi==0.7.0
cachetools==4.0.0
certifi==2019.11.28
chardet==3.0.4
chrome-gnome-shell==0.0.0
clearml==1.0.5
click==8.0.1
cloud-sptheme==1.10.1.post20200504175005
cloudpickle==1.6.0
colorama==0.4.3
command-not-found==0.3
Another thing I noticed: it now happens on my personal computer. When I execute the same pipeline from the exact same commit with the exact same data on another host, it works without these problems
Cool, now I understand the auto detection better
yeah I guessed so
what should I paste here to diagnose it?