I'll check if this works tomorrow
The weirdest thing is that the execution is "completed" but it actually failed
Legit, if you have a `cached_file` (i.e. it exists and is accessible), you can return it to the caller
I agree, so shouldn't it be `if cached_file: return cached_file` instead of `if not cached_file: return cached_file`?
BTW, is the `if not cached_file: return cached_file` legit, or a bug? I mean, usually it would read `if cached_file: return cached_file`
In the larger context I'd look at how other object stores treat similar problems; I'm not that advanced in these topics.
But adding a simple `force_download` flag to the `get_local_copy` method could solve many cases I can think of. For example, I'd set it to true in my case, since I don't mind the occasional unnecessary re-download when the file is quite small (currently I always delete the local file first, which looks pretty ugly)
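Something along these lines is what I have in mind - `force_download` is my proposed name and the helper methods are assumed, this is not the existing API:

```python
import os

def get_local_copy(self, force_download=False):
    # force_download is the proposed flag; _cached_file_path and
    # _download_fresh_copy are assumed internal helpers, not real names.
    cached_file = self._cached_file_path()
    if cached_file and not force_download:
        return cached_file              # reuse the cached copy
    if cached_file and force_download:
        os.remove(cached_file)          # drop the possibly-stale copy
    return self._download_fresh_copy()  # always download when forced
```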
I might, I'll look at the internals later, because at a glance I didn't really get the logic inside `get_local_copy` ... the `if` there ends with `if ... not cached_file: return cached_file`, which from reading doesn't make much sense
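Just to spell out what confuses me, here is a minimal sketch of the two readings (function names made up, this is not the actual library code):

```python
def suspect_version(cached_file):
    if not cached_file:       # cache MISS...
        return cached_file    # ...returns the empty/None value, which reads oddly

def expected_version(cached_file):
    if cached_file:           # cache HIT: the file exists and is accessible
        return cached_file    # reuse the cached copy
    # otherwise fall through and download a fresh copy
```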
let me repay you with a nice trick
I assume that at some points in the execution, the client (where the task is running) sends JSONs to the mongo service, and that is what we see in the web UI.
Since we are talking about a case where there is no internet available, maybe these could be dumped to files/stdout and the user could insert them manually.
The manual insertion UX could be something like a CLI copy-paste or an endpoint for files - but since your UX is so good ( 🙂 ) I'm sure you'll figure this part out better
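Roughly the flow I'm imagining - the file name, event shape, and replay step are all made up for illustration:

```python
import json

OFFLINE_DUMP = "task_events.jsonl"  # hypothetical local dump file

def send_to_server(event: dict) -> None:
    """Stand-in for the real client->server transport (assumed)."""
    ...

def report_event(event: dict, online: bool) -> None:
    if online:
        send_to_server(event)
    else:
        # no internet: append the JSON to a local file for later manual import
        with open(OFFLINE_DUMP, "a") as f:
            f.write(json.dumps(event) + "\n")

# later, from a machine that can reach the server, the user replays the dump:
# for line in open(OFFLINE_DUMP):
#     send_to_server(json.loads(line))
```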
Okay, so let me get this straight: the autoscaler is basically an ever-running task (let's say on the `services` queue). Now, the actual auto-scaling and which queues exist have nothing to do with that, and are configured in the autoscale task?
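If I got it right, the autoscaler task carries a configuration roughly shaped like this - field names are approximate and the values are placeholders, not the exact schema:

```python
autoscaler_config = {
    "resource_configurations": {
        # what kind of machine to launch for each resource name
        "gpu_machine": {"instance_type": "g4dn.xlarge", "ami_id": "ami-..."},
    },
    "queues": {
        # tasks enqueued on 'gpu_queue' may spin up to 2 'gpu_machine' instances
        "gpu_queue": [("gpu_machine", 2)],
    },
    "max_idle_time_min": 15,  # shut down instances idle longer than this
}
```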
First of all, I wasn't aware that was an option - but I think it's preferable to be able to do it through the command line, because I'm developing the pipeline to be executed remotely, yet for debugging I run it locally.
Using what you showed I can obviously write it, delete it once it's ready, and rewrite it when I'm debugging or adding features - but DX-wise I think it would be nicer to trigger this functionality through the command line
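For example, a small sketch of the CLI trigger I have in mind - the flag, project, and queue names are made up:

```python
import argparse

from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument("--remote", action="store_true",
                    help="enqueue the task for an agent instead of running locally")
args = parser.parse_args()

task = Task.init(project_name="my_project", task_name="pipeline_debug")
if args.remote:
    # stop local execution here and enqueue the task for an agent to pick up
    task.execute_remotely(queue_name="default")

# ...pipeline code below runs locally when --remote is not given...
```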
glad I managed to help back in some way
The scenario I'm going for is to never run on the dev machine, so all I'll need to do once the server + agents are up is to add a `task.execute_remotely(...)` call after the `Task.init` line, and then when the script is executed on the dev machine it won't actually run, but rather enqueue itself for the agent to run?
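i.e., the whole change would be something like this (project and queue names assumed):

```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="train")
# on the dev machine this enqueues the task and (by default) exits the local
# process -- the agent then pulls it from the queue and runs the full script
task.execute_remotely(queue_name="default")

# ...actual training code below only really runs on the agent...
```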
I was sure you were on Israel time as well, sorry for the night-time thing 😄
The latest, I `curl`ed the docker-compose like 10 minutes ago
Can you tell me which API call exactly you are using for spinning up? I would like to debug and try to use `boto3` myself to spin up an instance, so I can understand where the problem is coming from
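i.e., I'd try to reproduce the spin-up with something like this - the region, AMI, instance type, and key name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",         # placeholder type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair
)
print(response["Instances"][0]["InstanceId"])
```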
No need to do it again, I have all the settings in place, I'm sure it's not a settings thing
So, just to correct myself and sum up: the AWS credentials are only in the `cloud_credentials_*` fields
I double-checked the credentials in the configuration, and they have full EC2 access
Now, I remind you that using exactly the same credentials, the autoscaler task could launch instances before
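To rule the credentials out on my side I'd sanity-check them with something like this (placeholders as above); note that a `DryRunOperation` error actually means the launch would have been allowed:

```python
import boto3
from botocore.exceptions import ClientError

# whoami: confirms which identity the credentials resolve to
print(boto3.client("sts").get_caller_identity()["Arn"])

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
try:
    # DryRun only checks permissions -- nothing is actually launched
    ec2.run_instances(ImageId="ami-0123456789abcdef0", InstanceType="t3.medium",
                      MinCount=1, MaxCount=1, DryRun=True)
except ClientError as e:
    # 'DryRunOperation' == allowed; 'UnauthorizedOperation' == the real problem
    print(e.response["Error"]["Code"])
```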