I really don't know; as you can see in my last screenshot, I've configured my base image to be 10.1
This is the pip freeze of the environment. I don't know why it differs from what the agent has... the agent only has a subset of these google libs
I'm really confused. I'm not sure what is wrong, or what the relationship is between the templates, the agent, and all of those things
In the meantime, I'm giving up on the pipeline thing and I'll write a bash script to orchestrate the execution, because I need to deliver and I don't feel this is going anywhere
On a final note, I'd love for this to work as expected. I'm not sure what you need from me. A fully reproducible example will be hard because, obviously, this is proprietary code. What ...
I think I got it; I'll ping her again if it doesn't succeed
TimelyPenguin76
As a part of a repo
I only want to save it as a template so I can later call it in a pipeline
the output above is what the agent has, it seems... obviously, on my machine I have it installed
I never installed trains on this environment
no this is from the task execution that failed
pgrep -af trains shows that there is nothing running with that name
Can you tell me which API call exactly you are using for spinning up? I would like to debug and try to use boto3 myself to spin up an instance, so I can understand where the problem is coming from
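For reference, here is a minimal boto3 sketch of spinning up an instance, assuming the autoscaler makes roughly this RunInstances call. The AMI ID, instance type, region, and key name are placeholders, not values from this conversation:

```python
def build_run_instances_params(ami_id, instance_type="t2.micro", key_name=None):
    """Build the keyword arguments for EC2 RunInstances (boto3's run_instances)."""
    params = {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,  # launch exactly one instance
        "MaxCount": 1,
    }
    if key_name:
        params["KeyName"] = key_name
    return params


def launch_instance(ami_id, region="us-east-1", **kwargs):
    import boto3  # imported here so the helper above stays usable without boto3

    # boto3 reads credentials from the environment or ~/.aws/credentials;
    # an AuthFailure at this call means AWS rejected those credentials.
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.run_instances(**build_run_instances_params(ami_id, **kwargs))
    return response["Instances"][0]["InstanceId"]
```

Calling `launch_instance` directly with the same credentials the autoscaler uses is a quick way to see whether the failure comes from AWS or from the task configuration.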
glad I managed to help back in some way
now I get this error in my Auto Scaler task:
Warning! exception occurred: An error occurred (AuthFailure) when calling the RunInstances operation: AWS was not able to validate the provided access credentials Retry in 15 seconds
Now, I remind you that, using exactly the same credentials, the auto scaler task could launch instances before
and when looking at the running task, I still see the credentials
FriendlySquid61
Just updating, I still haven't touched this.... I did not consider the time it would take me to set up the auto scaling, so I must attend other issues now, I hope to get back to this soon and make it work
Yep, the trains server is basically a docker-compose based service. All you have to do is change the ports in the docker-compose.yml file. If you followed the instructions in the docs, you should find that file at /opt/trains/docker-compose.yml. You will see that there are multiple services (apiserver, elasticsearch, redis, etc.), and in each there might be a section called ports, which states the mapping of the ports. The number on the left is ...
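To illustrate, a ports section in that file might look like the excerpt below. The service names and port numbers are assumptions based on a typical trains server setup; check your own docker-compose.yml before editing:

```yaml
# Hypothetical excerpt from /opt/trains/docker-compose.yml.
# In each "host:container" pair, the left-hand number is the port exposed
# on the host (the one you would change); the right-hand number is the
# port inside the container and should stay as-is.
services:
  apiserver:
    ports:
      - "8008:8008"
  webserver:
    ports:
      - "8080:80"
```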
but nowhere in the docs does it say anything about the permissions for the IAM
and also, in the extra_vm_bash_script variable, I have them under export TRAINS_API_ACCESS_KEY and export TRAINS_API_SECRET_KEY
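So the extra_vm_bash_script contents would be something like the sketch below; the key values are placeholders for the actual credentials:

```shell
# Hypothetical extra_vm_bash_script contents: export the trains credentials
# so the agent on the newly spun-up instance can authenticate to the server.
export TRAINS_API_ACCESS_KEY="<access-key>"
export TRAINS_API_SECRET_KEY="<secret-key>"
```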