
Hey, I tried doing that but sadly it doesn't seem to work. As suggested by the ECR docs, I added:
`aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <ECR URI>`
to the `extra_vm_bash_script` in the config file. I even added a `docker pull`, which I think worked (because it took much longer for the instances to spin up), but I still got the same error message 😞 Is there any way to debug these sessions through ClearML? Thanks!
Something needs to run the autoscaler; I thought it would be the machine that runs the services queue, no?
When I ran the script it autogenerated the YAML, so should I manually copy it to the remote services agents?
So apparently the NVIDIA AMI https://aws.amazon.com/marketplace/pp/prodview-e7zxdqduz4cbs doesn't have the aws-cli installed. So I install it in the `extra_vm_bash_script`, and now it wants a configuration. Is there any way to get that from the ENV vars you create? Do you think I should create my own AMI just for this?
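For what it's worth, the aws-cli reads credentials from the environment, so if the autoscaler exports the standard AWS variables on the instance no separate `aws configure` run should be needed. A quick check along those lines (a sketch; the variable names are the standard AWS ones, nothing ClearML-specific):
```
import os
import subprocess

# Sketch of a quick check, assuming the standard AWS variable names
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION); the aws-cli
# reads these from the environment, so no `aws configure` run should be needed
# if the autoscaler exports them on the instance.
for var in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"):
    print(var, "is set" if os.environ.get(var) else "is MISSING")

# If the variables are present, this prints the caller identity, and the ECR
# login line from the extra_vm_bash_script should authenticate the same way.
subprocess.run(["aws", "sts", "get-caller-identity"], check=True)
```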
Hey AgitatedDove14 thanks, that works! The docker is now up and running, great success.
I have a follow-up, maybe you can help debug. Now for some reason `git clone` doesn't work through the agent, but if I log in to the machine myself and run the same command that fails in the log, it works. The error I see is:
` cloning: git@gitlab.com:<repo_path>
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Reposito...
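One way to narrow this down is to run the same SSH authentication check from inside the environment the agent actually uses (e.g. inside its docker container, where the host's SSH keys may not be mounted). A rough diagnostic sketch, not taken from the thread:
```
import subprocess

# Rough diagnostic (not from the thread): run this from the same environment the
# agent uses, e.g. inside the agent's docker container, to check whether the SSH
# key that works for the interactive login is visible there too. GitLab greets
# the authenticated user on success and prints a permission error otherwise.
result = subprocess.run(
    ["ssh", "-o", "StrictHostKeyChecking=no", "-T", "git@gitlab.com"],
    capture_output=True,
    text=True,
)
print("exit code:", result.returncode)
print(result.stdout or result.stderr)
```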
Thanks! A follow-up question: can I make the steps in the pipeline use the latest commit in the branch?
Is there an option to do this from a pipeline, from within the `add_step` method? Can you link a reference to cloning and editing a task programmatically?
Nope, it works well for the pipeline when I don't choose to `continue_pipeline`.
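For reference, a minimal sketch of both ideas, assuming the standard `PipelineController.add_step` and `Task.clone` calls; the project and task names are placeholders, and clearing `script.version_num` via `task_overrides` is an assumption here (not confirmed in the thread) for making a step track the branch head instead of a pinned commit:
```
from clearml import Task
from clearml.automation import PipelineController

# Minimal sketch; the project and task names are placeholders, not from the thread.
pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")

pipe.add_step(
    name="run_experiment",
    base_task_project="examples",
    base_task_name="experiment template",
    # task_overrides edits fields on the cloned step task; clearing the pinned
    # commit (script.version_num) is assumed here to make the agent check out
    # the latest commit on the given branch when the step runs.
    task_overrides={
        "script.branch": "main",
        "script.version_num": "",
    },
)

# Cloning and editing a task programmatically, outside of add_step:
template = Task.get_task(project_name="examples", task_name="experiment template")
cloned = Task.clone(source_task=template, name="experiment (latest commit)")
cloned.set_parameter("General/epochs", 20)  # example edit before enqueueing
Task.enqueue(cloned, queue_name="default")
```
An alternative is to clone and edit the template task first and then hand its ID to `add_step` via `base_task_id`.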
I have access to the machine using SSH from my computer.
There doesn't seem to be any other error in debug mode.
` Remote machine is ready
Setting up connection to remote session
Starting SSH tunnel
SSH tunneling failed, retrying in 3 seconds
Starting SSH tunnel `
What about using ENV variables? Is it possible to override the config file's credentials?
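If this refers to the ClearML credentials in `clearml.conf` (an assumption), the SDK and agent also read them from environment variables, which take precedence over the file. A small sketch with the standard variable names:
```
import os

# Sketch assuming the standard ClearML variable names; these are read by the SDK
# and the agent and take precedence over the credentials in clearml.conf.
os.environ["CLEARML_API_HOST"] = "https://api.clear.ml"      # placeholder server
os.environ["CLEARML_API_ACCESS_KEY"] = "<access_key>"
os.environ["CLEARML_API_SECRET_KEY"] = "<secret_key>"

from clearml import Task

# With the variables set before the first ClearML call, Task.init authenticates
# with them rather than with the values from the config file.
task = Task.init(project_name="examples", task_name="env var credentials check")
```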
Also, I tried the `continue_pipeline` option, but it didn't work, as it couldn't parse the reference to the previous step that ran: `ValueError: Could not parse reference '${run_experiment.models.output.-1.url}', step run_experiment could not be found`